WorldWideScience

Sample records for aer benchmark specification

  1. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. The report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  2. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and thermal-hydraulics coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of the insufficient measured data; one alternative is a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  3. Continuation of the VVER burnup credit benchmark. Evaluation of CB1 results, overview of CB2 results to date, and specification of CB3

    International Nuclear Information System (INIS)

    A calculational benchmark focused on VVER-440 burnup credit, similar to that of the OECD/NEA/NSC Burnup Credit Benchmark Working Group, was proposed at the 1996 AER Symposium. Its first part, CB1, was specified there, whereas the second part, CB2, was specified a year later at the 1997 AER Symposium in Zittau. A final statistical evaluation of the CB1 results is presented, and the CB2 results obtained to date are summarized. Further, it is proposed that the effect of an axial burnup profile of VVER-440 spent fuel on criticality (the 'end effect') be studied in the CB3 benchmark problem of an infinite array of VVER-440 spent fuel rods. (author)

  4. Ensemble approach to predict specificity determinants: benchmarking and validation

    Directory of Open Access Journals (Sweden)

    Panchenko Anna R

    2009-07-01

    Full Text Available Abstract Background It is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods that predict specificity determining sites. Subsequently, the three best performing methods were applied to identify new potential specificity determining sites through an ensemble approach and the common agreement of their prediction results. Results It was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy. For example, we found that, at smaller distances, the more reliable the prediction method is, the closer the predicted specificity determining sites are to each other and to the ligand. Conclusion We observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that the majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal targets for mutagenesis experiments.

  5. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons behind that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. The main causes suppressing parallel performance turn out to be the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
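
    As an illustration of the kind of Level 2 analysis described above, the sketch below fits a simple communication-cost model in which the start-up time grows with both the number of processors and the number of particles. The functional form, coefficients and problem sizes are assumptions chosen for illustration, not values from the report.

        # Assumed Level-2-style cost model (illustrative, not from the report):
        # per-step communication time = start-up(P, n) + message volume / bandwidth.
        def comm_time(P, n_particles, t0=5e-6, a=1e-10, bytes_per_particle=48, bandwidth=40e6):
            startup = t0 * P + a * P * n_particles          # start-up grows with P and with n
            volume = bytes_per_particle * n_particles / P   # data each node must exchange
            return startup + volume / bandwidth

        def parallel_efficiency(P, n_particles, t_compute_serial=2.0):
            """Ideal compute scaling charged with the modelled communication cost."""
            t_parallel = t_compute_serial / P + comm_time(P, n_particles)
            return (t_compute_serial / t_parallel) / P

        for P in (1, 4, 16, 64):
            print(P, round(parallel_efficiency(P, n_particles=100_000), 3))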

  6. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. ...

  7. Adverse Event Reporting System (AERS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Adverse Event Reporting System (AERS) is a computerized information database designed to support the FDA's post-marketing safety surveillance program for all...

  8. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    Full Text Available The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most usability problems. However, changes to an existing Web site should be evaluated through a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of the different versions of a Web site and an orientation towards general usability standards. The questionnaire is also open for qualitative data. The methodology is explained using the digital library services of the ZBW.

  9. Results of the isotopic concentrations of VVER calculational burnup credit benchmark No. 2(CB2)

    International Nuclear Information System (INIS)

    Results are presented for the nuclide concentrations of VVER Burnup Credit Benchmark No. 2 (CB2), calculated at the Nuclear Technology Center of Cuba with the available codes and libraries. The CB2 benchmark specification, the second phase of the VVER burnup credit benchmark, is summarized. The CB2 benchmark focuses on the VVER burnup credit study proposed at the 1997 AER Symposium. The obtained results are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 and other codes were used for the calculation of the spent fuel concentrations. (author)

  10. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  11. AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6

    OpenAIRE

    Collins, William J; Lamarque, Jean-François; Schulz, Michael; Boucher, Olivier; Eyring, Veronika; Hegglin, Michaela I.; Maycock, Amanda; Myhre, Gunnar; Prather, Michael; Shindell, Drew; Smith, Steven J.

    2016-01-01

    The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically-reactive gases. These are specifically near-term climate forcers (NTCFs: tropospheric ozone and aerosols, and their precursors), methane, nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions: 1. How have anthropogeni...

  12. Results of the isotopic concentrations of VVER calculational burnup credit benchmark no. 2(cb2

    International Nuclear Information System (INIS)

    The characterization of irradiated fuel materials is becoming more important with the increasing use of nuclear energy in the world. The purpose of this document is to present the results of the nuclide concentrations calculated for VVER Burnup Credit Benchmark No. 2 (CB2). The calculations were performed at the Nuclear Technology Center of Cuba. The CB2 benchmark specification, the second phase of the VVER burnup credit benchmark, is summarized in [1]. The CB2 benchmark focuses on the VVER burnup credit study proposed at the 1997 AER Symposium [2]. It should provide a comparison of the ability of various code systems and data libraries to predict VVER-440 spent fuel isotopic concentrations using depletion analysis. This phase of the benchmark calculations is still in progress. CB2 should be finished by summer 1999 and the evaluated results could be presented at the next AER Symposium. The obtained results are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 [3] was used for the calculation of the spent fuel concentrations. The depletion analysis was performed for VVER-440 irradiated fuel assemblies with an in-core irradiation time of 3 years, a burnup of 30,000 MWd/tU, and after-discharge cooling times of 0 and 1 year. This work also includes results obtained by other codes [4].
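
    To make the cooling-time dependence mentioned above concrete, here is a minimal sketch that applies pure radioactive decay to a few nuclide concentrations over the 0 and 1 year cooling times used in the comparison. The nuclides, half-lives and starting concentrations are illustrative assumptions, and the sketch ignores the decay chains and neutron reactions that a depletion code such as ORIGEN2 actually tracks.

        import math

        # Illustrative post-discharge inventory (arbitrary units); values are assumptions.
        half_lives_years = {"Cs-134": 2.065, "Ru-106": 1.02, "Ce-144": 0.78}
        n0 = {"Cs-134": 1.0, "Ru-106": 1.0, "Ce-144": 1.0}

        def decay(n_initial, half_life_years, cooling_years):
            """Pure exponential decay: N(t) = N0 * exp(-ln(2) * t / T_half)."""
            lam = math.log(2.0) / half_life_years
            return n_initial * math.exp(-lam * cooling_years)

        for cooling in (0.0, 1.0):  # cooling times compared in CB2
            print(f"cooling time {cooling:.0f} y:")
            for nuclide, t_half in half_lives_years.items():
                print(f"  {nuclide}: {decay(n0[nuclide], t_half, cooling):.3f}")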

  13. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  14. Autopistas y aeródromos

    Directory of Open Access Journals (Sweden)

    Rodríguez, Georges

    1957-11-01

    Full Text Available Current status, construction and projects of some of the aerodrome runways and major traffic routes in France. The author also elaborates on some considerations of execution, specific characteristics and heavy auxiliary machinery.

  15. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
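
    As a rough sketch of how such a reported spectrum and covariance matrix are typically used, the lines below compute an integral reaction rate as the group-wise product of a dosimeter cross section with the spectrum, and propagate the spectrum covariance to that integral through a quadratic form. The three-group fluxes, cross sections and covariances are invented placeholders, not values from the ACRR report.

        import numpy as np

        # Hypothetical 3-group representation (real evaluations use many more groups).
        phi = np.array([3.0e13, 8.0e13, 2.0e13])        # group fluxes (n/cm^2/s), assumed
        sigma = np.array([1.2e-24, 0.4e-24, 0.05e-24])  # dosimeter cross sections (cm^2), assumed

        # Assumed relative covariance of the spectrum: 10% uncorrelated plus a small
        # positive correlation between neighbouring groups.
        rel_cov = 0.10**2 * np.eye(3) + 0.03**2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
        cov_phi = rel_cov * np.outer(phi, phi)

        # Integral reaction rate a = sigma . phi and its propagated variance sigma^T C sigma.
        a = sigma @ phi
        var_a = sigma @ cov_phi @ sigma
        print(f"reaction rate = {a:.3e} per atom per second, relative uncertainty = {np.sqrt(var_a) / a:.1%}")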

  16. SMART- Small Motor AerRospace Technology

    Science.gov (United States)

    Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.

    2004-11-01

    This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of microthruster arrays realised by semiconductor technology on silicon wafers. The SMART system is assembled by gluing together three main modules: combustion chambers, igniters and nozzles. The module is then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. A complete assembled module composed of 25 micro-thrusters with a 3 x 5 nozzle is presented. The measurements showed a thrust of 129 mN and an impulse of 56.8 mNs, burning about 70 mg of propellant, for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.

  17. Estudio de impacto ambiental de un aeródromo

    Directory of Open Access Journals (Sweden)

    Gómez Orea, Domingo

    1996-04-01

    Full Text Available Airports and aerodromes are transport infrastructures which, apart from contributing to the mobility of people and goods, favor social development, since they promote new activities, stimulate local initiatives and revalue bordering areas. This article presents a synthesis of the Environmental Impact Study of a future aerodrome, whose design project is under way at present. The aerodrome is subject to the EIA administrative procedure due to the currently applicable specific legislation, R.D. 1302/86. The technical document submitted for study is the special plan of the aerodrome. One of the basic criteria in the conception of airports and private airfields is compatibility with the habitability of the environment as well as with the ecological and landscape conditions. This criterion should play a part in the orientation of the runways, the trajectory of the take-off and landing maneuvers, and even in the location and design of the parking areas, hangars and other facilities. This suggests that the design should be conceived with environmental sensibility right from the initial stages, without leaving the responsibility for this issue to the environmental impact study. The present study takes its own approach to presentation: the reader will find the methodological aspects more useful than the technical data, which are of consequence only in the handling of the different project stages. Also considered important for the reader are the aspects which allowed the team of editors to form their criteria on the issues of cost, the environmental benefits of the design, its acceptability, etc. The methodology applied is a classical one, in accordance with the requirements of the EIA regulations.

  18. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    Science.gov (United States)

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams, and the large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA to derive an aquatic life benchmark for specific conductivity for basin-scale application, and may provide useful information for water pollution control and management.
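
    A minimal sketch of the calculation described above, assuming the common convention in which the benchmark (HC05) is the 5th percentile of the genus-level XC95 distribution, obtained by linear interpolation between the two bracketing order statistics. The XC95 values below are invented for illustration.

        import numpy as np

        def hc05(xc95_values, p=0.05):
            """Benchmark as the p-th quantile of genus XC95 values, using 2-point
            linear interpolation between the bracketing order statistics."""
            xs = np.sort(np.asarray(xc95_values, dtype=float))
            rank = p * (len(xs) - 1)                 # fractional 0-based rank
            lo = int(np.floor(rank))
            frac = rank - lo
            hi = min(lo + 1, len(xs) - 1)
            return xs[lo] + frac * (xs[hi] - xs[lo])

        # Hypothetical genus-level XC95 values in uS/cm (made up for illustration).
        xc95 = [210, 230, 250, 260, 280, 310, 340, 360, 400, 450,
                470, 520, 560, 600, 640, 700, 760, 820, 900, 1000]
        print(f"benchmark = {hc05(xc95):.0f} uS/cm")   # same convention as np.percentile(xc95, 5)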

  19. VLSI implementation of a 2.8 Gevent/s packet based AER interface with routing and event sorting functionality

    Directory of Open Access Journals (Sweden)

    Stefan Scholze

    2011-10-01

    Full Text Available State-of-the-art large scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in an FPGA-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behaviour of neuromorphic benchmarks. The specialized, dedicated AER communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a factor of 25-50 higher event transmission rate than other current neuromorphic communication infrastructures.
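
    As a rough illustration of packet-based address-event communication, the sketch below packs (timestamp, target address) events into fixed-width binary words and unpacks them again. The word layout and field widths are assumptions made for illustration and do not reproduce the actual packet format of the ICs described above.

        import struct

        # Assumed layout: 16-bit timestamp plus 16-bit target neuron address,
        # packed big-endian as one 32-bit word per event.
        EVENT = struct.Struct(">HH")

        def pack_events(events):
            """Serialize a list of (timestamp, address) events into bytes."""
            return b"".join(EVENT.pack(ts & 0xFFFF, addr & 0xFFFF) for ts, addr in events)

        def unpack_events(buf):
            """Recover (timestamp, address) events from a packed byte buffer."""
            return [EVENT.unpack_from(buf, off) for off in range(0, len(buf), EVENT.size)]

        spikes = [(12, 1034), (15, 87), (15, 4090), (23, 1034)]
        wire = pack_events(spikes)
        print(len(wire), "bytes on the wire ->", unpack_events(wire))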

  20. Efeitos do estado e especificidade do treinamento aeróbio na relação %VO2max versus %FCmax durante o ciclismo Effects of the state and specificity of aerobic training on the %VO2max versus %HRmax ratio during cycling

    Directory of Open Access Journals (Sweden)

    Fabrizio Caputo

    2005-01-01

    Full Text Available OBJECTIVE: To determine the effects of aerobic training status and specificity on the relationship between the percentage of maximal oxygen uptake (%VO2max) and the percentage of maximal heart rate (%HRmax) during incremental exercise on a cycle ergometer. METHODS: Seven runners, 9 cyclists, 11 triathletes, and 12 sedentary individuals, all male and apparently healthy, performed an incremental test to exhaustion on a cycle ergometer. Linear regressions between %VO2max and %HRmax were determined for each individual. Based on these regressions, the %HRmax corresponding to given %VO2max values (50, 60, 70, 80, and 90%) was calculated for each participant. RESULTS: No significant differences were found between the groups in %HRmax for any of the %VO2max values assessed. Analyzing the volunteers as a single group, the mean %HRmax corresponding to 50, 60, 70, 80, and 90% VO2max was 67, 73, 80, 87, and 93%, respectively. CONCLUSION: In the groups analyzed, the relationship between %VO2max and %HRmax during incremental cycling exercise does not depend on aerobic training status or specificity.

  1. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    ... emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to uncharacterized MHC molecules. These pan-specific HLA predictors have not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web-accessible predictors, NetMHCpan, Adaptive-Double-Threading (ADT), and KISS, were evaluated. The performance of the pan-specific predictors was also compared to a well performing allele-specific MHC class I predictor, NetMHC, as well as a consensus approach integrating the predictions from the NetMHC and Net...

  2. AER Working Group D on VVER safety analysis minutes of the meeting in Rez, Czech Republic 18-20 May 1998

    International Nuclear Information System (INIS)

    AER Working Group D on VVER reactor safety analysis held its seventh meeting in Hotel Vltava in Rez near Prague during the period 18-20 May 1998. There were altogether 11 participants from 8 member organisations. The coordinator for the working group, Mr. P. Siltanen (IVO) served as chairman. In addition to the general information exchange on recent activities, the topics of the meeting included: First review of solutions to the 3-dimensional AER Dynamic Benchmark Problem No. 5 on a steam line break accident. This benchmark involves a break of the main steam header. Safety analysis of reactivity events. Recent code development work and fuel behaviour. Coolant mixing calculations and experiments related to diluted slugs. A list of participants and a list of handouts distributed at the meeting are attached to the minutes. (author)

  3. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model is to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods only need one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. As for residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing the reliably modeled residues from the unreliable ones, as measured by ROC analysis; and our SdA-based method Wang_deep_1 has achieved the highest accuracy, 0.77, compared to SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well in terms of certain measurements. PMID:26763289
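
    The evaluation described above rests on Pearson correlation and ROC analysis of per-residue quality predictions. The sketch below shows one way such metrics can be computed for a single model, using invented predicted and observed per-residue errors rather than any CASP11 data; the 3.8 Angstrom reliability cutoff is likewise an assumption.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Hypothetical per-residue errors: an "observed" distance error (Angstrom)
        # and a noisy predicted error from some single-model method.
        observed = rng.gamma(shape=2.0, scale=2.0, size=200)
        predicted = observed + rng.normal(scale=2.0, size=200)

        # Global agreement between predicted and observed per-residue errors.
        r, _ = pearsonr(predicted, observed)

        # ROC analysis: residues with observed error below 3.8 A count as reliably
        # modeled; smaller predicted error should rank them higher (hence the minus).
        reliable = (observed < 3.8).astype(int)
        auc = roc_auc_score(reliable, -predicted)

        print(f"Pearson r = {r:.2f}, ROC AUC = {auc:.2f}")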

  5. A procedure for benchmarking specific full-scale activated sludge plants used for carbon and nitrogen removal

    NARCIS (Netherlands)

    Abusam, A.; Keesman, K.J.; Spanjers, H.; Straten, van G.; Meinema, K.

    2002-01-01

    To enhance development and acceptance of new control strategies, a standard simulation benchmarking methodology to evaluate the performance of wastewater treatment plants has recently been proposed. The proposed methodology is, however, for a typical plant and typical loading and environmental conditions...

  6. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  7. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  8. CB2 result evaluation (VVER-440 burnup credit benchmark)

    International Nuclear Information System (INIS)

    The second portion of the four-piece international calculational benchmark on the VVER burnup credit (CB2) prepared in the collaboration with the OECD/NEA/NSC Burnup Credit Criticality Benchmarks Working Group and proposed to the AER research community has been evaluated. The evaluated results of calculations performed by analysts from Cuba, the Czech Republic, Finland, Germany, Russia, Slovakia and the United Kingdom are presented. The goal of this study is to compare isotopic concentrations calculated by the participants using various codes and libraries for depletion of the VVER-440 fuel pin cell. No measured values were available for the comparison. (author)

  9. Information about AER WG A on improvement, extension and validation of parametrized few-group libraries for VVER 440 and VVER 1000

    International Nuclear Information System (INIS)

    The joint nineteenth meeting of AER Working Group A on 'Improvement, extension and validation of parameterized few-group libraries for WWER-440 and WWER-1000' and AER Working Group B on 'Core design' was hosted by VUJE a. s. in Modra - Harmonia (Slovakia) from 20 to 22 April 2010. Altogether 12 participants from 8 member organizations were present and 9 papers were presented (8 of them in written form). The objectives of the WG A meeting are issues connected with spectral calculations and the preparation of few-group libraries, their accuracy and validation. Presentations were devoted to some aspects of transport and diffusion calculations and to the benchmark dealing with the WWER-1000 core periphery power tilt. Tamas Parko (co-authors Istvan Pos and Sandor Patai Szabo) described 'Application of Discontinuity factors in C-PORCA 7 code', Radoslav Zajac (co-authors Petr Darilek and Vladimir Necas) spoke about 'Fast Reactor Nodalisation in HELIOS Code', Gabriel Farkas presented 'Calculation of Spatial Weighting Functions of Ex-Core Neutron Detectors for WWER-440 Using Monte Carlo Approach', and Daniel Sprinzl (co-authors Vaclav Krysl, Pavel Mikolas and Jiri Svarny) provided the definition of a benchmark in ''MIDICORE' WWER-1000 core periphery power tilt benchmark proposal'. (Author)

  10. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  11. Signal detection in FDA AERS database using Dirichlet process.

    Science.gov (United States)

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the recent two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety data. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely, the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models, through an intensive simulation study using a Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large amount of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
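
    The sketch below is only meant to illustrate the ingredients named above, not the authors' estimation procedure: it draws a truncated stick-breaking realization of a Dirichlet process to act as a nonparametric prior over reporting rates, then weights its atoms by the Poisson likelihood of a few invented drug-adverse-event counts against their expected counts. The precision parameter, baseline distribution and data are all assumptions.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(42)

        def stick_breaking(alpha, base_sampler, n_atoms=200):
            """Truncated stick-breaking draw from DP(alpha, G0): weights and atoms."""
            betas = rng.beta(1.0, alpha, size=n_atoms)
            weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
            return weights, base_sampler(n_atoms)

        # Assumed precision parameter and Gamma(1, 1) baseline distribution G0.
        weights, atoms = stick_breaking(2.0, lambda n: rng.gamma(1.0, 1.0, size=n))

        # Hypothetical observed counts n and expected counts E for a few drug-event
        # pairs; the model form is n ~ Poisson(lambda * E) with lambda the reporting rate.
        observed = np.array([12, 3, 40, 1])
        expected = np.array([5.0, 3.2, 8.1, 1.4])

        # Re-weight the discrete DP atoms by the Poisson likelihood of each pair
        # (a crude, single-observation illustration of how the DP prior enters).
        for n, e in zip(observed, expected):
            post = weights * poisson.pmf(n, atoms * e)
            post /= post.sum()
            print(f"n={n:>3}, E={e:4.1f}, posterior mean reporting rate ~ {post @ atoms:.2f}")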

  12. Discussion forum on electron beam instruments AERE Harwell

    International Nuclear Information System (INIS)

    The purpose of this catalogue is to provide a source of information on the equipment available at AERE Harwell to the nuclear and non-nuclear scientist. The original aim, to provide data on electron/proton beam instruments, has been revised to include optical devices and ancillary preparatory equipment. The intention is to enable prospective users to have a contact who can provide further detailed information, although it must be recognised that work on certain projects completely fills the time available. This publication, first issued as a catalogue in January 1975, has been updated to August 1980, and it is intended that it should form part of a similar publication incorporating details of similar equipment available throughout the UKAEA. (author)

  13. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    Thomas Höhne

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam generator...

  14. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
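
    The pooled-versus-protocol-specific comparison described above is essentially a grouped summary of CTDIvol values. The sketch below shows the kind of aggregation involved, using a small invented table rather than the study's dose-management data.

        import pandas as pd

        # Hypothetical CTDIvol records (mGy); protocol names follow the abstract,
        # the values are invented for illustration.
        records = pd.DataFrame({
            "protocol": ["Routine Brain", "Routine Brain", "Sinus", "Sinus",
                         "Facial/Mandible", "CTA Brain", "Routine Brain", "Sinus"],
            "ctdi_vol": [52.0, 48.5, 23.1, 26.4, 21.7, 44.9, 55.2, 22.0],
        })

        # Pooled benchmark: one mean/SD across every head exam (large spread).
        pooled = records["ctdi_vol"].agg(["mean", "std", "min", "max"])

        # Protocol-specific benchmarks: much less variability within each group.
        by_protocol = records.groupby("protocol")["ctdi_vol"].agg(["count", "mean", "std"])

        print("pooled:\n", pooled.round(1), "\n")
        print("per protocol:\n", by_protocol.round(1))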

  16. Microbiota aeróbia conjuntival nas conjuntivites adenovirais

    Directory of Open Access Journals (Sweden)

    Nakano Eliane Mayumi

    2002-01-01

    Full Text Available Purpose: To study the aerobic conjunctival microbiota in patients with clinical signs of acute viral conjunctivitis. Methods: Thirty patients between 18 and 40 years of age with adenoviral conjunctivitis and 30 patients without the disease underwent collection of conjunctival material for culture. Patients with adenoviral conjunctivitis were examined within 3 days of the onset of symptoms. Cultures were performed on blood agar and chocolate agar media. Patients using topical or systemic medication, contact lens wearers, and those with previous ocular disease or systemic disease were excluded. Results: Conjunctival cultures were positive significantly more often in patients with adenoviral conjunctivitis (33.3%, with Haemophilus influenzae in 50% and Streptococcus pneumoniae in 50%) than in the control group (6.6%, coagulase-negative Staphylococcus). The conjunctivitis patients with positive cultures did not differ from those with negative cultures in any of the clinical criteria analyzed. Conclusion: Patients with adenoviral conjunctivitis showed a higher frequency of positive conjunctival cultures than normal controls. Patients with adenoviral conjunctivitis and positive cultures had a clinical course similar to that of patients with negative cultures. The agents isolated from the conjunctival microbiota in the conjunctivitis group differed from those observed in the normal group. However, the culture results showed no correlation with the clinical course.

  17. Fusion Welding of AerMet 100 Alloy

    Energy Technology Data Exchange (ETDEWEB)

    ENGLEHART, DAVID A.; MICHAEL, JOSEPH R.; NOVOTNY, PAUL M.; ROBINO, CHARLES V.

    1999-08-01

    A database of mechanical properties for weldment fusion and heat-affected zones was established for AerMet® 100 alloy, and a study of the welding metallurgy of the alloy was conducted. The properties database was developed for a matrix of weld processes (electron beam and gas-tungsten arc), welding parameters (heat inputs) and post-weld heat treatment (PWHT) conditions. In order to ensure commercial utility and acceptance, the matrix was commensurate with commercial welding technology and practice. Second, the mechanical properties were correlated with a fundamental understanding of microstructure and microstructural evolution in this alloy. Finally, assessments of optimal weld process/PWHT combinations for confident application of the alloy in probable service conditions were made. The database of weldment mechanical properties demonstrated that a wide range of properties can be obtained in welds in this alloy. In addition, it was demonstrated that acceptable welds, some with near base metal properties, could be produced from several different initial heat treatments. This capability provides a means for defining process parameters and PWHTs to achieve appropriate properties for different applications, and provides useful flexibility in design and manufacturing. The database also indicated that an important region in welds is the softened region which develops in the heat-affected zone (HAZ), and analysis within the welding metallurgy studies indicated that the development of this region is governed by a complex interaction of precipitate overaging and austenite formation. Models and experimental data were therefore developed to describe overaging and austenite formation during thermal cycling. These models and experimental data can be applied to essentially any thermal cycle, and provide a basis for predicting the evolution of microstructure and properties during thermal processing.

  18. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  19. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  20. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    ... in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  1. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
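
    The execution-time estimation idea summarized above can be captured in a few lines: characterize the machine by the time per abstract operation, characterize the program by how many of each operation it performs, and estimate the runtime as their dot product. The operation categories, timings and counts below are invented placeholders, not the actual Fortran abstract-machine parameters from this work.

        import numpy as np

        # Machine characterization: seconds per abstract operation (assumed values).
        op_time = {"flop_add": 2e-9, "flop_mul": 3e-9, "mem_load": 5e-9,
                   "mem_store": 6e-9, "branch": 1e-9}

        # Program characterization: dynamic operation counts for a hypothetical benchmark.
        op_count = {"flop_add": 4.0e9, "flop_mul": 3.5e9, "mem_load": 6.0e9,
                    "mem_store": 2.0e9, "branch": 1.0e9}

        # Predicted execution time for this machine/program combination.
        ops = sorted(op_time)
        t = np.dot([op_time[o] for o in ops], [op_count[o] for o in ops])
        print(f"estimated execution time: {t:.2f} s")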

  2. dackel acts in the ectoderm of the zebrafish pectoral fin bud to maintain AER signaling.

    Science.gov (United States)

    Grandel, H; Draper, B W; Schulte-Merker, S

    2000-10-01

    Classical embryological studies have implied the existence of an apical ectodermal maintenance factor (AEMF) that sustains signaling from the apical ectodermal ridge (AER) during vertebrate limb development. Recent evidence suggests that AEMF activity is composed of different signals involving both a sonic hedgehog (Shh) signal and a fibroblast growth factor 10 (Fgf10) signal from the mesenchyme. In this study we show that the product of the dackel (dak) gene is one of the components that acts in the epidermis of the zebrafish pectoral fin bud to maintain signaling from the apical fold, which is homologous to the AER of tetrapods. dak acts synergistically with Shh to induce fgf4 and fgf8 expression but independently of Shh in promoting apical fold morphogenesis. The failure of dak mutant fin buds to progress from the initial fin induction phase to the autonomous outgrowth phase causes loss of both AER and Shh activity, and subsequently results in a proximodistal truncation of the fin, similar to the result obtained by ridge ablation experiments in the chicken. Further analysis of the dak mutant phenotype indicates that the activity of the transcription factor engrailed 1 (En1) in the ventral non-ridge ectoderm also depends on a maintenance signal probably provided by the ridge. This result uncovers a new interaction between the AER and the dorsoventral organizer in the zebrafish pectoral fin bud.

  3. A massa gorda de risco afeta a capacidade aeróbia de jovens adolescentes

    Directory of Open Access Journals (Sweden)

    Luís Massuça

    2013-12-01

    Full Text Available OBJECTIVE: To study the effect of sex and the effects of age and fat mass on the aerobic capacity of young adolescents. METHODS: The 621 secondary school students participating in the study (14 to 17 years; girls: n = 329, age 15.84 ± 0.92 years; boys: n = 292, age 15.82 ± 0.87 years) were assessed in two categories: morphology (height, weight and percentage fat mass, %FM) and physical fitness (aerobic capacity). Anthropometric measurements were taken according to the protocol described by Marfell-Jones, and %FM was estimated by bioimpedance. Aerobic capacity was assessed with the PACER shuttle-run test, and relative VO2max was calculated using the Léger equation. The assessment results were classified according to the normative values of the FITNESSGRAM® test battery reference tables. The statistical techniques used were: 1) calculation of frequencies; 2) Student's t-test for independent samples; and 3) two-way ANOVA followed by Bonferroni's HSD post-hoc test. RESULTS: 1) there are significant differences between the sexes in %FM and VO2max; 2) during adolescence, VO2max stabilizes in boys and declines in girls; 3) regardless of sex, %FM class and chronological age have a significant effect on aerobic capacity; and 4) in young adolescents with at-risk %FM, reducing %FM to healthy levels appears to result in improved aerobic capacity. CONCLUSION: The impact of %FM on aerobic capacity reinforces the importance of school physical education in promoting cardiovascular health.

  4. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  5. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
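
    As a concrete illustration of the DEA benchmarking mentioned above, the sketch below solves the textbook input-oriented, constant-returns-to-scale (CCR) envelopment linear program for each operator against its peers. The input and output data are invented, and the formulation is the standard academic one rather than any particular regulator's implementation.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(inputs, outputs, unit):
            """Input-oriented CRS (CCR) DEA efficiency of one unit:
            min theta  s.t.  X^T lam <= theta * x_unit,  Y^T lam >= y_unit,  lam >= 0."""
            n, m = inputs.shape              # n units, m inputs
            s = outputs.shape[1]             # s outputs
            c = np.zeros(n + 1)
            c[0] = 1.0                       # minimize theta (first decision variable)
            # Input rows: sum_j lam_j * x_ji - theta * x_unit,i <= 0
            A_in = np.hstack([-inputs[unit].reshape(m, 1), inputs.T])
            # Output rows: -sum_j lam_j * y_jr <= -y_unit,r (outputs at least matched)
            A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([np.zeros(m), -outputs[unit]]),
                          bounds=[(0, None)] * (n + 1))
            return res.x[0]

        # Hypothetical operators: inputs = [opex, grid length], outputs = [energy, customers].
        X = np.array([[100.0, 50.0], [120.0, 70.0], [80.0, 40.0], [150.0, 90.0]])
        Y = np.array([[500.0, 30.0], [520.0, 35.0], [450.0, 28.0], [600.0, 40.0]])

        for k in range(len(X)):
            print(f"operator {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")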

  6. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards... the ‘inside’ costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that easily can dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention...

  7. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.
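
    A minimal sketch of the energy-yield figure of merit described above: the yield over a period is the sum over time of direct irradiance times aperture area times the optical and cell efficiencies. The irradiance profile and efficiencies are invented; the actual tool couples site- and time-dependent source models with ASAP ray-tracing results.

        import numpy as np

        # Hypothetical clear-day direct normal irradiance profile (W/m^2), hourly values.
        hours = np.arange(24)
        dni = np.clip(900 * np.sin(np.pi * (hours - 6) / 12), 0, None)  # zero at night

        aperture_area = 2.0   # m^2 of concentrator aperture (assumed)
        eta_optics = 0.82     # optical efficiency from ray tracing (assumed)
        eta_cell = 0.38       # multi-junction cell efficiency (assumed)
        dt_hours = 1.0        # time step

        # Energy yield over the period: sum of irradiance * area * efficiencies * dt.
        yield_wh = np.sum(dni * aperture_area * eta_optics * eta_cell * dt_hours)
        print(f"daily energy yield ~ {yield_wh / 1000:.1f} kWh")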

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  10. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  11. Articulated Entity Relationship (AER) Diagram for Complete Automation of Relational Database Normalization

    Directory of Open Access Journals (Sweden)

    P. S. Dhabe

    2010-05-01

    Full Text Available In this paper an Articulated Entity Relationship (AER) diagram is proposed, which is an extension of the Entity Relationship (ER) diagram to accommodate Functional Dependency (FD) information as its integral part for complete automation of normalization. In current relational databases (RDBMS), automation of normalization by a top-down approach is possible using an ER diagram as an input, provided the FD information is made available independently, in the meantime, through user interaction. Such automation we call partial and conditional automation. To avoid this user interaction, there is a strong need to accommodate FD information as an element of the ER diagram itself. Moreover, ER diagrams are not designed by taking into account the requirements of normalization. However, for better automation of normalization it must be an integral part of the conceptual design (ER diagram). The prime motivation behind this paper is to design a system that needs only the proposed AER diagram as a sole input and normalizes the database up to a given normal form in one go. This would allow a greater amount of automation than the current approach. Such automation we call total and unconditional automation, which is better and complete in the true sense. As the proposed AER diagram is designed by taking into account the normalization process, normalization up to Boyce-Codd Normal Form (BCNF) becomes an integral part of conceptual design. An additional advantage of the AER diagram is that any modification (addition, deletion or updating) of attributes made to the AER diagram will automatically be reflected in its FD information. Thus the description of the schema and the FD information is guaranteed to be consistent. This cannot be assured in the current approach using ER diagrams, as the schema and FD information are provided to the system at two different times, separately.
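
    The key point of the abstract, that carrying FD information with the schema is what enables fully automatic normalization, can be illustrated with the standard textbook attribute-closure and BCNF check. The sketch below is a generic illustration, not the authors' system; the relation and FD names are invented.

    ```python
    # Attribute closure and BCNF-violation check over FDs attached to a schema.

    def closure(attrs, fds):
        """Closure of `attrs` under FDs given as (lhs, rhs) pairs of frozensets."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    def bcnf_violations(relation, fds):
        """Return nontrivial FDs X -> Y whose left side X is not a superkey."""
        return [(lhs, rhs) for lhs, rhs in fds
                if not rhs <= lhs and closure(lhs, fds) != set(relation)]

    # Example: R(A, B, C) with A -> B and B -> C; B -> C violates BCNF.
    R = {"A", "B", "C"}
    FDS = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
    print(bcnf_violations(R, FDS))
    ```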

  12. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes....

  13. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  14. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  15. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...

  16. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Handleiding benchmark VO (research report), 25 November 2008, by J.L.T. Blank, IPSE Studies. A manual for reading the ...

  17. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. It is conceptually complicated to benchmark the vocational schools. The schools offer a wide range of different programmes. This makes it difficult...

  18. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
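
    A small worked example of a code-verification benchmark built from a manufactured solution may help fix ideas; the sketch below is a generic illustration in that spirit (a 1-D Poisson problem with an invented exact solution), not one of the benchmarks proposed by the authors.

    ```python
    # Manufactured solution u(x) = sin(pi x) for -u'' = f on [0, 1], u(0) = u(1) = 0,
    # so the manufactured source is f(x) = pi^2 sin(pi x). Halving the mesh size
    # should reduce the maximum error by about 4x for this second-order scheme.
    import math

    def run_case(n):
        h = 1.0 / n
        x = [i * h for i in range(n + 1)]
        f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
        u = [0.0] * (n + 1)
        for _ in range(20 * n * n):   # crude Jacobi iteration, enough to converge here
            u = [0.0] + [0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
                         for i in range(1, n)] + [0.0]
        return max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n + 1))

    e_coarse, e_fine = run_case(16), run_case(32)
    print(e_coarse, e_fine, math.log2(e_coarse / e_fine))   # observed order ~2
    ```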

  19. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  20. Uncertainties in modelling Mt. Pinatubo eruption with 2-D AER model and CCM SOCOL

    Science.gov (United States)

    Kenzelmann, P.; Weisenstein, D.; Peter, T.; Luo, B. P.; Rozanov, E.; Fueglistaler, S.; Thomason, L. W.

    2009-04-01

    Large volcanic eruptions may introduce a strong forcing on climate. They challenge the skills of climate models. In addition to the short-term attenuation of solar light by ash, the formation of stratospheric sulphate aerosols, due to volcanic sulphur dioxide injection into the lower stratosphere, may lead to a significant enhancement of the global albedo. The sulphate aerosols have a residence time of about 2 years. As a consequence of the enhanced sulphate aerosol concentration, both the stratospheric chemistry and dynamics are strongly affected. Due to absorption of longwave and near-infrared radiation, the temperature in the lower stratosphere increases. So far chemistry climate models overestimate this warming [Eyring et al. 2006]. We present an extensive validation of extinction measurements and model runs of the eruption of Mt. Pinatubo in 1991. Even though the Mt. Pinatubo eruption has been the best-quantified volcanic eruption of this magnitude, the measurements show considerable uncertainties. For instance, the total amount of sulphur emitted to the stratosphere ranges from 5 to 12 Mt of sulphur [e.g. Guo et al. 2004, McCormick, 1992]. The largest uncertainties are in the specification of the main aerosol cloud. SAGE II, for instance, could not measure the peak of the aerosol extinction for about 1.5 years, because optical termination was reached. The gap-filling of the SAGE II [Thomason and Peter, 2006] using lidar measurements underestimates the total extinctions in the tropics for the first half year after the eruption by 30% compared to AVHRR [Russell et al. 1992]. The same applies to the optical dataset described by Stenchikov et al. [1998]. We compare these extinction data derived from measurements with extinctions derived from AER 2D aerosol model calculations [Weisenstein et al., 2007]. Full microphysical calculations with injections of 14, 17, 20 and 26 Mt SO2 in the lower stratosphere were performed. The optical aerosol properties derived from SAGE II

  1. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
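
    The first-tier screening logic described above amounts to a simple comparison of measured media concentrations against benchmark values; the sketch below illustrates that logic with invented benchmark numbers (it is not taken from the report).

    ```python
    # Tier-1 screening: flag analytes whose measured concentration exceeds its
    # NOAEL-based benchmark; these become contaminants of potential concern (COPCs).

    BENCHMARKS_MG_PER_L = {"cadmium": 0.005, "zinc": 0.1, "lead": 0.015}   # hypothetical values

    def screen(measured_mg_per_l):
        """Return analytes that exceed their benchmark and warrant further study."""
        return [analyte for analyte, conc in measured_mg_per_l.items()
                if analyte in BENCHMARKS_MG_PER_L and conc > BENCHMARKS_MG_PER_L[analyte]]

    site_water = {"cadmium": 0.002, "zinc": 0.30, "lead": 0.010}
    print(screen(site_water))   # ['zinc'] would be carried into the baseline assessment
    ```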

  2. A polishing hybrid AER/UF membrane process for the treatment of a high DOC content surface water.

    Science.gov (United States)

    Humbert, H; Gallard, H; Croué, J-P

    2012-03-15

    The efficacy of a combined AER/UF (Anion Exchange Resin/Ultrafiltration) process for the polishing treatment of a high DOC (Dissolved Organic Carbon) content (>8 mgC/L) surface water was investigated at lab-scale using a strong base AER. Both resin dose and bead size had a significant impact on the kinetic removal of DOC for short contact times (i.e. treatment conditions were applied in combination with UF membrane filtration on water previously treated by coagulation-flocculation (i.e. 3 mgC/L). A more severe fouling was observed for each filtration run in the presence of AER. This fouling was shown to be mainly reversible and caused by the progressive attrition of the AER through the centrifugal pump, leading to the production of resin particles below 50 μm in diameter. More importantly, the presence of AER significantly lowered the irreversible fouling (loss of permeability recorded after backwash) and reduced the DOC content of the clarified water to 1.8 mgC/L (40% removal rate), a concentration that remained almost constant throughout the experiment. PMID:22200260

  3. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    Science.gov (United States)

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  4. Sistema bio - inspirado basado en AER aplicado a automoción

    OpenAIRE

    González Blanco, Manuel

    2012-01-01

    VULCANO (Ref: TEC2009-10639-C04-04) In this project, work has been carried out on the design and implementation of a bio-inspired, event-based system and its adaptation to commercial cameras, avoiding one of the main problems of this type of system. In addition, the system has been applied to an automotive environment through the use of a highly immersive simulator. In this way, the performance and suitability of AER-based systems for the measurement of the...

  5. AERE contracts with DoE on the treatment and disposal of intermediate level wastes

    International Nuclear Information System (INIS)

    This document reports work carried out in 1983/84 under 10 contracts between DoE and AERE on the treatment and disposal of intermediate level wastes. Individual summaries are provided for each contract report within the document, under the headings: comparative evaluation of α and βγ irradiated medium level waste forms; modelling and characterisation of intermediate level waste forms based on polymers; optimisation of processing parameters for polymer and bitumen modified cements; ceramic waste forms; radionuclide release during leaching; ion exchange processes; electrical processes for the treatment of medium active liquid wastes; fast reactor fuel element cladding; dissolver residues; flowsheeting/systems study. (U.K.)

  6. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  7. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on the key phases of the benchmarking process which lie in the search for suitable referencing partners. The partners are consequently selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in an international context. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  8. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  9. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  10. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  11. Dose assessment for CEGB users of the Kodak type 2 film used in the NRPB/AERE holder

    International Nuclear Information System (INIS)

    Some work, complementary to that of the National Radiological Protection Board (NRPB) and the Atomic Energy Research Establishment (AERE), has been done at Berkeley Nuclear Laboratories (BNL) on the response of the Kodak Type 2 film in the NRPB/AERE holder. Initial results indicate that the combination forms a satisfactory dosemeter. Comparison between the BNL and NRPB results shows differences which appear to be due to the fact that the angle of incidence was 90° for the former and 35° for the latter. Some conclusions are drawn on dosimetry but in general, for CEGB users, no substantial changes from existing procedures are required. (author)

  12. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  13. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  14. Benchmarking for plant maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))

    2010-05-15

    The product of the project, e-Famemain, is a new kind of tool for benchmarking, which is based on many years' research efforts within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically. It automatically carries out multivariate statistical analysis, along with many other statistical operations, when data is entered into the system. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. These causalities should be taken into account when utilising benchmarking or forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: data input section, positioning section, locating differences section, best practices and planning section and finally statistical tables. (orig.)

  15. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  16. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use to evaluate the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  17. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    42 Public Health, Section 422.258, ... and Plan Approval. § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  18. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  19. Hypersensitivity reactions to anticancer agents: Data mining of the public version of the FDA adverse event reporting system, AERS

    Directory of Open Access Journals (Sweden)

    Sakaeda Toshiyuki

    2011-10-01

    Full Text Available Abstract Background Previously, adverse event reports (AERs) submitted to the US Food and Drug Administration (FDA) database were reviewed to confirm platinum agent-associated hypersensitivity reactions. The present study was performed to confirm whether the database could suggest the hypersensitivity reactions caused by the anticancer agents paclitaxel, docetaxel, procarbazine, asparaginase, teniposide, and etoposide. Methods After a revision of arbitrary drug names and the deletion of duplicated submissions, AERs involving candidate agents were analyzed. The National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0 was applied to evaluate the susceptibility to hypersensitivity reactions, and standardized official pharmacovigilance tools were used for quantitative detection of signals, i.e., drug-associated adverse events, including the proportional reporting ratio, the reporting odds ratio, the information component given by a Bayesian confidence propagation neural network, and the empirical Bayes geometric mean. Results Based on 1,644,220 AERs from 2004 to 2009, signals were detected for paclitaxel-associated mild, severe, and lethal hypersensitivity reactions, and docetaxel-associated lethal reactions. However, the total number of adverse events occurring with procarbazine, asparaginase, teniposide, or etoposide was not large enough to detect signals. Conclusions The FDA's adverse event reporting system, AERS, and the data mining methods used herein are useful for confirming drug-associated adverse events, but the number of co-occurrences is an important factor in signal detection.
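
    Of the signal metrics named above, the two frequentist ones (PRR and ROR) can be computed directly from the usual 2x2 contingency table of reports; the Bayesian metrics (IC, EBGM) require shrinkage models and are omitted. The sketch below uses invented counts and only illustrates the definitions, not the study's pipeline.

    ```python
    # Proportional reporting ratio (PRR) and reporting odds ratio (ROR) from a
    # 2x2 table: a = drug & event, b = drug & other events, c = other drugs & event,
    # d = other drugs & other events.

    def prr_ror(a, b, c, d):
        prr = (a / (a + b)) / (c / (c + d))
        ror = (a * d) / (b * c)
        return prr, ror

    # Hypothetical counts for one drug-event pair within a large report database.
    print(prr_ror(120, 8000, 15000, 1_600_000))   # roughly (1.6, 1.6)
    ```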

  20. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  1. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  2. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  3. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  4. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance, i.e. how well the firm performs in the actual market environment given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine what performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  5. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  6. Sp6 and Sp8 transcription factors control AER formation and dorsal-ventral patterning in limb development.

    Directory of Open Access Journals (Sweden)

    Endika Haro

    2014-08-01

    Full Text Available The formation and maintenance of the apical ectodermal ridge (AER) is critical for the outgrowth and patterning of the vertebrate limb. The induction of the AER is a complex process that relies on integrated interactions among the Fgf, Wnt, and Bmp signaling pathways that operate within the ectoderm and between the ectoderm and the mesoderm of the early limb bud. The transcription factors Sp6 and Sp8 are expressed in the limb ectoderm and AER during limb development. Sp6 mutant mice display a mild syndactyly phenotype while Sp8 mutants exhibit severe limb truncations. Both mutants show defects in AER maturation and in dorsal-ventral patterning. To gain further insights into the role Sp6 and Sp8 play in limb development, we have produced mice lacking both Sp6 and Sp8 activity in the limb ectoderm. Remarkably, the elimination or significant reduction in Sp6;Sp8 gene dosage leads to tetra-amelia; initial budding occurs, but neither Fgf8 nor En1 is activated. Mutants bearing a single functional allele of Sp8 (Sp6-/-;Sp8+/-) exhibit a split-hand/foot malformation phenotype with double dorsal digit tips, probably due to an irregular and immature AER that is not maintained in the center of the bud and to the abnormal expansion of Wnt7a expression to the ventral ectoderm. Our data are compatible with Sp6 and Sp8 working together and in a dose-dependent manner as indispensable mediators of Wnt/β-catenin and Bmp signaling in the limb ectoderm. We suggest that the function of these factors links proximal-distal and dorsal-ventral patterning.

  7. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is discussed, after which...

  8. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter show considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  9. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
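
    As a hedged aside related to Part I, the uncollided (narrow-beam) component of the transmission through a single plate follows the Beer-Lambert law; the scattered contribution that the benchmark focuses on requires transport simulation (e.g. Monte Carlo) and is not reproduced here. The coefficient below is an assumed, order-of-magnitude value.

    ```python
    # Uncollided transmission through a plate via the Beer-Lambert law.
    import math

    def uncollided_transmission(mu_rho_cm2_per_g, density_g_cm3, thickness_cm):
        """Fraction of primary photons crossing the plate without interacting."""
        return math.exp(-mu_rho_cm2_per_g * density_g_cm3 * thickness_cm)

    # Illustrative numbers: a mass attenuation coefficient of ~0.06 cm^2/g is about
    # right for iron near 1 MeV (exact values should be taken from tabulations).
    print(uncollided_transmission(0.06, 7.87, 2.0))   # ~0.39 for a 2 cm iron plate
    ```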

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    : SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  11. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP2Bench, a publicly available, language-specific SPARQL performance benchmark. SP2Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP2Bench to existing engines and discuss ...

  12. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....

  13. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  14. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  15. Amide-Exchange-Rate-Edited NMR (AERE-NMR) Experiment: A Novel Method for Resolving Overlapping Resonances

    Institute of Scientific and Technical Information of China (English)

    LIU Xue-Hui; LIN Dong-Hai

    2007-01-01

    This paper describes an amide-exchange-rate-edited (AERE) NMR method that can effectively alleviate the problem of resonance overlap for proteins and peptides. This method exploits the diversity of amide proton exchange rates and consists of two complementary experiments: (1) SEA (solvent exposed amide)-type NMR experiments to map exchangeable surface residues whose amides are not involved in hydrogen bonding, and (2) presat-type NMR experiments to map solvent inaccessibly buried residues or nonexchangeable residues located in hydrogen-bonded secondary structures with properly controlled saturation transfer via amide proton exchanges with the solvent. This method separates overlapping resonances in a spectrum into two complementary spectra. The AERE-NMR method was demonstrated with a sample of 15N/13C/2H(70%) labeled ribosome-inactivating protein trichosanthin of 247 residues.

  16. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  17. Iniciación educativa a la resistencia aeróbica. (III Canto en carrera: funciones y asignaciones

    Directory of Open Access Journals (Sweden)

    Antonio D. Galera

    2014-03-01

    Full Text Available The author presents a set of systematic didactic factors for the educational initiation to aerobic endurance, focusing on methods applicable to the school setting from a sustainable perspective. The underlying doctrine was developed by the author as a result of his experience with a group of boys and girls who took part in a school programme of cooperative physical education, one of whose core contents was the development of aerobic endurance. The criteria for didactic application are considered from the school context, primary or secondary, although they can be adapted to other settings, such as training at any age, leisure activities, etc. The author concludes his doctrinal exposition and presents repertoires of school songs applicable to his proposal for a disciplinary interrelation between physical education and music education for the educational initiation to aerobic endurance from a sustainable perspective.

  18. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  19. A Bio-Inspired AER Temporal Tri-Color Differentiator Pixel Array.

    Science.gov (United States)

    Farian, Łukasz; Leñero-Bardallo, Juan Antonio; Häfliger, Philipp

    2015-10-01

    This article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors. The in-pixel color processing is inspired by the retinal color opponency found in mammalian retinas. Color transitions in a pixel are represented by voltage spikes, which are akin to a neuron's action potential. These spikes are conveyed off-chip by the Address Event Representation (AER) protocol. To achieve sensitivity to three different color spectra within the visual spectrum, each pixel has three stacked photodiodes at different depths in the silicon substrate. The sensor has been fabricated in the standard TSMC 90 nm CMOS technology. A post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene. Experimental results are provided. Color transitions can be detected at high speed (up to 2.7 kHz). The sensor has a dynamic range of 58 dB and a power consumption of 22.5 mW. This type of sensor can be of use in industrial, robotics, automotive and other applications where essential information is contained in transient emission shifts within the visual spectrum. PMID:26540694

  20. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  1. Specifications

    International Nuclear Information System (INIS)

    As part of the Danish RERTR Program, three fuel elements with LEU U3O8-Al fuel and three fuel elements with LEU U3Si2-Al fuel were manufactured by NUKEM for irradiation testing in the DR-3 reactor at the Risoe National Laboratory in Denmark. The specifications for the elements with U3O8-Al fuel are presented here as an illustration only. Specifications for the elements with U3Si2-Al fuel were very similar. In this example, materials, material numbers, documents numbers, and drawing numbers specific to a single fabricator have been deleted. (author)

  2. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
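
    One common reading of the complete-information benchmark credited above to O'Quigley et al. (2002) is sketched below: each virtual patient carries a latent tolerance that fixes their toxicity outcome at every dose simultaneously, and the benchmark selects the dose whose empirical toxicity rate is closest to the target. The dose-toxicity curve and trial size are invented, and this is an illustration of the idea rather than the paper's implementation.

    ```python
    # Complete-information benchmark for dose finding: an upper reference point
    # against which model-based designs can be compared.
    import random

    def benchmark_accuracy(true_tox, target=0.25, n_patients=20, n_sims=2000, seed=1):
        rng = random.Random(seed)
        true_mtd = min(range(len(true_tox)), key=lambda d: abs(true_tox[d] - target))
        hits = 0
        for _ in range(n_sims):
            tolerances = [rng.random() for _ in range(n_patients)]
            # With complete information, the same cohort yields a toxicity rate at every dose.
            rates = [sum(u <= p for u in tolerances) / n_patients for p in true_tox]
            selected = min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))
            hits += selected == true_mtd
        return hits / n_sims

    print(benchmark_accuracy([0.05, 0.12, 0.25, 0.40, 0.55]))
    ```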

  3. Results of the fifth three-dimensional dynamic atomic energy research benchmark problem calculation

    International Nuclear Information System (INIS)

    The paper gives a brief survey of the fifth three-dimensional dynamic atomic energy research benchmark calculation results obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh AER Symposium. Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle and hot shutdown conditions with one stuck-out control rod group. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The Kasseta library was used for the generation of reactor core neutronic parameters. The standard WWER-440/213 input deck of the ATHLET code was adopted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains a brief characterization of the NPP input deck and reactor core model. The second part shows the time dependencies of important global, fuel assembly and loop parameters. (Author)

  4. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  5. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
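
    The construction of the simulated part of the benchmark dataset described above can be sketched in a few lines: break dates drawn from a Poisson process (exponential waiting times) and break sizes drawn from a normal distribution are superimposed on a homogeneous series. The Python sketch below illustrates the idea; all rates, magnitudes and series lengths are illustrative assumptions, not the values used by the HOME benchmark.

        # Minimal sketch of inserting break-type inhomogeneities into a synthetic monthly
        # series: break dates follow a Poisson process (exponential waiting times) and break
        # sizes are normally distributed.  All rates and magnitudes are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n_months = 100 * 12                                        # 100-year monthly series
        season = 10 + 8 * np.sin(2 * np.pi * np.arange(n_months) / 12)
        homogeneous = season + rng.normal(0.0, 0.8, n_months)      # seasonal cycle + noise

        # break positions: waiting times ~ Exponential(mean 20 years), sizes ~ N(0, 0.8)
        breaks, t = [], rng.exponential(20 * 12)
        while t < n_months:
            breaks.append(int(t))
            t += rng.exponential(20 * 12)

        inhomogeneous = homogeneous.copy()
        for b in breaks:
            inhomogeneous[b:] += rng.normal(0.0, 0.8)              # step change from the break onward

        print(f"{len(breaks)} breaks inserted at months {breaks}")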

  6. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  7. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  8. CAVIAR: A 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking

    OpenAIRE

    Linares-Barranco, Alejandro; Paz-Vicente, R.; Camuñas-Mesa, L.; Delbruck, Tobi; Jimenez-Moreno, Gabriel; Civit-Balcells, Antón; Serrano-Gotarredona, Teresa; Acosta, Antonio José; Linares-Barranco, Bernabé

    2009-01-01

    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synap...

  9. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  10. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  11. Effects of tempering process on properties and microstructure of AerMet340 steel

    Institute of Scientific and Technical Information of China (English)

    黄顺喆; 厉勇; 王春旭; 刘宪民; 田志凌

    2015-01-01

    The effects of tempering temperature and time on the mechanical properties and microstructure of the secondary-hardening ultra-high-strength steel AerMet340 were investigated by means of mechanical testing, SEM, TEM and XRD. The results show that the tempering curve of AerMet340 steel exhibits a pronounced secondary-hardening phenomenon, and the tempering process that yields the best overall properties is 482 ℃ for 5 h followed by air cooling. The peak values of tensile strength and yield strength are 2460 MPa and 2061 MPa, reached at tempering temperatures of 450 ℃ and 468 ℃, respectively. When tempered at lower temperatures, AerMet340 steel consists mainly of tempered martensite and ε-carbides. When tempered above 468 ℃, fine acicular M2C carbides precipitate and are dispersed throughout the matrix, which is one of the main reasons for the high strength of the steel. Moreover, as the tempering temperature rises, the contents of Fe, Cr and Mo, the main components of the M2C carbides, increase significantly, which changes the lattice constant of M2C and causes the precipitates to gradually lose their coherent relationship with the matrix.

  12. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  13. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  14. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  15. Fuel characteristics pertinent to the design of aircraft fuel systems, Supplement I : additional information on MIL-F-7914(AER) grade JP-5 fuel and several fuel oils

    Science.gov (United States)

    Barnett, Henry C; Hibbard, Robert R

    1953-01-01

    Since the release of the first NACA publication on fuel characteristics pertinent to the design of aircraft fuel systems (NACA-RM-E53A21), additional information has become available on MIL-F-7914(AER) grade JP-5 fuel and several of the current grades of fuel oils. In order to make this information available to fuel-system designers as quickly as possible, the present report has been prepared as a supplement to NACA-RM-E53A21. Although JP-5 fuel is of greater interest in current fuel-system problems than the fuel oils, the available data are not as extensive. It is believed, however, that the limited data on JP-5 are sufficient to indicate the variations in stocks that the designer must consider under a given fuel specification. The methods used in the preparation and extrapolation of data presented in the tables and figures of this supplement are the same as those used in NACA-RM-E53A21.

  16. Benchmarking of collimation tracking using RHIC beam loss data.

    Energy Technology Data Exchange (ETDEWEB)

    Robert-Demolaize,G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  17. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    Science.gov (United States)

    Alam, Sabina; Zaman, M. A.; Islam, S. M. A.; Ahsan, M. H.

    1993-10-01

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work.

  18. Efeitos do treinamento aeróbio sobre o perfil lipídico de ratos com hipertireoidismo

    OpenAIRE

    Renata Valle Pedroso; Alexandre Konig Garcia Prado; Luiza Hermínia Gallo; Marcelo Costa Junior; Natália Oliveira Betolini; Rodrigo Augusto Dalia; Maria Alice Rostom de Mello; Eliete Luciano

    2012-01-01

    There are few studies analyzing the important relationship between physical exercise, both acute and chronic, and the metabolic alterations resulting from hyperthyroidism. The objective of the present study was to analyze the effect of four weeks of aerobic training on the lipid profile of rats with experimental hyperthyroidism. Forty-five Wistar rats were used, randomly divided into four groups: Sedentary Control (CS) - administered saline during the experimental period, not practicing...

  19. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    Energy Technology Data Exchange (ETDEWEB)

    Alam, S. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Zaman, M.A. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Islam, S.M.A. (Physics Dept., Jahangirnagar Univ., Savar, Dhaka (Bangladesh)); Ahsan, M.H. (Inst. of Nuclear Science and Technology (INST), AERE, Savar, Dhaka (Bangladesh))

    1993-10-01

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work. (orig.)

  20. Extraction of pure thermal neutron beam for the proposed PGNAA facility at the TRIGA research reactor of AERE, Savar, Bangladesh

    International Nuclear Information System (INIS)

    A study on collimators and filters for the design of a spectrometer for prompt gamma neutron activation analysis (PGNAA) at one of the radial beamports of the TRIGA Mark II reactor at AERE, Savar has been carried out. On the basis of this study a collimator and a filter have been designed for the proposed PGNAA facility. Calculations have been done for measuring neutron flux at various positions of the core of the reactor using the computer code TRIGAP. Gamma dose in the core of the reactor has also been measured experimentally using TLD technique in the present work. (orig.)

  1. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption

  2. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  3. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
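
    The core benchmarking operation described above, placing one building's energy use within a distribution of comparable buildings, can be illustrated with a few lines of Python. The peer dataset, units and example building below are made-up numbers, not Cal-Arch or CBECS data.

        # Sketch of ranking a building's annual energy use intensity (EUI) against a peer
        # group.  The peer data and the example building are made-up numbers.
        def eui_percentile(building_eui, peer_euis):
            """Percentage of peer buildings whose EUI is below the given building's EUI."""
            below = sum(1 for e in peer_euis if e < building_eui)
            return 100.0 * below / len(peer_euis)

        peers = [45, 52, 60, 63, 70, 74, 81, 90, 105, 120]   # kBtu/ft2-yr, illustrative
        print(f"A building at 68 kBtu/ft2-yr sits at the {eui_percentile(68, peers):.0f}th percentile of its peers")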

  4. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  5. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business-priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally have a lack of resources and time to invest in energy efficiency. In the EU supported pilot project BESS, 60 SMEs from 11 European countries of the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 up to 10 %. The package includes a user-friendly and web based E-learning scheme for implementing energy management as well as a benchmarking module for company specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company such as checklists, sector specific measure lists, templates for auditing and energy conservation plans. An important feature and also a key trigger for companies is the possibility for SMEs to benchmark anonymously their energy situation against others of the same sector. SMEs can participate in a unique web based benchmarking system to interactively benchmark in a way which fully guarantees confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting and thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow up project to expand the number of participating SMEs of various sectors is currently being developed.

  6. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  7. Building a knowledge base of severe adverse drug events based on AERS reporting data using semantic web technologies.

    Science.gov (United States)

    Jiang, Guoqian; Wang, Liwei; Liu, Hongfang; Solbrig, Harold R; Chute, Christopher G

    2013-01-01

    A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of the study is to develop and evaluate a semantic-web-based approach for building a knowledge base of severe ADEs based on the FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into CTCAE grades represented in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs are in Grade 5, 775 pairs in Grade 4 and 2,196 pairs in Grade 3.
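
    A simplified sketch of the grading step described above is shown below: drug-ADE pairs are assigned a CTCAE-like severity grade from their reported outcome codes. The outcome-code abbreviations follow the public AERS documentation, but the specific grade mapping and the sample records are illustrative assumptions, not the authors' exact rules or data.

        # Simplified sketch of grading drug-ADE pairs from AERS outcome codes.  The
        # outcome-code abbreviations follow the public AERS documentation; the grade
        # mapping is an illustrative assumption, not the authors' exact rules.
        OUTCOME_TO_GRADE = {
            "DE": 5,   # death
            "LT": 4,   # life-threatening
            "HO": 3,   # hospitalization (initial or prolonged)
            "DS": 3,   # disability
        }

        def grade_pairs(records):
            """records: iterable of (drug, adverse_event, outcome_code).
            Returns {(drug, adverse_event): worst_grade}, keeping only grades 3-5."""
            graded = {}
            for drug, ade, outcome in records:
                grade = OUTCOME_TO_GRADE.get(outcome)
                if grade is None:
                    continue
                key = (drug, ade)
                graded[key] = max(grade, graded.get(key, 0))
            return graded

        sample = [("drugA", "atrial fibrillation", "HO"),
                  ("drugA", "cardiac arrest", "DE"),
                  ("drugB", "palpitations", "OT")]      # hypothetical reports
        print(grade_pairs(sample))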

  8. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  9. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of completely reconstructing the experiment and by large amounts of input data (dependent and independent) of very different reliability. They must also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and the isotopic composition of individual materials. Calculations of critical facility experiments are produced for benchmark models generated by specific reconstruction codes, each with its own features when adjusting model parameters, and using a nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  10. AONBench: A Methodology for Benchmarking XML Based Service Oriented Applications

    Directory of Open Access Journals (Sweden)

    Abdul Waheed

    2007-09-01

    Full Text Available Service Oriented Architectures (SOA) and applications increasingly rely on network infrastructure instead of back-end servers. Cisco Systems' Application Oriented Networking (AON) initiative exemplifies this trend. Benchmarking such infrastructure and its services is expected to play an important role in the networking industry. We present the AONBench specification and methodology to benchmark networked XML application servers and appliances. AONBench is not a benchmarking tool. It is a specification and methodology for performance measurement, which leverages existing XML microbenchmarks and uses HTTP for end-to-end communication. We implement the AONBench specifications for end-to-end performance measurements through the public-domain HTTP load generation tool ApacheBench and the Apache web server. We present three case studies of using AONBench for architecting real application oriented networking products.
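
    The end-to-end measurement style that AONBench prescribes, repeatedly posting an XML payload over HTTP and recording latency, can be sketched as below. The endpoint URL and payload are hypothetical placeholders; an actual AONBench run would instead drive the workload with ApacheBench against the system under test.

        # Minimal sketch of an end-to-end measurement in the AONBench spirit: POST an XML
        # payload over HTTP repeatedly and report latency statistics.  The endpoint URL and
        # payload are hypothetical placeholders.
        import time
        import urllib.request

        def time_xml_posts(url, payload_xml, n=100):
            data = payload_xml.encode("utf-8")
            latencies = []
            for _ in range(n):
                req = urllib.request.Request(url, data=data,
                                             headers={"Content-Type": "text/xml"})
                start = time.perf_counter()
                with urllib.request.urlopen(req) as resp:
                    resp.read()
                latencies.append(time.perf_counter() - start)
            latencies.sort()
            return {"mean_ms": 1000 * sum(latencies) / n,
                    "p95_ms": 1000 * latencies[int(0.95 * n) - 1]}

        # Example call against a hypothetical local service:
        # print(time_xml_posts("http://localhost:8080/service", "<order><id>1</id></order>"))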

  11. Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huda, M.Q. E-mail: quamrul@dhaka.net; Rahman, M.; Sarker, M.M.; Bhuiyan, S.I

    2004-07-01

    This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and ENDF/B-V and S(α,β) scattering functions from the ENDF/B-VI library were used. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics was established by benchmarking the TRIGA experiments. The effective multiplication factor, power distribution and peaking factors, neutron flux distribution, and reactivity experiments comprising control rod worths, critical rod height, excess reactivity and shutdown margin were used in the validation process. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the simulation of TRIGA reactor is treated adequately.

  12. Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

    International Nuclear Information System (INIS)

    This study deals with the neutronic analysis of the current core configuration of a 3-MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and ENDF/B-V and S(α,β) scattering functions from the ENDF/B-VI library were used. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics was established by benchmarking the TRIGA experiments. The effective multiplication factor, power distribution and peaking factors, neutron flux distribution, and reactivity experiments comprising control rod worths, critical rod height, excess reactivity and shutdown margin were used in the validation process. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the simulation of TRIGA reactor is treated adequately

  13. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  14. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  15. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  16. 75 FR 27332 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources, LLC; Eagle Creek Land...

    Science.gov (United States)

    2010-05-14

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC; Eagle Creek Water Resources... Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC.... For the transferee: Mr. Paul Ho, Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC,...

  17. 77 FR 13592 - AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources, LLC, Eagle Creek Land...

    Science.gov (United States)

    2012-03-07

    ... Energy Regulatory Commission AER NY-Gen, LLC; Eagle Creek Hydro Power, LLC, Eagle Creek Water Resources... Power, LLC, Eagle Creek Water Resources, LLC, and Eagle Creek Land Resources, LLC (transferees) filed an...) 805-1469. Transferees: Mr. Bernard H. Cherry, Eagle Creek Hydro Power, LLC, Eagle Creek...

  18. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  19. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  20. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  1. ENDF/B-V, LIB-V, and the CSEWG benchmarks

    International Nuclear Information System (INIS)

    A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements

  2. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
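
    The kind of comparison such a benchmark platform automates can be illustrated with a minimal example: two perceptual hashes are compared by bit error rate and declared a match when the rate falls below a threshold. The hash values and the threshold below are illustrative, not outputs of PHABS or of any specific algorithm.

        # Two perceptual hashes (hex strings of equal length) compared by bit error rate;
        # a match is declared below an illustrative threshold.
        def bit_error_rate(h1, h2):
            diff = int(h1, 16) ^ int(h2, 16)
            return bin(diff).count("1") / (4 * len(h1))

        original  = "f2a49c0d11e7b388"                 # hash of the original content (illustrative)
        distorted = "f2a49c0f11e7b288"                 # hash after, e.g., recompression (illustrative)
        ber = bit_error_rate(original, distorted)
        print(f"bit error rate = {ber:.3f}, match = {ber < 0.15}")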

  3. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  4. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross-section generation, and the conversion of the coding from Mortran to Fortran programming. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5 based on the excellent agreements among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment to measure the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV, linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam to hit the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  5. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  6. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  7. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report combines two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  8. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  9. CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing- learning-actuating system for high-speed visual object recognition and tracking.

    Science.gov (United States)

    Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé

    2009-09-01

    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies. PMID:19635693

  10. Niveles de intensidad de la música durante un torneo de resistencia aeróbica en Costa Rica Music intensity levels during an aerobics endurance tournament in Costa Rica

    OpenAIRE

    Yamileth Chacón Araya; José Moncada Jiménez

    2008-01-01

    The purpose of this article is to describe the noise levels generated during an aerobic endurance competition and to analyze the possible health implications of noise pollution. Aerobic dance is a mode of exercise that has spread throughout the world, making it possible to practice a physical activity that combines music and movement. By including the musical element in aerobic dance classes, the people who practice this...

  11. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    Energy Technology Data Exchange (ETDEWEB)

    Uddin, M.N. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh); Sarker, M.M. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh); Khan, M.J.H. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh)], E-mail: jahirulkhan@yahoo.com; Islam, S.M.A. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh)

    2009-10-15

    The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, in support of the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated from the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 serve as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of these lattices were calculated with the lattice transport code WIMSD-5B using the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. In most cases, the integral parameters show good agreement with the experiment and the MCNP results. In addition, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE and were found to be nearly identical, with insignificant differences. This analysis therefore supports the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking of the integral parameters of the TRX and BAPL lattices, and provides a basis for further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  12. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing...

  13. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  14. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
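
    The equilibrium-throughput idea lends itself to a simple search procedure. The sketch below only illustrates a hybrid step/binary search of the kind the abstract describes, not the authors' implementation; the function measure_forwarding_rate is a hypothetical stand-in for a traffic-generator measurement, and the load units and tolerances are arbitrary.

```python
def find_equilibrium_throughput(measure_forwarding_rate, start=10.0, step=100.0, tol=1.0):
    """Locate the offered load at which the forwarding rate still equals the offered load.

    measure_forwarding_rate(offered) returns the measured forwarding rate (same units
    as the offered load). A coarse stepping phase finds the first overload point, and a
    binary search then refines the boundary. Illustrative sketch only; it assumes the
    gateway eventually becomes overloaded as the offered load grows.
    """
    low = offered = start
    # Step phase: raise the offered load until the gateway no longer keeps up.
    while measure_forwarding_rate(offered) >= offered:
        low = offered
        offered += step
    high = offered

    # Binary-search phase: narrow [low, high] around the equilibrium point.
    while high - low > tol:
        mid = (low + high) / 2.0
        if measure_forwarding_rate(mid) >= mid:
            low = mid   # still forwarding everything that is offered
        else:
            high = mid  # overloaded at this load
    return (low + high) / 2.0
```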

  15. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project. PMID:24825693

  16. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
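
    To make the idea concrete, the sketch below shows one way a multiunit operator might turn utility data into a simple benchmark metric: energy-use intensity per store, with outliers flagged against the portfolio median. The field names and the 1.25x threshold are assumptions for illustration, not the report's prescribed method.

```python
from statistics import median

def benchmark_stores(stores, threshold=1.25):
    """Compute energy-use intensity (kWh per square foot) per store and flag outliers.

    stores: list of dicts with 'name', 'annual_kwh', and 'floor_area_sqft' keys
    (hypothetical field names). A store is flagged when its intensity exceeds
    threshold times the portfolio median. Illustrative sketch only.
    """
    metrics = {s["name"]: s["annual_kwh"] / s["floor_area_sqft"] for s in stores}
    portfolio_median = median(metrics.values())
    outliers = [name for name, eui in metrics.items() if eui > threshold * portfolio_median]
    return metrics, outliers

# Example with made-up numbers:
stores = [
    {"name": "Store A", "annual_kwh": 450_000, "floor_area_sqft": 3_000},
    {"name": "Store B", "annual_kwh": 900_000, "floor_area_sqft": 3_200},
    {"name": "Store C", "annual_kwh": 430_000, "floor_area_sqft": 2_900},
]
print(benchmark_stores(stores))
```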

  17. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  18. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data with well-known quality is needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  19. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  20. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  1. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp, "Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance", the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  2. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  4. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  5. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR'06 are highlighted, and the future of the two projects is outlined.

  6. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  7. Etileno e peróxido de hidrogênio na formação de aerênquima em milho tolerante a alagamento intermitente

    Directory of Open Access Journals (Sweden)

    Marinês Ferreira Pires

    2015-09-01

    Full Text Available Abstract: The objective of this work was to evaluate the role of ethylene and hydrogen peroxide (H2O2) in aerenchyma formation in genetic selection cycles of the maize cultivar BRS 4154 under flooding. Plants from cycles C1 and C18 were subjected to flooding for 7 days, with roots collected at 0 (control, no flooding), 1, and 7 days. The following were analyzed: gene expression of the enzymes ACC synthase (ACS), ACC oxidase (ACO), superoxide dismutase (SOD), and ascorbate peroxidase (APX); ethylene production and H2O2 content; ACO enzyme activity; and the proportion of aerenchyma in the cortex. No expression of ACS or ACO was detected. ACO activity and ethylene production varied. SOD expression was higher in C1 plants and APX expression in C18 plants, decreasing by day 7. H2O2 content did not differ between treatments. The proportion of aerenchyma increased over time, was greater in C18 plants, and was related to the rate of aerenchyma formation. Flooding time and the tolerance level of the selection cycle influence ethylene production. APX expression indicates higher H2O2 production at the onset of flooding.

  8. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance prediction...
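
    For orientation, a minimal power-iteration PageRank kernel of the kind such a pipeline builds on might look like the sketch below. This is not the benchmark's reference implementation; the damping factor and convergence tolerance are conventional assumptions, and a real benchmark run would use sparse or GraphBLAS-style data structures rather than a dense matrix.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-8, max_iter=100):
    """Power-iteration PageRank on a dense adjacency matrix (illustrative sketch).

    adj[i, j] = 1 if page i links to page j; dangling pages are treated as linking
    uniformly to all pages. Returns the stationary rank vector (sums to 1).
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-stochastic transition matrix, transposed so that columns sum to 1.
    trans = np.where(out_degree[:, None] > 0,
                     adj / np.maximum(out_degree[:, None], 1),
                     1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (trans @ rank)
        converged = np.abs(new_rank - rank).sum() < tol
        rank = new_rank
        if converged:
            break
    return rank
```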

  9. Microstructure and Mechanical Properties of AerMet 100 Ultra-high Strength Steel Joints by Laser Welding

    Institute of Scientific and Technical Information of China (English)

    LIU Fencheng; YU Xiaobin; HUANG Chunping; HE Lihua; CHEN Yuhua; BU Wende

    2015-01-01

    AerMet100 ultra-high strength steel plates with a thickness of 2 mm were welded using a CO2 laser welding system. The influences of the welding process parameters on the morphology and microstructure of the welded joints were investigated, and the mechanical properties of the joints were analyzed. The experimental results showed that the fusion zone of the welded joint mainly consisted of columnar grains, and a fine dendritic substructure grew epitaxially from the matrix. With the other conditions unchanged, a finer weld microstructure accompanied an increase in scanning speed. The solidification microstructure gradually transformed from cellular into dendritic crystals, and the secondary dendrite arm spacing increased from the fusion line to the center of the fusion zone. In the fusion zone of the weld, the rapid cooling caused the formation of martensite, which made the microhardness of the fusion zone higher than that of the matrix and the heat-affected zone. The tensile strength of the welded joints was measured as 1700 MPa, about 87% of the matrix; joints without defects reached 1832 MPa, about 94% of the matrix.

  10. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  11. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  12. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM, and the Lund-Potsdam-Jena (LPJ and Land Processes and eXchanges (LPX dynamic global vegetation models (DGVMs. SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2, but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
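
    As a rough illustration of the scoring approach (not the paper's exact metrics), the sketch below computes a normalized mean-error score for a simulated field against observations and compares it with the score of the observation-mean "model" and of bootstrap-resampled "random" models.

```python
import numpy as np

def nme(sim, obs):
    """Normalized mean error: mean |sim - obs| over mean |obs - mean(obs)|.

    0 is a perfect score; 1 corresponds to a model that always predicts the observed mean.
    """
    return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

def benchmark_scores(sim, obs, n_bootstrap=1000, seed=0):
    """Return the model score alongside mean-model and bootstrap 'random' baselines (sketch)."""
    rng = np.random.default_rng(seed)
    random_scores = [nme(rng.choice(obs, size=obs.size, replace=True), obs)
                     for _ in range(n_bootstrap)]
    return {
        "model": float(nme(sim, obs)),
        "mean_model": float(nme(np.full(obs.shape, obs.mean()), obs)),  # equals 1 by construction
        "random_model": float(np.mean(random_scores)),
    }
```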

  13. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  14. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. We also received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures, and (2) a welcomed opportunity to provide feedback on working with NASA.

  15. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  16. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  17. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  18. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    Directory of Open Access Journals (Sweden)

    Thomas Höhne

    2009-01-01

    Full Text Available Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments provided for CFD validation include single-loop cooling down or heating up induced by disturbing the heat transfer in the steam generator through the steam valves, at low reactor power and with all main coolant pumps in operation. CFD calculations have been performed using a numerical grid model of 4.7 million tetrahedral elements. The Best Practice Guidelines for using CFD in nuclear reactor safety applications were followed. Different advanced turbulence models were utilized in the numerical simulation. The results show a clear sector formation of the affected loop at the downcomer, lower plenum and core inlet, which corresponds to the measured values. The maximum local values of the relative temperature rise in the calculation are in the same range as in the experiment. Based on this result, it is now possible to improve the mixing models that are usually used in system codes.

  19. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  20. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  1. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  2. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  3. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  4. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
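
    The fixed-interval, scalable-task idea can be illustrated with a short, entirely hypothetical sketch: the workload here (refining a midpoint-rule integration) and the time budget are placeholders, not the patented specification; the rating is simply the finest resolution completed before the interval expires.

```python
import time

def fixed_time_benchmark(interval_seconds=60.0):
    """Work through an ever-finer task set for a fixed interval (illustrative sketch).

    Each task integrates x**2 over [0, 1] by the midpoint rule with twice as many
    intervals as the previous task; the benchmark rating is the finest resolution
    completed within the time budget (a faster machine reaches a higher resolution).
    """
    deadline = time.monotonic() + interval_seconds
    n, completed, estimate = 1, 0, 0.0
    while time.monotonic() < deadline:
        h = 1.0 / n
        estimate = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h  # approaches 1/3
        completed = n
        n *= 2  # the next task doubles the resolution
    return {"resolution": completed, "integral_estimate": estimate}

print(fixed_time_benchmark(5.0))
```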

  5. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article examines the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is addressed, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  6. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  7. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  8. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  9. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  10. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  11. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  12. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  13. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    42 CFR Public Health, Medical Assistance Programs, Services: General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  15. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergy is developed. The sources of synergy in benchmarking competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  16. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  17. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practitioners...

  18. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework, and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery, and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review of benchmarking definitions, barriers to and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprise industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework, and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and the Six Sigma DMAIC theory. It provides a step-by-step method to simplify implementation and to optimize benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gives companies a starting point for benchmarking and guides them towards the goals set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to help ensure its success.

  19. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  20. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing, which would make it possible to quantify and compare the performance of

  1. Benchmarking implementations of lazy functional languages

    NARCIS (Netherlands)

    Hartel, P.H.; Langendoen, K.G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of t

  2. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  3. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    Science.gov (United States)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome-shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions, due to the nonlinearity of the strain paths involved in this benchmark. The benchmark tool is designed to enable a two-stage draw/reverse draw continuous forming process. Three typical sheet materials, AA5182-O aluminum and DP600 and TRIP780 steels, are selected for this benchmark study.

  4. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    Benchmarking has become a fundamental part of modern health care systems, but unfortunately, no benchmarking framework is unanimously accepted for assessing both quality and performance. The aim of this paper is to present a benchmarking model that is able to take different stakeholder perspectives into account. By presenting performance as a function of a patient perspective, an operations management perspective, and an employee perspective, a more holistic approach to benchmarking is proposed. By collecting statistical information from several national and regional agencies and internal databases, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare-providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the model...
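
    A hedged sketch of the aggregation step: normalized indicator values are grouped under patient, operations, and employee perspectives and rolled up with weights into a single score per healthcare-providing unit. The indicator names and weights below are invented for illustration; the model in the paper uses a more detailed hierarchy.

```python
def aggregate_score(indicators, hierarchy):
    """Roll normalized indicator values (0-1, higher is better) up a weighted hierarchy.

    indicators: dict mapping indicator name -> normalized value.
    hierarchy: dict mapping perspective -> {"weight": w, "indicators": {name: weight}}.
    Returns (overall_score, per_perspective_scores). Illustrative sketch only.
    """
    perspective_scores = {}
    for perspective, spec in hierarchy.items():
        total_w = sum(spec["indicators"].values())
        perspective_scores[perspective] = sum(
            w * indicators[name] for name, w in spec["indicators"].items()) / total_w
    total_pw = sum(spec["weight"] for spec in hierarchy.values())
    overall = sum(spec["weight"] * perspective_scores[p]
                  for p, spec in hierarchy.items()) / total_pw
    return overall, perspective_scores

# Invented example values:
hierarchy = {
    "patient":    {"weight": 0.4, "indicators": {"satisfaction": 1, "readmission_score": 1}},
    "operations": {"weight": 0.4, "indicators": {"bed_utilization": 1, "length_of_stay_score": 2}},
    "employee":   {"weight": 0.2, "indicators": {"staff_turnover_score": 1}},
}
indicators = {"satisfaction": 0.9, "readmission_score": 0.8, "bed_utilization": 0.7,
              "length_of_stay_score": 0.6, "staff_turnover_score": 0.75}
print(aggregate_score(indicators, hierarchy))
```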

  5. Mejora de defensas antioxidantes mediante ejercicio aeróbico en mujeres con síndrome metabólico

    Directory of Open Access Journals (Sweden)

    Manuel Rosety-Rodríguez

    2012-02-01

    Full Text Available It is now accepted that oxidative damage plays an essential role in the pathogenesis of metabolic syndrome. Recent studies propose oxidative damage as a therapeutic target in metabolic syndrome. Our objective was to improve the total antioxidant status (TAS) of women with metabolic syndrome through aerobic exercise. One hundred women with metabolic syndrome, according to the criteria of the National Cholesterol Education Program (Adult Treatment Panel III), participated voluntarily and were randomly assigned to an experimental group (n = 60) or a control group (n = 40). The experimental group completed a 12-week light/moderate-intensity treadmill aerobic training program (5 sessions/week). Plasma TAS was determined by spectrophotometry using kits marketed by Randox Lab. The protocol was approved by an Institutional Ethics Committee. After completing the training program, TAS increased significantly (0.79 ± 0.05 vs. 1.01 ± 0.03 mmol/l; p = 0.027). There were no changes in the control group. Light/moderate-intensity aerobic exercise increases antioxidant defenses in women with metabolic syndrome. Future longitudinal studies are needed to determine its impact on clinical outcomes.

  6. A proposed benchmark problem for cargo nuclear threat monitoring

    Science.gov (United States)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that, as a gamma ray moves outward from the source, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located at the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
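    As a rough companion to the benchmark description above, the following sketch estimates the uncollided 662 keV (137Cs) flux at a detector 1 m from the point source behind the three shields. It is not the MCNP5 benchmark itself: the shield thicknesses, source activity, and attenuation coefficients are approximate, illustrative assumptions.

```python
# Hedged back-of-envelope sketch (not the MCNP5 benchmark): uncollided 662 keV
# point-source flux behind three slab-like shields. All numbers are illustrative.
import math

source_activity = 3.7e7        # Bq, hypothetical ~1 mCi 137Cs source
gamma_yield = 0.851            # 662 keV photons per decay for 137Cs
distance_cm = 100.0            # detector distance quoted in the benchmark

# Approximate linear attenuation coefficients at 662 keV (cm^-1), rounded values,
# paired with hypothetical slab thicknesses (cm).
shields = [("lead", 1.2, 2.0), ("aluminum", 0.20, 5.0), ("plywood", 0.05, 10.0)]

attenuation = math.exp(-sum(mu * t for _, mu, t in shields))
uncollided_flux = source_activity * gamma_yield * attenuation / (4.0 * math.pi * distance_cm**2)
print(f"Uncollided flux at detector: {uncollided_flux:.3e} photons/cm^2/s")
```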

  7. A proposed benchmark problem for cargo nuclear threat monitoring

    International Nuclear Information System (INIS)

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that, as a gamma ray moves outward from the source, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located at the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  8. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that, as a gamma ray moves outward from the source, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located at the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  9. Influence of the circadian rhythm on physical performance in aerobic and anaerobic exercise. A review

    OpenAIRE

    Bueno Pérez, Ángel Javier

    2015-01-01

    Chronobiology is the science that studies physiological changes governed by circadian rhythms, that is, internal variations that repeat every 24 hours. The aim of this study was to carry out a systematic review of the influence of circadian variability on cardiorespiratory and motor performance, both aerobic and anaerobic. The results of this review indicate that athletic performance is affected ...

  10. Effect of aerobic physical exercise on serum adiponectin and leptin levels in postmenopausal women

    OpenAIRE

    Aranzález, Luz Helena; Mockus Sivickas, Ismena; Ramírez, Doris; Mancera, Erica; García, Óscar

    2011-01-01

    Background. Changes in body weight are accompanied by changes in the circulating levels of adipokines such as adiponectin and leptin. During postmenopause there is a tendency towards weight gain. Physical exercise, which acts on adipose tissue and on cardiovascular risk factors, is recommended as part of the treatment of overweight and obesity. Objective. To determine the effects of controlled aerobic physical exercise on serum levels of ...

  11. Improvement of antioxidant defenses through aerobic exercise in women with metabolic syndrome

    OpenAIRE

    Manuel Rosety-Rodríguez; Antonio Díaz-Ordoñez; Ignacio Rosety; Gabriel Fornieles; Alejandra Camacho-Molina; Natalia García; Miguel Angel Rosety; Francisco J. Ordoñez

    2012-01-01

    It is currently accepted that oxidative damage plays an essential role in the pathogenesis of metabolic syndrome. Recent studies propose oxidative damage as a therapeutic target against metabolic syndrome. Accordingly, our objective was to improve the total antioxidant status (TAS) of women with metabolic syndrome through aerobic exercise. One hundred women with metabolic syndrome, according to the criteria of the National Cholesterol Education Program ...

  12. Increased Uptake of HCV Testing through a Community-Based Educational Intervention in Difficult-to-Reach People Who Inject Drugs: Results from the ANRS-AERLI Study

    Science.gov (United States)

    Roux, Perrine; Rojas Castro, Daniela; Ndiaye, Khadim; Debrus, Marie; Protopopescu, Camélia; Le Gall, Jean-Marie; Haas, Aurélie; Mora, Marion; Spire, Bruno; Suzan-Monti, Marie; Carrieri, Patrizia

    2016-01-01

    Aims The community-based AERLI intervention provided training and education to people who inject drugs (PWID) about HIV and HCV transmission risk reduction, with a focus on drug injecting practices, other injection-related complications, and access to HIV and HCV testing and care. We hypothesized that in such a population where HCV prevalence is very high and where few know their HCV serostatus, AERLI would lead to increased HCV testing. Methods The national multisite intervention study ANRS-AERLI consisted in assessing the impact of an injection-centered face-to-face educational session offered in volunteer harm reduction (HR) centers (“with intervention”) compared with standard HR centers (“without intervention”). The study included 271 PWID interviewed on three occasions: enrolment, 6 and 12 months. Participants in the intervention group received at least one face-to-face educational session during the first 6 months. Measurements The primary outcome of this analysis was reporting to have been tested for HCV during the previous 6 months. Statistical analyses used a two-step Heckman approach to account for bias arising from the non-randomized clustering design. This approach identified factors associated with HCV testing during the previous 6 months. Findings Of the 271 participants, 127 and 144 were enrolled in the control and intervention groups, respectively. Of the latter, 113 received at least one educational session. For the present analysis, we selected 114 and 88 participants eligible for HCV testing in the control and intervention groups, respectively. In the intervention group, 44% of participants reported having being tested for HCV during the previous 6 months at enrolment and 85% at 6 months or 12 months. In the control group, these percentages were 51% at enrolment and 78% at 12 months. Multivariable analyses showed that participants who received at least one educational session during follow-up were more likely to report HCV testing
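    The two-step Heckman idea mentioned in the methods can be sketched generically as follows: a probit selection equation, the inverse Mills ratio computed from its fitted index, and an outcome equation augmented with that ratio. The synthetic data and the continuous outcome are assumptions for illustration; this is not the actual ANRS-AERLI model.

```python
# Hedged sketch of a two-step Heckman-type correction on synthetic data
# (generic continuous outcome, not the actual ANRS-AERLI analysis).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                       # instrument driving selection only
x = rng.normal(size=n)                       # covariate in the outcome equation
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
selected = (0.5 + 1.0 * z + u > 0).astype(int)
y = 1.0 + 2.0 * x + e                        # outcome, observed only if selected

# Step 1: probit for selection, then inverse Mills ratio at the fitted index.
probit = sm.Probit(selected, sm.add_constant(np.column_stack([z, x]))).fit(disp=0)
index = probit.model.exog @ probit.params
mills = norm.pdf(index) / norm.cdf(index)

# Step 2: outcome regression on the selected sample, augmented with the ratio.
mask = selected == 1
X2 = sm.add_constant(np.column_stack([x[mask], mills[mask]]))
ols = sm.OLS(y[mask], X2).fit()
print(ols.params)   # intercept, x effect, selection-correction coefficient
```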

  13. LITMUS: An Open Extensible Framework for Benchmarking RDF Data Management Solutions

    OpenAIRE

    Thakkar, Harsh; Dubey, Mohnish; Sejdiu, Gezim; Ngomo, Axel-Cyrille Ngonga; Debattista, Jeremy; Lange, Christoph; Lehmann, Jens; Auer, Sören; Vidal, Maria-Esther

    2016-01-01

    Developments in the context of Open, Big, and Linked Data have led to an enormous growth of structured data on the Web. To keep pace with the efficient consumption and management of data at this rate, many data management solutions have been developed for specific tasks and applications. We present LITMUS, a framework for benchmarking data management solutions. LITMUS goes beyond classical storage benchmarking frameworks by allowing for analysing the performance of frameworks across...

  14. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  15. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
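    A minimal sketch of the temporal knockout idea follows: simulate spread over a time-ordered contact list, then score each node by how much its removal reduces the average outbreak size. The toy edge list, SI-style spread (no recovery), transmission probability, and seed-averaging protocol are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of a temporal knockout (TKO) style score: how much does removing
# one node reduce the mean final outbreak size of a simple SI-style spread over
# timestamped contacts?
import random

edges = [  # (time, u, v) toy temporal edge list; illustrative only
    (0, "a", "b"), (1, "b", "c"), (1, "a", "d"), (2, "c", "e"),
    (3, "d", "e"), (4, "e", "f"), (5, "b", "f"),
]
nodes = {n for _, u, v in edges for n in (u, v)}

def outbreak_size(edges, seed, removed=None, beta=0.8, runs=200, rng=random.Random(1)):
    """Mean final number infected for an SI-style spread seeded at `seed`."""
    removed = removed or set()
    if seed in removed:
        return 0.0
    total = 0
    for _ in range(runs):
        infected = {seed}
        for _, u, v in sorted(edges):                 # process contacts in time order
            if u in removed or v in removed:
                continue
            if u in infected and v not in infected and rng.random() < beta:
                infected.add(v)
            elif v in infected and u not in infected and rng.random() < beta:
                infected.add(u)
        total += len(infected)
    return total / runs

baseline = sum(outbreak_size(edges, s) for s in nodes) / len(nodes)
tko = {n: baseline - sum(outbreak_size(edges, s, removed={n}) for s in nodes) / len(nodes)
       for n in nodes}
print(sorted(tko.items(), key=lambda kv: -kv[1]))      # largest drop = most influential
```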

  16. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  17. Validation of gadolinium burnout using PWR benchmark specification

    Energy Technology Data Exchange (ETDEWEB)

    Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2014-07-01

    Graphical abstract: - Highlights: • We present a methodology for validation of gadolinium burnout in a PWR. • We model a 17 × 17 PWR fuel assembly using the MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E ratios for Gd154, Gd156, Gd157, Gd158 and Gd160 show good agreement, within ±10%. • The C/E ratios for Gd152 and Gd155 show poor agreement, outside ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code (MCB). The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It appears that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally, we show the time evolutions of the infinite neutron multiplication factor Kinf, FIMA burnup, U235 and Gd155–Gd158. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers owing to the large neutron absorption cross-sections of Gd155 and Gd157.
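    The C/E comparison logic referred to in the highlights can be sketched in a few lines: calculated-to-experimental concentration ratios checked against a ±10% agreement band. The isotope concentrations below are made-up placeholders, not the Ohi-2 measurements.

```python
# Hedged sketch: C/E ratios for Gd isotopes with a +/-10% agreement band.
# Concentrations are placeholder values, not the measured Ohi-2 data.
measured   = {"Gd154": 1.02e-2, "Gd155": 3.10e-3, "Gd156": 5.40e-2,
              "Gd157": 2.90e-3, "Gd158": 6.80e-2, "Gd160": 2.20e-2}
calculated = {"Gd154": 1.05e-2, "Gd155": 2.60e-3, "Gd156": 5.20e-2,
              "Gd157": 3.05e-3, "Gd158": 7.10e-2, "Gd160": 2.15e-2}

for iso, e in measured.items():
    ce = calculated[iso] / e
    flag = "ok" if abs(ce - 1.0) <= 0.10 else "outside +/-10%"
    print(f"{iso}: C/E = {ce:.3f} ({flag})")
```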

  18. Validation of gadolinium burnout using PWR benchmark specification

    International Nuclear Information System (INIS)

    Graphical abstract: - Highlights: • We present a methodology for validation of gadolinium burnout in a PWR. • We model a 17 × 17 PWR fuel assembly using the MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E ratios for Gd154, Gd156, Gd157, Gd158 and Gd160 show good agreement, within ±10%. • The C/E ratios for Gd152 and Gd155 show poor agreement, outside ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code (MCB). The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It appears that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally, we show the time evolutions of the infinite neutron multiplication factor Kinf, FIMA burnup, U235 and Gd155–Gd158. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers owing to the large neutron absorption cross-sections of Gd155 and Gd157.

  19. Deliverable 1.2 Specification of industrial benchmark tests

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Ravn, Bjarne Gottlieb

    Technical report for the Growth project: IMPRESS, Improvement of precision in forming by simultaneous modelling of deflections in workpiece-die-press system - Output from WP1: Numerical simulation of deflections in workpiece-die-press system....

  20. Earnings Benchmarks in International hotel firms

    Directory of Open Access Journals (Sweden)

    Laura Parte Esteban

    2011-11-01

    Full Text Available This paper focuses on earnings management around earnings benchmarks (the avoiding-losses and earnings-decreases hypothesis) in international and non-international firms belonging to the Spanish hotel industry. First, frequency histograms are used to determine the existence of a discontinuity in earnings in both segments. Second, the use of discretionary accruals as a tool to meet earnings benchmarks is analysed in international and non-international firms. Empirical evidence shows that both international and non-international firms meet earnings benchmarks. Differences in behaviour between international and non-international firms are also noted.
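    The frequency-histogram test mentioned above can be sketched as follows: bin scaled earnings around zero and compare the counts just below and just above the benchmark. The synthetic data, bin width, and simple neighbour comparison are illustrative choices, not the paper's exact test.

```python
# Hedged sketch: look for a discontinuity at zero in an earnings histogram.
import numpy as np

rng = np.random.default_rng(42)
earnings = rng.normal(0.0, 0.05, 5000)
small_loss = (earnings > -0.01) & (earnings < 0.0)
earnings[small_loss] = np.abs(earnings[small_loss])   # mimic small-loss avoidance

width = 0.005
bins = np.arange(-0.05, 0.05 + width, width)
counts, _ = np.histogram(earnings, bins=bins)
i_zero = int(np.argmin(np.abs(bins)))        # index of the bin edge closest to zero
below, above = counts[i_zero - 1], counts[i_zero]
neighbours = (counts[i_zero - 2] + counts[i_zero + 1]) / 2.0
print(f"just below 0: {below}, just above 0: {above}, neighbours' mean: {neighbours:.1f}")
```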

  1. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillation observed at Boiling Water Reactor plants. Stability is part of Northern States Power Reload Safety Evaluation of the Monticello Plant

  2. Big Data in AER

    Science.gov (United States)

    Kregenow, Julia M.

    2016-01-01

    Penn State University teaches Introductory Astronomy to more undergraduates than any other institution in the U.S. Using a standardized assessment instrument, we have pre-/post- tested over 20,000 students in the last 8 years in both resident and online instruction. This gives us a rare opportunity to look for long term trends in the performance of our students during a period in which online instruction has burgeoned.

  3. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    Full Text Available The aim of the article. The consideration of theoretical and practical aspects of benchmarking at universities is the main task of this research. First, the researcher identified the essence of benchmarking. It involves comparing the characteristics of a college or university with those of leading competitors in the industry and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom solutions are borrowed. Benchmarking therefore involves self-evaluation, including the systematic collection of data and information with a view to making relevant comparisons of strengths and weaknesses of performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvements, monitors progress, reviews benefits and assures adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: (1) identification of the study object; (2) identification of businesses for comparison; (3) selection of data collection methods; (4) determining variations in terms of efficiency and determination of the levels of future results; (5) communicating the results of benchmarking; (6) development of an implementation plan, initiating the implementation, monitoring implementation; (7) definition of new benchmarks. The researcher gave the results of practical use of the benchmarking algorithm at universities. In particular, monitoring and SWOT analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the potential for benchmarking of universities were: (1) the presence of new teaching methods at universities; (2) the involvement of foreign lecturers and partners from other universities for cooperation; (3) promoting education services for target groups; (4) violation of

  4. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  5. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    Science.gov (United States)

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast ever used for the bioremediation of environmental cadmium (Cd(+2)); it could remove at most 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at laboratory and large scales, respectively. The biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which indicated a decrease in Cd(+2) biosorption with increasing concentrations of competing metal ions, attributed to their higher electronegativity and larger radius. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increased cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to withstand the oxidative stress generated by Cd(+2). Our experimental data were consistent with the Langmuir as well as the Freundlich isotherm models. The biosorption by the yeast obeyed a pseudo-second-order kinetic model, which makes it an effective biosorbent for Cd(+2). The high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising basis for green chemistry approaches to eliminating environmental Cd(+2). PMID:27268792
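    Since the data are reported to follow the Langmuir and Freundlich isotherms, a generic curve-fitting sketch for the Langmuir form q = qmax·K·C/(1 + K·C) is shown below; the equilibrium data points are synthetic placeholders, not the reported measurements.

```python
# Hedged sketch: fitting a Langmuir isotherm q = qmax*K*C/(1 + K*C) to synthetic
# equilibrium biosorption data (numbers are placeholders, not the paper's data).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qmax, k):
    return qmax * k * c / (1.0 + k * c)

c_eq = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])     # residual Cd(+2) (mM)
q_eq = np.array([4.8, 8.6, 13.9, 18.7, 23.5, 26.1])  # uptake (mM/g), synthetic

(qmax, k), _ = curve_fit(langmuir, c_eq, q_eq, p0=[25.0, 0.5])
ss_res = np.sum((q_eq - langmuir(c_eq, qmax, k)) ** 2)
ss_tot = np.sum((q_eq - q_eq.mean()) ** 2)
print(f"qmax = {qmax:.1f} mM/g, K = {k:.2f} L/mM, R^2 = {1 - ss_res/ss_tot:.3f}")
```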

  6. Resistance and uptake of cadmium by yeast, Pichia hampshirensis 4Aer, isolated from industrial effluent and its potential use in decontamination of wastewater.

    Science.gov (United States)

    Khan, Zaman; Rehman, Abdul; Hussain, Syed Z

    2016-09-01

    Pichia hampshirensis 4Aer is the first yeast ever used for the bioremediation of environmental cadmium (Cd(+2)); it could remove at most 22 mM/g and 28 mM/g Cd(+2) from aqueous medium at laboratory and large scales, respectively. The biosorption was found to be a function of temperature, solution pH, initial Cd(+2) concentration and biomass dosage. Competitive biosorption was investigated in binary and multi-metal systems, which indicated a decrease in Cd(+2) biosorption with increasing concentrations of competing metal ions, attributed to their higher electronegativity and larger radius. FTIR analysis revealed the active participation of amide and carbonyl moieties in Cd(+2) adsorption, confirmed by EDX analysis. Electron micrographs further indicated surface adsorption and an increased cell size due to intracellular Cd(+2) accumulation. Cd(+2) induced some metal-binding proteins as well as a prodigious increase in glutathione and other non-protein thiol levels, which is crucial for the yeast to withstand the oxidative stress generated by Cd(+2). Our experimental data were consistent with the Langmuir as well as the Freundlich isotherm models. The biosorption by the yeast obeyed a pseudo-second-order kinetic model, which makes it an effective biosorbent for Cd(+2). The high bioremediation potential and the spontaneity and feasibility of the process make P. hampshirensis 4Aer a promising basis for green chemistry approaches to eliminating environmental Cd(+2).

  7. Assessment of the biotreatability of cellulose pulp bleaching effluent by aerobic and anaerobic processes

    Directory of Open Access Journals (Sweden)

    Míriam Cristina Santos Amaral

    2013-09-01

    Full Text Available Effluents from the bleaching plant of kraft pulp production contain, in addition to high concentrations of organic matter in terms of Chemical Oxygen Demand (COD) and Biochemical Oxygen Demand (BOD) and high colour, compounds of high toxicity, which makes the treatment of these effluents problematic. The objective of this article is to assess the biotreatability of the acid and alkaline bleaching effluents from kraft pulp by aerobic and anaerobic processes through characterization using conventional and collective parameters. The results for inert COD, aerobic and anaerobic biodegradability, molar mass distribution, soluble microbial products and extracellular polymeric substances indicated the low biotreatability of these effluents.

  8. Aerobic capacity training through aquatic therapy in children with spastic diplegic cerebral palsy

    OpenAIRE

    Nandy Fajardo-López; Fabiola Moscoso-Alvarado

    2013-01-01

    Background. Spastic diplegic cerebral palsy produces changes in the cardiovascular system that affect aerobic capacity. Aquatic therapy is an optimal therapeutic strategy both for managing this population and for training aerobic capacity, because of the physiological responses it elicits and because it makes it easier to impose greater loads on the cardiovascular system with lower risk than on land. Objective. To identify the characteristics that ...

  9. Effects of the distribution and sequencing of different training tasks on the improvement of aerobic endurance

    OpenAIRE

    Clemente Suárez, Vicente Javier

    2010-01-01

    Numerous authors have investigated the effect of different training programmes on the performance of endurance athletes, but little has been studied on the effect of the distribution and sequencing of training tasks on the improvement of aerobic endurance, in terms of aerobic performance, spirometric variables, explosive and isokinetic leg strength parameters, recovery, and central nervous system fatigue. This doctoral thesis therefore aims to analyse...

  10. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  11. Tethered power at maximal lactate steady state and indices of aerobic swimming performance

    Directory of Open Access Journals (Sweden)

    Dalton Müller Pessôa Filho

    2014-10-01

    Full Text Available INTRODUCTION: The present study investigated whether tethered swimming constitutes a valid setting for the aerobic evaluation of swimmers. OBJECTIVE: To analyse the relationship between the power at maximal lactate steady state in tethered swimming (PAtadaMFEL) and the corresponding index in free swimming (velocity at maximal lactate steady state, vMFEL), as well as other indices of aerobic fitness and front-crawl performance. METHODS: Ten swimmers (16.6 ± 1.4 years) underwent estimates of: (a) critical tethered power (transformation of the asymptote of the hyperbolic load versus time-to-exhaustion model, the tethered critical load, CargaCríticaAtada); (b) PAtadaMFEL and vMFEL (3 or 4 efforts of 30 minutes between 95 and 105% of the tethered critical load and between 85 and 95% of the maximal 400 m velocity, respectively); (c) an incremental test (79-100% of v400m, with 3% increments) to determine the velocity at the inflection point (vPI); and (d) performance tests over the distances of 400 (v400m), 800 (v800m) and 1500 (v1500m) metres. Pearson and variance coefficients were used to analyse the correlations among the aerobic parameters and between them and performance. Bland-Altman analysis was used to assess the agreement among the blood lactate concentrations in the aerobic assessments. RESULTS: PAtadaMFEL (89.2 ± 15.1 W) explained the variance in performance for v400m (1.29 ± 0.11 m.s-1, R2 = 0.700), v800m (1.23 ± 0.12 m.s-1, R2 = 0.770) and v1500m (1.21 ± 0.12 m.s-1, R2 = 0.698) as well as vMFEL (1.17 ± 0.11 m.s-1) and vPI (1.19 ± 0.11 m.s-1) did. The blood lactate concentrations at PAtadaMFEL, vMFEL and vPI did not differ from one another and fell within the limits of agreement. CONCLUSION: The maximal lactate steady state applied to tethered swimming proved to be a valid and promising approach for the aerobic evaluation of swimmers.
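    The tethered critical load used in the methods comes from the hyperbolic load versus time-to-exhaustion model, t_lim = W'/(P - CP), whose asymptote is the critical power CP. A generic fitting sketch follows, with invented load/time pairs rather than the swimmers' data.

```python
# Hedged sketch: estimating critical power (CP) and the curvature constant (W')
# from the hyperbolic model t_lim = Wprime / (P - CP), via its linear form
# P = CP + Wprime * (1 / t_lim). Load/time pairs below are illustrative only.
import numpy as np

loads = np.array([110.0, 100.0, 95.0])        # tethered load / power (W)
t_lim = np.array([150.0, 280.0, 540.0])       # time to exhaustion (s)

slope, intercept = np.polyfit(1.0 / t_lim, loads, 1)
wprime, cp = slope, intercept                  # slope = W' (J), intercept = CP (W)
print(f"CP (asymptote) = {cp:.1f} W, W' = {wprime:.0f} J")
```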

  12. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  13. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  14. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...
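    Two of the classical benchmark cases referred to above can be written out explicitly, taking the second good as numeraire (p_y = 1). These are standard textbook forms given for orientation, not necessarily the exact functions derived in the paper: a quasilinear utility gives the first good zero income elasticity at interior solutions, and a Cobb-Douglas utility gives unit income elasticity.

```latex
% Two standard benchmark utility functions (textbook forms, for illustration).
% Quasilinear utility: good x has zero income elasticity at interior solutions.
\[
  U(x,y) = v(x) + y, \qquad x^{*}(p_x, m) = (v')^{-1}(p_x)\ \text{(independent of income } m\text{)}.
\]
% Cobb-Douglas utility: both goods have unit income elasticity.
\[
  U(x,y) = x^{\alpha} y^{1-\alpha}, \qquad x^{*}(p_x, m) = \frac{\alpha m}{p_x}, \quad 0 < \alpha < 1.
\]
```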

  15. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  16. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  17. Benchmarking implementations of lazy functional languages

    OpenAIRE

    Hartel, P.H.; Langendoen, K. G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of the quality of compilers for different lazy functional languages. Aspects studied include compile time, execution time, and ease of programming, determined by the availability of certain key features

  18. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
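    A minimal sketch of the kind of scoring metric discussed above: per-variable model-benchmark mismatches normalized by observed variability and combined into one weighted skill score. The variables, weights, and scoring rule are illustrative assumptions, not the framework's prescribed metric.

```python
# Hedged sketch: combine per-variable model-benchmark mismatches into one score.
import numpy as np

obs = {"gpp": np.array([2.1, 2.4, 3.0, 2.8]),    # benchmark data (illustrative)
       "et":  np.array([1.1, 1.3, 1.6, 1.5])}
sim = {"gpp": np.array([1.9, 2.6, 2.7, 3.1]),    # model output (illustrative)
       "et":  np.array([1.0, 1.2, 1.8, 1.4])}
weights = {"gpp": 0.6, "et": 0.4}

def variable_score(o, s):
    """1 = perfect match, 0 = RMSE as large as the observed variability."""
    rmse = np.sqrt(np.mean((s - o) ** 2))
    return max(0.0, 1.0 - rmse / o.std())

total = sum(weights[v] * variable_score(obs[v], sim[v]) for v in obs)
print(f"overall benchmark score: {total:.2f}")
```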

  19. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  20. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons for considering so relevant this basic phenomenon, coped with by classical treatments dated in the first decades of the last century, there is the interest for developing updated CFD models for reactor containment analysis, requiring validating at a different level the available modelling techniques. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired at the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected by the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required to change the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  1. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  2. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (10 seconds) ranges of performance. PMID:24174581
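    Using the two candidate benchmarks reported in the abstract (6 and 8 seconds), trichotomizing walking times reduces to a trivial categorization; the patient times in the sketch are invented for illustration.

```python
# Hedged sketch: assign patients to T25FW benchmark ranges (<6, 6-7.99, >=8 s).
t25fw_seconds = [4.2, 5.9, 6.5, 7.8, 8.3, 12.0]      # invented example times

def t25fw_category(t):
    if t < 6.0:
        return "<6 s"
    if t < 8.0:
        return "6-7.99 s"
    return ">=8 s"

for t in t25fw_seconds:
    print(f"{t:5.1f} s -> {t25fw_category(t)}")
```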

  3. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  4. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent results obtained during PhD research (being actually a part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and seriously needed.
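    The measurement core of such a benchmark tool can be sketched in a few lines: time a mix of read and write operations against a pluggable client and report throughput and tail latency. The in-memory dictionary stand-in and the 80/20 operation mix are assumptions for illustration; a real run would plug in an actual SQL or NoSQL driver.

```python
# Hedged sketch of a generic benchmark loop: throughput and p95 latency for a
# simple read/write mix against a pluggable key-value client (dict stand-in here).
import random
import statistics
import time

class DictClient:                      # stand-in for a real database driver
    def __init__(self):
        self.store = {}
    def put(self, k, v):
        self.store[k] = v
    def get(self, k):
        return self.store.get(k)

def run_benchmark(client, n_ops=100_000, read_ratio=0.8, rng=random.Random(7)):
    latencies = []
    start = time.perf_counter()
    for i in range(n_ops):
        key = f"user{rng.randrange(10_000)}"
        t0 = time.perf_counter()
        if rng.random() < read_ratio:
            client.get(key)
        else:
            client.put(key, {"field": i})
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    p95 = statistics.quantiles(latencies, n=100)[94]   # 95th percentile latency
    return n_ops / elapsed, p95

ops_per_s, p95 = run_benchmark(DictClient())
print(f"throughput: {ops_per_s:,.0f} ops/s, p95 latency: {p95 * 1e6:.1f} us")
```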

  5. Benchmarking of hospital information systems – a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing costs and quality application systems, physical data processing systems, organizational structures of information management and IT services processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  6. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government and a comprehensive effort is done in improving quality and efficiency. This has led to an initiated governmental effort in bringing benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking...... perceptions of benchmarking will be presented; public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to enhance which effects, possibilities and challenges that follow in the wake of using this kind...

  7. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  8. Organizational and economic aspects of benchmarking innovative products at the automobile industry enterprises

    Directory of Open Access Journals (Sweden)

    L.M. Taraniuk

    2016-06-01

    Full Text Available The aim of the article. The aim of the article is to determine the nature and characteristics of the use of benchmarking in the activity of domestic automobile industry enterprises under current economic conditions. The results of the analysis. The article defines the concept of benchmarking, examines the stages of benchmarking, and determines the efficiency of benchmarking in the work of automakers. The historical aspects of the emergence of the benchmarking method in the world economy are considered. The economic aspects of benchmarking in the work of automobile industry enterprises are determined. The stages of benchmarking of innovative products are analysed in the context of the modern development of the productive forces and the impact of market factors on the economic activities of companies, including automobile industry enterprises. Attention is focused on the specifics of implementing benchmarking at automobile industry companies. Statistics on the number of owners of electric vehicles worldwide are considered. The authors studied the market for electric vehicles in Ukraine. The need to use benchmarking to improve the competitiveness of the national automobile industry, especially CJSC “Zaporizhia Automobile Building Plant”, is also considered. The authors suggested reasonable steps for its improvement. The authors improved a methodical approach to the selection of vehicles with the best technical parameters based on benchmarking which, unlike existing approaches, is based on the calculation of an integral factor of the technical specifications of the vehicles in order to identify the most competitive products of the automobile industry companies among those evaluated. The main indicators of the national production of electric vehicles are shown. Attention is paid to important development paths for CJSC “Zaporizhia Automobile Building Plant”, where the authors established the aspects that need attention in the management of the

  9. Cause-specific long-term mortality in survivors of childhood cancer in Switzerland: A population-based study.

    Science.gov (United States)

    Schindler, Matthias; Spycher, Ben D; Ammann, Roland A; Ansari, Marc; Michel, Gisela; Kuehni, Claudia E

    2016-07-15

    Survivors of childhood cancer have a higher mortality than the general population. We describe cause-specific long-term mortality in a population-based cohort of childhood cancer survivors. We included all children diagnosed with cancer in Switzerland (1976-2007) at age 0-14 years, who survived ≥5 years after diagnosis and followed survivors until December 31, 2012. We obtained causes of death (COD) from the Swiss mortality statistics and used data from the Swiss general population to calculate age-, calendar year-, and sex-standardized mortality ratios (SMR), and absolute excess risks (AER) for different COD, by Poisson regression. We included 3,965 survivors and 49,704 person years at risk. Of these, 246 (6.2%) died, which was 11 times higher than expected (SMR 11.0). Mortality was particularly high for diseases of the respiratory (SMR 14.8) and circulatory system (SMR 12.7), and for second cancers (SMR 11.6). The pattern of cause-specific mortality differed by primary cancer diagnosis, and changed with time since diagnosis. In the first 10 years after 5-year survival, 78.9% of excess deaths were caused by recurrence of the original cancer (AER 46.1). Twenty-five years after diagnosis, only 36.5% (AER 9.1) were caused by recurrence, 21.3% by second cancers (AER 5.3) and 33.3% by circulatory diseases (AER 8.3). Our study confirms an elevated mortality in survivors of childhood cancer for at least 30 years after diagnosis with an increased proportion of deaths caused by late toxicities of the treatment. The results underline the importance of clinical follow-up continuing years after the end of treatment for childhood cancer. PMID:26950898
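    The SMR and AER quantities used in the study follow directly from observed and expected deaths. The sketch below uses invented stratum counts; the per-10,000 person-year scaling of the AER is a common convention assumed here, not a figure taken from the paper.

```python
# Hedged sketch: standardized mortality ratio (SMR) and absolute excess risk (AER)
# from observed deaths, reference rates and person-years. All numbers are invented.
strata = [  # (person-years in cohort, reference death rate per person-year, observed deaths)
    (12_000, 0.00020, 9),
    (20_000, 0.00035, 21),
    (17_704, 0.00060, 24),
]

person_years = sum(py for py, _, _ in strata)
observed = sum(obs for _, _, obs in strata)
expected = sum(py * rate for py, rate, _ in strata)

smr = observed / expected
aer = (observed - expected) / person_years * 10_000   # excess deaths per 10,000 person-years
print(f"SMR = {smr:.1f}, AER = {aer:.1f} per 10,000 person-years")
```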

  10. Antioxidant system activity and aerenchyma formation in 'Saracura' maize roots

    Directory of Open Access Journals (Sweden)

    Fabricio José Pereira

    2010-05-01

    Full Text Available This work aimed to assess the influence of successive selection cycles of 'Saracura' maize on the activity of antioxidant system enzymes and the relationship of these enzymes with the capacity of this variety to develop aerenchyma. Seeds of 18 intercalated selection cycles of 'Saracura' maize and of the cultivar BR 107, which is sensitive to hypoxia, were sown in pots in a greenhouse. The plants were submitted to intermittent flooding every two days. Root samples were collected after 60 days, and the activities of the enzymes guaiacol peroxidase, ascorbate peroxidase and catalase were analysed, as well as the capacity of the plants of each cycle to develop aerenchyma. Over the cycles, the plants showed changes in enzyme activity, with an increase in ascorbate peroxidase activity and decreases in catalase and guaiacol peroxidase activities. A greater capacity to develop aerenchyma was also observed in the later selection cycles. The reduction in the activity of the antioxidant system enzymes appears to be related to an imbalance in H2O2 decomposition.

  11. Aerobic fitness and the amplitude of exercise intensity domains in cycling

    Directory of Open Access Journals (Sweden)

    Renato Aparecido Corrêa Caritá

    2013-08-01

    Full Text Available INTRODUCTION: The determination of exercise intensity domains has important implications for aerobic training prescription and for the design of experimental studies. OBJECTIVE: To analyse the effects of aerobic fitness level on the amplitude of the exercise intensity domains during cycling. METHODS: Twelve cyclists (CIC), 11 runners (COR) and eight untrained individuals (NT) performed the following protocols on different days: 1) an incremental test to determine the lactate threshold (LL), the maximal oxygen uptake (VO2max) and its corresponding intensity (IVO2max); 2) three constant-load tests to exhaustion at 95, 100 and 110% of IVO2max to determine the critical power (CP); 3) tests to exhaustion to determine the upper boundary of the severe domain (Isup). The amplitudes of the domains (moderate, heavy, severe < Isup) were expressed as a percentage of Isup (VO2). RESULTS: The amplitude of the moderate domain was similar between CIC (52 ± 8%) and COR (47 ± 4%) and significantly greater in CIC than in NT (41 ± 7%). The heavy domain was significantly smaller in CIC (17 ± 6%) than in COR (27 ± 6%) and NT (27 ± 9%). No significant differences were found for the severe domain between CIC (31 ± 7%), COR (26 ± 5%) and NT (31 ± 7%). CONCLUSION: The heavy exercise domain is the most sensitive to changes determined by the level of aerobic fitness, and the principle of movement specificity must be observed when a high degree of physiological adaptation is intended.

  12. Aerobic training prior to nerve compression: analysis of rat muscle morphometry

    Directory of Open Access Journals (Sweden)

    Elisangela Lourdes Artifon

    2013-02-01

    Full Text Available INTRODUCTION: Sciatica originates from compression of the sciatic nerve and results in pain, paraesthesia, decreased muscle strength and hypotrophy. Physical exercise is recognised for the prevention and rehabilitation of injuries, but under overload it can increase the risk of injury and consequent functional deficit. OBJECTIVE: To evaluate the effects of aerobic training performed prior to an experimental model of sciatica on morphometric parameters of the soleus muscles of rats. MATERIALS AND METHODS: 18 rats were divided into three groups: sham (immersion, 30 seconds); regular exercise (swimming, ten minutes daily); and progressive aerobic training (swimming for progressively longer periods, from ten to 60 minutes daily). At the end of six weeks of exercise, the rats were submitted to the experimental model of sciatica. On the third day after the injury, they were euthanised and their soleus muscles were dissected, weighed and prepared for histological analysis. The variables analysed were muscle weight, cross-sectional area and mean diameter of the muscle fibres. RESULTS: A statistically significant difference was observed in all groups when the control muscle was compared with the muscle subjected to sciatic injury. The between-group analysis showed no statistically significant difference for any of the variables analysed. CONCLUSION: Neither regular physical exercise nor aerobic training produced preventive or aggravating effects on the muscular consequences of functional inactivity after sciatica.

  13. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

    Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experiment as well as MCNP values in this study. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation study of the SRAC2006 code system based on evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of TRIGA Mark-II Research Reactor at AERE, Bangladesh. This study is achieved through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature 20 °C based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases, the values of integral parameters demonstrate a good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for TRX and BAPL lattices in fast and thermal energy range respectively are compared between the above libraries and it was found that the group constants are identical with very insignificant difference. Therefore, this analysis reflects the validation study of the SRAC2006 code system based on evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also be essential to implement further neutronics calculations of

  14. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  15. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  16. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including the advantages and limitations are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments including regulations should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.

  17. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize characteristics, objectives, advantages of internal benchmarking. The stages sequence of internal benchmarking technology is formed. It is focused on continuous improvement of process of the enterprise by implementing existing best practices.The results of the analysis. Business activity of domestic enterprises in crisis business environment has to focus on the best success factors of their structural units by using standard research assessment of their performance and their innovative experience in practice. Modern method of those needs satisfying is internal benchmarking. According to Bain & Co internal benchmarking is one the three most common methods of business management.The features and benefits of benchmarking are defined in the article. The sequence and methodology of implementation of individual stages of benchmarking technology projects are formulated.The authors define benchmarking as a strategic orientation on the best achievement by comparing performance and working methods with the standard. It covers the processes of researching, organization of production and distribution, management and marketing methods to reference objects to identify innovative practices and its implementation in a particular business.Benchmarking development at domestic enterprises requires analysis of theoretical bases and practical experience. Choice best of experience helps to develop recommendations for their application in practice.Also it is essential to classificate species, identify characteristics, study appropriate areas of use and development methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency processes and activities which are available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  18. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.
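
    The first-tier screening described above amounts to comparing measured media concentrations against the benchmark concentrations. A minimal sketch of that comparison, using a hazard-quotient style flag with invented benchmark values and site concentrations, might look as follows.

```python
# Sketch of the tier-1 screening comparison described above: a chemical is
# retained for further assessment when its measured concentration exceeds the
# toxicological benchmark. All numbers are invented for illustration.

benchmarks_mg_per_kg = {"cadmium": 0.77, "lead": 4.7, "zinc": 8.8}   # hypothetical benchmark values
measured_mg_per_kg = {"cadmium": 0.30, "lead": 12.0, "zinc": 5.1}    # hypothetical site data

for chemical, benchmark in benchmarks_mg_per_kg.items():
    hq = measured_mg_per_kg[chemical] / benchmark   # hazard quotient
    flag = "retain for baseline risk assessment" if hq > 1 else "screen out"
    print(f"{chemical}: HQ = {hq:.2f} -> {flag}")
```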

  19. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  20. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a ''reference trajectory'', similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
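
    The factorization described above (entanglement after a given time as the product of the initial entanglement and a factor given by the evolution of a maximally entangled state) can be written schematically as below. This is a sketch of the one-sided-channel evolution equation for two-qubit concurrence under the stated assumptions, not a formula quoted from the thesis.

```latex
% Schematic form of the evolution equation described above, assuming a
% two-qubit system with one subsystem exposed to a channel $\Lambda$ and a
% pure initial state $|\chi\rangle$; $C$ denotes the concurrence and
% $|\phi^{+}\rangle$ a maximally entangled state.
\[
  C\bigl[(\Lambda \otimes \mathbb{1})\,|\chi\rangle\langle\chi|\bigr]
  \;=\;
  C\bigl[|\chi\rangle\langle\chi|\bigr]\,
  C\bigl[(\Lambda \otimes \mathbb{1})\,|\phi^{+}\rangle\langle\phi^{+}|\bigr] .
\]
```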

  1. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a ''reference trajectory'', similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  2. Analysis of ANS LWR physics benchmark problems.

    Energy Technology Data Exchange (ETDEWEB)

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the k_eff and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the k_eff and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  3. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  4. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
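
    Energy benchmarks of the kind applied above are typically expressed as specific energy consumption, for example kWh per population equivalent (PE) and year. A minimal sketch of such a comparison is given below; the plant figures and the target value are invented, since real benchmark values depend on plant size class and treatment process.

```python
# Sketch of an energy benchmark comparison of the kind described above.
# Plant data and the target value are invented; actual benchmarks vary with
# plant size class and process layout.

plants = {
    "Plant A": {"kwh_per_year": 2_100_000, "population_equivalent": 50_000},
    "Plant B": {"kwh_per_year": 900_000, "population_equivalent": 30_000},
}
target_kwh_per_pe_year = 30.0   # hypothetical benchmark value

for name, data in plants.items():
    specific = data["kwh_per_year"] / data["population_equivalent"]
    gap = specific - target_kwh_per_pe_year
    status = "above benchmark (optimisation potential)" if gap > 0 else "meets benchmark"
    print(f"{name}: {specific:.1f} kWh/(PE*a) -> {status}")
```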

  5. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  6. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...

  7. Effect of an ACT intervention on aerobic endurance and experiential avoidance in race walkers

    Directory of Open Access Journals (Sweden)

    María Clara Rodríguez Salazar

    2015-12-01

    Full Text Available The purpose of this study was to identify the effect of an Acceptance and Commitment Therapy (ACT) intervention on aerobic endurance and experiential avoidance behaviour in a group of race walkers from Bogotá. A pretest-posttest design with a control group was used. The sample consisted of ten race walkers of both sexes, with a mean age of 16.70 years and an age range of 15 to 20 years, belonging to the Bogotá Athletics League and selected by convenience. The 3000 m test and the Acceptance and Action Questionnaire (AAQ) were used as measurement instruments. The ACT intervention was delivered in four sessions covering the contents defined by the authors of the intervention (Wilson and Luciano, 2002). Non-parametric statistics, using the Mann-Whitney U test, were employed for data analysis. The results indicate greater aerobic endurance in the 3000 m test at posttest in the experimental group compared with the control group, as well as greater acceptance of negative internal events.

  8. Influence of aerobic training on the pathophysiological mechanisms of systemic arterial hypertension

    Directory of Open Access Journals (Sweden)

    Francisco Luciano Pontes Júnior

    2010-12-01

    Full Text Available The aim of the present review was to discuss the main influences of aerobic exercise on the pathophysiological mechanisms of systemic hypertension. Post-exercise hypotension (PEH) results from a persistent reduction in peripheral vascular resistance (PVR), mediated by the autonomic nervous system and by vasodilator substances. The decrease in blood pressure with chronic training occurs through reductions in PVR and in resting cardiac output, by means of reduced sympathetic neural activity and increased baroreflex sensitivity. In addition, chronic exercise can reduce catecholamine concentrations, improve the metabolic profile, affect the functional activity of the vascular endothelium and promote positive changes in body composition. Thus, the inclusion of aerobic physical exercise is strongly recommended as a non-pharmacological strategy for the treatment of hypertension, not only because of its beneficial effect on blood pressure but also because it reduces cardiovascular risk factors.

  9. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks are investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., Rainwater harvesting—RWH and Grey water—GW and including "external" gardening demands are investigated. This includes the impacts (in isolation and combination of the following: occupancy rates (1 to 4; roof size (12.5 m2 to 100 m2; garden size (25 m2 to 100 m2 and geographical location (North West, Midlands and South East, UK with yearly temporal effects (i.e., rainfall and temperature. Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
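
    A band-rating scheme of the kind proposed above maps per-capita daily water use to a benchmark band. The sketch below illustrates the idea only; the band boundaries are hypothetical and are not the thresholds used in the paper or in the Code for Sustainable Homes.

```python
# Sketch of a band-rating scheme of the kind proposed above: per-capita daily
# water use is mapped to a benchmark band. The band boundaries are
# hypothetical assumptions, not values from the paper or the CSH.

BANDS = [           # (upper limit in litres/person/day, band label)
    (80, "A"),
    (105, "B"),
    (120, "C"),
    (150, "D"),
]

def water_band(litres_per_person_per_day: float) -> str:
    for limit, band in BANDS:
        if litres_per_person_per_day <= limit:
            return band
    return "E"      # worst band for anything above the last threshold

daily_use_litres = 440.0    # metered household total for one day (invented)
occupants = 3
print(water_band(daily_use_litres / occupants))   # ~147 l/p/d -> "D"
```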

  10. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  11. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost efficiency estimates for each TSO, as well as dynamic results in terms of technological improvement rate and efficiency catch-up speed. In this paper, we provide the methodology for the benchmarking, using non-parametric DEA under weight restrictions, as well as an analysis of the static cost efficiency...

  12. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  13. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  14. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...... organizational relations, behaviors and actions. In closing it is briefly considered how to study the calculative practices of benchmarking....

  15. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    solutions to the problem have been proposed so far including, for instance, evolutionary techniques, swarm intelligence or ad hoc solutions. However, the large diversity of the solutions and the lack of a common benchmark, made any comparative analysis of the different solutions extremely difficult...

  16. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  17. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate or over pronounce model errors in an uncontrollable way. Therefore, knowledge about the numerical errors dependence on discretization parameters (e.g. size of spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors has proposed to ICD /CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report has been prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors, and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report

  18. FinPar: A Parallel Financial Benchmark

    DEFF Research Database (Denmark)

    Andreetta, Christian; Begot, Vivien; Berthold, Jost;

    2016-01-01

    sensitive to the input dataset and therefore requires multiple code versions that are optimized differently, which also raises maintainability problems. This article presents three array-based applications from the financial domain that are suitable for gpgpu execution. Common benchmark-design practice has...

  19. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
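
    The system-level metrics named above can be computed from routinely available facility data. The sketch below shows the usual definitions of air change rate (supply airflow times 60 divided by room volume) and air handling W/cfm (fan power divided by airflow); the room and fan figures are invented.

```python
# Sketch of the system-level metrics mentioned above (air change rate and
# fan power per unit airflow). Room and fan figures are invented examples.

room_volume_ft3 = 20_000.0
supply_airflow_cfm = 30_000.0
fan_power_w = 22_000.0
filter_pressure_drop_in_wg = 0.45   # inches of water gauge, example reading

air_changes_per_hour = supply_airflow_cfm * 60.0 / room_volume_ft3
watts_per_cfm = fan_power_w / supply_airflow_cfm

print(f"Air change rate:    {air_changes_per_hour:.0f} ACH")
print(f"Air handling power: {watts_per_cfm:.2f} W/cfm")
print(f"Filter dP:          {filter_pressure_drop_in_wg:.2f} in. w.g.")
```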

  20. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  1. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As the result relevant parameters describing the performance and costs of radiation protection were acquired and compiled and subsequently depicted in figures in order to make these data comparable. (orig.)

  2. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  3. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  4. Effects of running training at different intensities on the aerobic capacity and lactate production by the muscle of Wistar rats

    OpenAIRE

    Michel Barbosa de Araújo; Fúlvia de Barros Manchado-Gobatto; Fabrício Azevedo Voltarelli; Carla Ribeiro; Clécia Soares de Alencar Mota; Claudio Alexandre Gobatto; Maria Alice Rostom de Mello

    2009-01-01

    Studies associating indicators of aerobic capacity with the substrates produced by muscle metabolism in rats are rare. Thus, the objective of the present study was to verify the effect of running training at two different intensities on the aerobic capacity and on lactate production by the isolated soleus muscle of rats. Wistar rats (90 days old) had their aerobic-anaerobic metabolic transition determined by the maximal lactate steady state (MLSS) test. Then, the rats...

  5. Benchmarking Declarative Approximate Selection Predicates

    CERN Document Server

    Hassanzadeh, Oktie

    2009-01-01

    Declarative data quality has been an active research topic. The fundamental principle behind a declarative approach to data quality is the use of declarative statements to realize data quality primitives on top of any relational data source. A primary advantage of such an approach is the ease of use and integration with existing applications. Several similarity predicates have been proposed in the past for common quality primitives (approximate selections, joins, etc.) and have been fully expressed using declarative SQL statements. In this thesis, new similarity predicates are proposed along with their declarative realization, based on notions of probabilistic information retrieval. Then, full declarative specifications of previously proposed similarity predicates in the literature are presented, grouped into classes according to their primary characteristics. Finally, a thorough performance and accuracy study comparing a large number of similarity predicates for data cleaning operations is performed.
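
    As an illustration of what an approximate-selection predicate does, a simple token-based Jaccard predicate is sketched below in plain Python; this is not the declarative SQL realization used in the thesis, only a minimal stand-in for the same primitive.

```python
# Illustration of an approximate-selection predicate of the kind discussed
# above: keep the records whose similarity to the query string exceeds a
# threshold. Plain-Python Jaccard similarity over whitespace tokens.

def tokens(s: str) -> set:
    return set(s.lower().split())

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

records = ["ACME Corp. Ltd", "Acme Corporation", "Globex Corporation"]
query = "acme corp ltd"
selected = [r for r in records if jaccard(r, query) >= 0.5]
print(selected)   # -> ['ACME Corp. Ltd']
```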

  6. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)
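
    A comparison of the kind the web tool above supports can be sketched as annual electricity use per pupil set against a benchmark adjusted for after-school activities. The benchmark value and the adjustment factor below are hypothetical assumptions, not figures from the project.

```python
# Sketch of the kind of comparison the web tool described above supports:
# annual electricity use per pupil compared against a benchmark. The benchmark
# value and the after-school adjustment are hypothetical assumptions.

annual_kwh = 95_000
pupils = 420
has_after_school_activities = True

benchmark_kwh_per_pupil = 180.0
if has_after_school_activities:
    benchmark_kwh_per_pupil *= 1.15    # hypothetical allowance for extra hours

actual = annual_kwh / pupils
print(f"Actual:    {actual:.0f} kWh/pupil")
print(f"Benchmark: {benchmark_kwh_per_pupil:.0f} kWh/pupil")
print("Above benchmark" if actual > benchmark_kwh_per_pupil else "Within benchmark")
```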

  7. Influence of aerobic and anaerobic training on body fat mass in obese adolescents

    Directory of Open Access Journals (Sweden)

    Ana Cláudia Fernandez

    2004-06-01

    Full Text Available The objective of this study was to verify the influence of aerobic and anaerobic exercise on the body composition of obese male adolescents. The sample consisted of 28 adolescents aged 15 to 19 years with severe obesity. The volunteers were randomly assigned to three groups: group I, anaerobic exercise; group II, aerobic exercise; and group III, control. Group I performed interval training on a cycle ergometer consisting of 12 sprints of 30 s at maximal force and speed, pedalling against a high load (0.8% of body mass x 25 watts) with 3 min of active recovery; group II performed aerobic training on a cycle ergometer, pedalling for 50 minutes at the load corresponding to the ventilatory threshold. The third group served as a control, without physical activity. All groups received nutritional guidance, and the intervention period was 12 weeks (three months). The volunteers underwent bone densitometry with body composition analysis (DEXA) and medical and physical fitness evaluations. When the initial and final periods of the intervention were compared, reductions were observed in body mass, BMI, total and lower-limb fat mass, and trunk fat percentage in the exercise groups. Differences between groups I and III were observed for the percentage deltas of total and lower-limb fat mass and of lower-limb fat percentage. The data suggest that physical exercise, whether aerobic or anaerobic, combined with nutritional guidance promotes greater weight reduction than nutritional guidance alone and that, in this study, anaerobic exercise was more efficient in reducing body fat and fat percentage, whereas aerobic exercise was more effective in preserving and/or increasing lean mass and the

  8. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  9. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and the possibilities of applying benchmarking in the telecommunication sphere. It examines the essence of benchmarking by generalising the approaches of different scholars to defining this notion. With a view to improving the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in the modern market economy, the mechanism of benchmarking, and the component stages of carrying out benchmarking at a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and the tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of applying benchmarking, the article identifies the main types of benchmarking of telecommunication operators according to the following features: by the level at which it is conducted (branch, inter-branch and international benchmarking); by the relation to participation in its conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  10. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose.

  11. Pre Managed Earnings Benchmarks and Earnings Management of Australian Firms

    Directory of Open Access Journals (Sweden)

    Subhrendu Rath

    2012-03-01

    Full Text Available This study investigates benchmark-beating behaviour and the circumstances under which managers inflate earnings to beat earnings benchmarks. We show that two benchmarks, positive earnings and positive earnings change, are associated with earnings manipulation. Using a sample of Australian firms from 2000 to 2006, we find that when the underlying earnings are negative or below the prior year's earnings, firms are more likely to use discretionary accruals to inflate earnings to beat benchmarks.
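
    The benchmark-beating logic described above can be sketched as follows: a firm is flagged when its pre-managed earnings (reported earnings minus discretionary accruals) fall below a benchmark that its reported earnings meet or beat. The firms and figures below are invented, and the estimation of discretionary accruals itself is not shown.

```python
# Sketch of the benchmark-beating flag described above. A firm is suspect when
# pre-managed earnings (reported minus discretionary accruals) miss a benchmark
# that reported earnings meet or beat. Firms and figures are invented.

firms = [
    {"name": "Firm 1", "reported": 0.03, "disc_accruals": 0.05, "prior_year": 0.01},
    {"name": "Firm 2", "reported": 0.08, "disc_accruals": -0.01, "prior_year": 0.05},
]

for f in firms:
    pre_managed = f["reported"] - f["disc_accruals"]
    beats_zero = pre_managed < 0 <= f["reported"]                       # positive-earnings benchmark
    beats_last_year = pre_managed < f["prior_year"] <= f["reported"]    # earnings-change benchmark
    if beats_zero or beats_last_year:
        print(f"{f['name']}: suspected benchmark beating via discretionary accruals")
```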

  12. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGO’s. Using different me...

  13. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the of...

  14. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting, along with models for testing the performance of Modelica compilers on large systems of equations, are given. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.

  15. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  16. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.203 Section 1952.203... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  7. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. Benchmarking the True Random Number Generator of TPM Chips

    CERN Document Server

    Suciu, Alin

    2010-01-01

A TPM (trusted platform module) is a chip present mostly on newer motherboards, and its primary function is to create, store and work with cryptographic keys. This dedicated chip can serve to authenticate other devices or to protect encryption keys used by various software applications. Among other features, it comes with a True Random Number Generator (TRNG) that can be used for cryptographic purposes. This random number generator consists of a state machine that mixes unpredictable data with the output of a one-way hash function. According to the specification it can be a good source of unpredictable random numbers even without having to require a genuine source of hardware entropy. However, the specification recommends collecting entropy from any internal sources available, such as clock jitter or thermal noise in the chip itself, a feature that was implemented by most manufacturers. This paper will benchmark the random number generator of several TPM chips from two perspectives: the quality of the random bit s...
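
As a minimal illustration of the kind of statistical check such a benchmark applies, the sketch below runs a monobit (frequency) test and a byte-level Shannon entropy estimate over a dump of TPM random bytes. The file name is hypothetical, and real evaluations use full suites such as NIST SP 800-22 or dieharder rather than these two quick checks.

```python
import math
from collections import Counter

def monobit_statistic(data: bytes) -> float:
    """Normalized excess of one-bits; close to 0 for good random data."""
    ones = sum(bin(b).count("1") for b in data)
    n_bits = 8 * len(data)
    return (2 * ones - n_bits) / math.sqrt(n_bits)

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; close to 8.0 for good random data."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    # Hypothetical dump of raw bytes previously read from the TPM TRNG
    with open("tpm_random.bin", "rb") as f:
        data = f.read()
    print("monobit statistic:", monobit_statistic(data))
    print("entropy (bits/byte):", byte_entropy(data))
```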

  9. Paired cost comparison, a benchmarking technique for identifying areas of cost improvement in environmental restoration projects and waste management activities

    International Nuclear Information System (INIS)

    This paper provides an overview of benchmarking and how the Department of Energy's Office of Environmental Restoration and Waste Management used benchmarking techniques, specifically the Paired Cost Comparison, to identify cost disparities and their causes. The paper includes a discussion of the project categories selected for comparison and the criteria used to select the projects. Results are presented and factors that contribute to cost differences are discussed. Also, conclusions and the application of the Paired Cost Comparison are presented

  10. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  11. Adaptive Cruise Control for a SMART Car: A Comparison Benchmark for MPC-PWA Control Methods

    NARCIS (Netherlands)

    Corona, D.; De Schutter, B.

    2008-01-01

The design of an adaptive cruise controller for a SMART car, a type of small car, is proposed as a benchmark setup for several model predictive control methods for nonlinear and piecewise affine systems. Each of these methods has already been applied to specific case studies, different from method t

  12. Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets

    CERN Document Server

    David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N

Benchmark calculations on residue production were performed using MCNPX 2.5.0. The calculations were compared to mass-distribution data for five different elements measured at ISOLDE, and to specific activities of 28 radionuclides at different positions along the thick target measured in Dubna.

  13. Método de Cuckow para la calibración de aerómetros : diseño y puesta a punto del equipo

    Directory of Open Access Journals (Sweden)

    Joselaine Cáceres

    2011-05-01

Full Text Available The method used at the Laboratorio Tecnológico del Uruguay (LATU) for hydrometer calibration is the "Cuckow method for hydrometer calibration using water with an added surfactant as the reference fluid". In this method a surfactant is used to lower the surface tension of the distilled water used as the reference fluid, and it is verified that the resulting change in water density, caused by the added surfactant, contributes negligibly to the uncertainty of the method and can be accommodated within the uncertainty accepted for the density of distilled water. A strategy was devised to reduce the calibration uncertainty of the method: a vessel was designed with a thermostating system based on water recirculation through a jacket, which provides thermal stability to the system. Mechanical stirring was used to aid thermostating and to avoid density gradients associated with temperature inhomogeneity. Improvements were also made to the reading-adjustment and meniscus-formation system. After these improvements the method was validated again, showing that the proposed goal of reducing the calibration uncertainties was reached and that the method can now be applied to the calibration of precision hydrometers. This article details the considerations involved in the design and commissioning of the improved equipment, mainly the vessel and the thermostating system. Construction of the vessel from commercially available glass tubes is proposed, thereby lowering its cost.

  14. Aspectos relacionados com a otimização do treinamento aeróbio para o alto rendimento Related aspects of aerobic training optimization for high performance

    Directory of Open Access Journals (Sweden)

    Mariana Fernandes Mendes de Oliveira

    2010-02-01

Full Text Available The objective of this work was to present recommendations for optimizing aerobic training based on knowledge of functional fitness indexes and their physiological mechanisms. In highly trained athletes, precision in the design of training may be the safest route to improved performance, since in these individuals the training load commonly oscillates between an insufficient stimulus and the onset of overtraining. There is therefore a very large variety of factors that must be considered when designing a training program. Understanding the mechanisms of fatigue and the physiological responses associated with different exercise durations and intensities is essential for correctly planning training sessions. In addition, high-intensity interval training is indispensable for improving performance in highly trained athletes, although it should be performed after a reasonable recovery period from previous training sessions. Close contact between athlete and coach is thus important for careful planning of recovery periods before excessive fatigue occurs. The coach should keep a record of training loads and recoveries, learning from experience which types of load can be tolerated individually. Among the factors that can affect aerobic performance, planning an appropriate warm-up and accounting for adverse environmental conditions are very important aspects. After gathering all this information, it is possible to establish the bases of training (frequency, volume, intensity and recovery) aiming at continuous improvement of aerobic performance.

  15. Review of the GMD Benchmark Event in TPL-007-1

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-21

Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event. Specifically: (1) estimating the 100-year exceedance geo-electric field magnitude; (2) the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and (3) the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.

  16. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental
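
For a feel of the time scale involved, the period estimate T ~ D/U quoted above can be evaluated directly; the pipe diameter and velocity below are illustrative values, not those of the benchmark experiment.

```python
# Order-of-magnitude estimate of the thermal-striping oscillation period T ~ D/U
D = 0.05   # pipe diameter in metres (illustrative)
U = 2.0    # characteristic flow velocity in m/s (illustrative)
T = D / U
print(f"oscillation period ~ {T:.3f} s, frequency ~ {1.0 / T:.1f} Hz")
```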

  17. Measurements and ALE3D Simulations for Violence in a Scaled Thermal Explosion Experiment with LX-10 and AerMet 100 Steel

    Energy Technology Data Exchange (ETDEWEB)

    McClelland, M A; Maienschein, J L; Yoh, J J; deHaven, M R; Strand, O T

    2005-06-03

    We completed a Scaled Thermal Explosion Experiment (STEX) and performed ALE3D simulations for the HMX-based explosive, LX-10, confined in an AerMet 100 (iron-cobalt-nickel alloy) vessel. The explosive was heated at 1 C/h until cookoff at 182 C using a controlled temperature profile. During the explosion, the expansion of the tube and fragment velocities were measured with strain gauges, Photonic-Doppler-Velocimeters (PDVs), and micropower radar units. These results were combined to produce a single curve describing 15 cm of tube wall motion. A majority of the metal fragments were captured and cataloged. A fragment size distribution was constructed, and a typical fragment had a length scale of 2 cm. Based on these results, the explosion was considered to be a violent deflagration. ALE3D models for chemical, thermal, and mechanical behavior were developed for the heating and explosive processes. A four-step chemical kinetics model is employed for the HMX while a one-step model is used for the Viton. A pressure-dependent deflagration model is employed during the expansion. The mechanical behavior of the solid constituents is represented by a Steinberg-Guinan model while polynomial and gamma-law expressions are used for the equation of state of the solid and gas species, respectively. A gamma-law model is employed for the air in gaps, and a mixed material model is used for the interface between air and explosive. A Johnson-Cook model with an empirical rule for failure strain is used to describe fracture behavior. Parameters for the kinetics model were specified using measurements of the One-Dimensional-Time-to-Explosion (ODTX), while measurements for burn rate were employed to determine parameters in the burn front model. The ALE3D models provide good predictions for the thermal behavior and time to explosion, but the predicted wall expansion curve is higher than the measured curve. Possible contributions to this discrepancy include inaccuracies in the chemical models

  18. Treinamento Físico Aeróbico como Tratamento não Farmacológico da Síncope Neurocardiogênica

    Directory of Open Access Journals (Sweden)

    Vanessa Cristina Miranda Takahagi

    2014-03-01

Full Text Available Background: Characterized by a sudden, transient loss of consciousness and postural tone with rapid, spontaneous recovery, syncope is caused by an acute reduction of systemic blood pressure and, consequently, of cerebral blood flow. The unsatisfactory results obtained with drug therapy have allowed non-pharmacological treatment of neurocardiogenic syncope to be considered the first therapeutic option. Objectives: To compare, in patients with neurocardiogenic syncope, the impact of moderate-intensity aerobic physical training and of a control intervention on the positivity of the tilt-table test and on orthostatic tolerance time. Methods: Twenty-one patients with a history of recurrent neurocardiogenic syncope and a positive tilt-table test were studied. They were randomized into a trained group (TG, n = 11) and a control group (CG, n = 10). The TG underwent 12 weeks of supervised aerobic training on a cycle ergometer, and the CG a control procedure consisting of 15 minutes of stretching and 15 minutes of light walking. Results: The TG showed a positive response to physical training, with a significant increase in peak oxygen consumption, whereas the CG showed no statistically significant change before and after the intervention. After the intervention period, 72.7% of the TG sample had a negative tilt-table test, with no syncope on re-evaluation. Conclusion: The 12-week supervised aerobic training program was able to reduce the number of positive tilt-table tests and to increase orthostatic tolerance time during the test after the intervention period.

  19. Seleção de substratos padrões para ensaios respirométricos aeróbios com biomassa de sistemas de lodo ativado

    Directory of Open Access Journals (Sweden)

    Heraldo Antunes Silva Filho

    2015-03-01

Full Text Available This study investigated the influence of different substrates on the determination of the specific oxygen uptake rate of biomass containing a mixed heterotrophic and autotrophic nitrifying culture, with the aim of identifying the most suitable substrate for aerobic respirometric assays. Different biomasses derived from four variants of activated sludge systems were used. The heterotrophic and autotrophic nitrifying groups were evaluated with respect to their rate of consumption of the tested substrates, using the semi-continuous open aerobic respirometry technique with distinct pulses described in Van Haandel and Catunda (1982). An automatic respirometer coupled to a computer was used in all respirometric tests. To determine the consumption rate of the heterotrophic organisms, the carbon-source substrates selected were sodium acetate (C2H3NaO2), ethyl acetate (C4H8O2), ethanol (C2H6O), glucose (C6H12O6) and phenol (C6H6O). For the autotrophic nitrifying group, ammonium bicarbonate (NH4HCO3), ammonium chloride (NH4Cl) and sodium nitrite (NaNO2) were used. The results for the heterotrophic group indicated a significant difference in the metabolic rate of these organisms on the substrates evaluated, with the highest oxygen uptake rates for sodium acetate, while for the nitrifying group ammonium bicarbonate proved most suitable. Comparing all the systems studied, the same tendency of greater biodegradability of, or affinity for, the substrates sodium acetate and ammonium bicarbonate was observed.

  20. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
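
A rough numpy rendition of the STREAM triad kernel is shown below; it reports effective memory bandwidth for one process. It is only a sketch of the measurement idea: the NUMA placement that the modified benchmark studies would be controlled externally (for example by pinning the process and its memory with numactl), which is not shown here.

```python
import time
import numpy as np

N = 50_000_000                 # array length, large enough to defeat caches
scalar = 3.0
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

best = float("inf")
for _ in range(5):             # report the best of several trials
    t0 = time.perf_counter()
    a[:] = b + scalar * c      # STREAM triad kernel
    best = min(best, time.perf_counter() - t0)

# Nominal STREAM triad traffic: read b, read c, write a (8-byte doubles).
# numpy's intermediate temporary adds some extra traffic not counted here.
bytes_moved = 3 * 8 * N
print(f"effective triad bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")
```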

  1. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure performance, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  2. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  3. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

Existing nonlinear model reduction methods are unsuitable for the nonlinear benchmark buildings because their vibration equations form a non-affine system. Moreover, controllers designed directly with nonlinear control strategies are of high order and difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea is based on model identification and structural model linearization, with the control force applied to the identified model according to the force action principle. The approach is more practical because the identified model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results show that the proposed method is viable for civil engineering structures.
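
As a rough illustration of the Gramian-based balanced-reduction step mentioned above, the sketch below computes Hankel singular values for a small linear state-space model and uses them to pick a reduced model order. The system matrices are invented for the example, and analytical Lyapunov Gramians stand in for the empirical Gramians used in the paper.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Hypothetical stable linear model (two lightly damped modes), standing in for
# the identified and linearized structural model; not taken from the paper.
A = np.array([[0.0,  1.0,  0.0,  0.0],
              [-1.0, -0.2,  0.0,  0.0],
              [0.0,  0.0,  0.0,  1.0],
              [0.0,  0.0, -4.0, -0.4]])
B = np.array([[0.0], [1.0], [0.0], [0.1]])
C = np.array([[1.0, 0.0, 0.1, 0.0]])

# Controllability and observability Gramians from the Lyapunov equations
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root step: Hankel singular values rank the states by their joint
# controllability/observability contribution to the input-output behaviour
L = cholesky(Wc, lower=True)          # Wc = L @ L.T
hsv = np.sqrt(svd(L.T @ Wo @ L, compute_uv=False))
print("Hankel singular values:", np.round(hsv, 4))

# Truncate states that contribute little
keep = int(np.sum(hsv > 1e-2 * hsv.max()))
print("suggested reduced model order:", keep)
```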

  4. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  5. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
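
The pairing of regression-smoothed metrics with dynamic time warping can be illustrated with a toy example: two simulators report the same fuel cycle metric on different time grids, a Gaussian process fit smooths each series, and a small DTW routine turns the two smoothed curves into a single distance figure-of-merit. The data, kernel settings and grids below are invented for the illustration and are not from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping distance."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def smooth(t, y, t_eval):
    """Fit a GP to (t, y) and return the posterior mean on t_eval."""
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1e-2),
                                   normalize_y=True)
    gpr.fit(t.reshape(-1, 1), y)
    return gpr.predict(t_eval.reshape(-1, 1))

# Invented metric (e.g. cumulative spent fuel mass) from two simulators
rng = np.random.default_rng(1)
t1 = np.linspace(0, 100, 40)
y1 = 10 + 0.50 * t1 + rng.normal(0, 1, 40)
t2 = np.linspace(0, 100, 25)
y2 = 9 + 0.52 * t2 + rng.normal(0, 1, 25)

# Each simulator keeps its own evaluation grid; DTW needs no shared time grid
fom = dtw_distance(smooth(t1, y1, np.linspace(0, 100, 60)),
                   smooth(t2, y2, np.linspace(0, 100, 45)))
print("DTW distance figure-of-merit:", round(float(fom), 2))
```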

  6. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  7. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compact in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 due to the wide range of system variables. (author)

  8. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  9. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

Full Text Available The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, to full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  10. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files
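
A hedged sketch of how such bias (correction) factors are typically formed and applied is shown below: the ratio of a detailed-model result to a simplified-model result for each integral parameter corrects later simplified-model calculations. All numbers are placeholders, not values from the U9 benchmark.

```python
# Placeholder results for a few integral parameters (not real U9 values)
detailed   = {"keff": 1.0002, "f28/f25": 0.0243, "c28/f25": 0.138}
simplified = {"keff": 0.9965, "f28/f25": 0.0249, "c28/f25": 0.141}

# Bias (correction factor) = detailed / simplified for each parameter
bias = {k: detailed[k] / simplified[k] for k in detailed}

# Apply the correction to a new calculation done with the simplified model
new_simplified_keff = 0.9978
corrected_keff = new_simplified_keff * bias["keff"]
print({k: round(v, 5) for k, v in bias.items()})
print("corrected keff:", round(corrected_keff, 5))
```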

  11. OCB: A Generic Benchmark to Evaluate the Performances of Object-Oriented Database Systems

    CERN Document Server

    Darmont, Jérôme; Schneider, Michel

    1998-01-01

We present in this paper a generic object-oriented benchmark (the Object Clustering Benchmark) that has been designed to evaluate the performance of clustering policies in object-oriented databases. OCB is generic because its sample database may be customized to fit the databases introduced by the main existing benchmarks (e.g., OO1). OCB's current form is clustering-oriented because of its clustering-oriented workload, but it can be easily adapted to other purposes. Lastly, OCB's code is compact and easily portable. OCB has been implemented in a real system (Texas, running on a Sun workstation), in order to test a specific clustering policy called DSTC. A few results concerning this test are presented.

  12. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

... and the consultancy house's data stays confidential, the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much ... state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.

  13. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  14. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  15. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

Face detection is one of the most studied topics in the computer vision community. Much of this progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  16. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

Since nature tourism has become a booming business in modern society, more and more tourists choose nature-based destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant as a means of reinforcing its competitiveness. The topic of this thesis is both to identify, via benchmarking, the successful marketing strategies used by Zhangjiajie national park and to provide suggestions for Repovesi national park. The method used in t...

  17. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

Full Text Available Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  18. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  19. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  20. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  1. Efeitos de uma única sessão de atividade motora na atenção visual de pessoas idosas: comparação entre atividade aeróbica e neuromotora

    OpenAIRE

    Canelas, Dora Cristina Calção

    2014-01-01

Effects of a single exercise session on visual attention in older adults: comparison between aerobic and neuromotor exercise. Abstract. Objective: The main objective of this study was to evaluate the acute effects of a single session of aerobic exercise and a single session of neuromotor exercise on the visual attention of older adults. Methods: Participants were 87 individuals of both sexes, aged over 55 years (65.65 ± 6.64 years), living in the district of Évora, in...

  2. Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks

    Science.gov (United States)

    Saini, Subhash

    1997-01-01

Compilers supporting High Performance Fortran (HPF) features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI) combinations will be compared, based on the latest NAS Parallel Benchmark results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu CAPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks.

  3. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  4. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  5. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b{sub eff}) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
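
To make the locality contrast concrete, the snippet below mimics the RandomAccess (GUPS) kernel, whose scattered XOR updates sit at the opposite end of the locality space from STREAM's contiguous triad; the table size and update count are scaled down for illustration and are not the official benchmark parameters.

```python
import time
import numpy as np

table_size = 1 << 22                      # 4 Mi 64-bit words (scaled down)
table = np.arange(table_size, dtype=np.uint64)
n_updates = 1 << 20

rng = np.random.default_rng(0)
idx = rng.integers(0, table_size, size=n_updates)
vals = rng.integers(0, 2**63, size=n_updates, dtype=np.uint64)

t0 = time.perf_counter()
np.bitwise_xor.at(table, idx, vals)       # scattered read-modify-write updates
dt = time.perf_counter() - t0
print(f"~{n_updates / dt / 1e6:.2f} million updates per second")
```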

  6. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.

  7. List of unclassified documents by the staff of Metallurgy Division, AERE Harwell from January 1979 to July 1980

    International Nuclear Information System (INIS)

    This list constitutes unclassified material published or presented between January 1979 and July 1980, by the staff of Metallurgy Division. It covers reports, memoranda, articles in periodicals, conference papers, books and patent specifications. (author)

  8. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

Full Text Available Benchmarking microprocessors provides a way for consumers to evaluate the performance of the processors. This is done by using either synthetic or real world applications. There are a number of benchmarks that exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. In this paper an investigation of the various benchmarks available for evaluating microprocessors for embedded automotive applications will be performed. We will provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench and MiBench. A comparison of existing benchmarks will be given based on relevant characteristics of automotive applications, which will give the proper recommendation when benchmarking processors for automotive applications.

  9. BDGS: A Scalable Big Data Generator Suite in Big Data Benchmarking

    OpenAIRE

    Ming, Zijian; Luo, Chunjie; Gao, Wanling; Han, Rui; Yang, Qiang; Wang, Lei; Zhan, Jianfeng

    2014-01-01

    Data generation is a key issue in big data benchmarking that aims to generate application-specific data sets to meet the 4V requirements of big data. Specifically, big data generators need to generate scalable data (Volume) of different types (Variety) under controllable generation rates (Velocity) while keeping the important characteristics of raw data (Veracity). This gives rise to various new challenges about how we design generators efficiently and successfully. To date, most existing tec...

  10. Desempenho de reator anaeróbio-aeróbio de leito fixo no tratamento de esgoto sanitário Performance of anaerobic-aerobic packed-bed reactor in the treatment of domestic sewage

    Directory of Open Access Journals (Sweden)

    Sérgio Brasil Abreu

    2008-06-01

Full Text Available This paper reports on the performance evaluation of an upflow anaerobic-aerobic reactor, filled with polyurethane matrices, for domestic sewage treatment. Initially, different hydraulic retention times (HRT) were assayed with the reactor operating exclusively under anaerobic conditions. Afterwards, the combined anaerobic-aerobic reactor was operated. The anaerobic operation with an HRT of 10 h provided the best organic matter removal, with COD reduced from 389 ± 70 mg/L to 137 ± 16 mg/L. Under the anaerobic-aerobic condition, the COD dropped from 259 ± 69 mg/L to 93 ± 31 mg/L with an HRT of 12 h (6 h in the anaerobic and 6 h in the aerobic stage). Finally, comparing all the obtained results, it was possible to verify the importance of the aerobic post-treatment in the removal of the fraction of organic matter not removed in an exclusively anaerobic treatment.

  11. Elaboración de cartas aeronáuticas OACI: planos de obstáculos de aeródromo, a partir de imágenes aéreas digitales de pequeño formato

    Directory of Open Access Journals (Sweden)

    Jorge Prado Molina

    2012-01-01

Full Text Available Aeronautical charts and aerodrome obstacle charts provide information on the obstructions around an airport so that air traffic controllers and pilots can comply with the procedures and limitations governing their use. Aviation safety demands the production of up-to-date, accurate aeronautical charts that adopt the standards of the International Civil Aviation Organization (ICAO). This article describes the methodology used to generate the obstacle charts for five aerodromes in Mexico from digital aerial images acquired with small-format cameras. Through two aerial surveys of each airport, at 10,000 and 5,000 feet above ground level, two orthomosaics were generated covering the area of influence of the air terminal, the aerodrome, and the approach, take-off climb, transitional, inner horizontal and conical surfaces. Most of the obstacles were identified by photo-interpretation of stereo pairs, and the orthomosaics were used to generate the aerodrome chart and the Type A, B and C aerodrome obstacle charts. Geodetic satellite receivers were used to obtain 18 points on each runway in order to determine its dimensions and to establish ground control points for building the orthomosaics. Detailed field work verified the location and height of the obstacles and, finally, after several review procedures by the aeronautical authorities, the generation of the ICAO charts was completed by integrating all the airport maps into a geographic information system (GIS).

  12. Nuclear knowledge management experience of the international criticality safety benchmark evaluation project

    International Nuclear Information System (INIS)

accuracy and completeness of the descriptive information given in the evaluation by comparison with original documentation (published and unpublished). 2. The benchmark specification can be derived from the descriptive information given in the evaluation. 3. The completeness of the benchmark specification. 4. The results and conclusions. 5. Adherence to format. In addition, each evaluation undergoes an independent peer review by another working group member at a different facility. Starting with the evaluator's submittal in the appropriate format, the independent peer reviewer verifies: 1. That the benchmark specification can be derived from the descriptive information given in the evaluation. 2. The completeness of the benchmark specification. 3. The results and conclusions. 4. Adherence to format. A third review by the Working Group verifies that the benchmark specification and the conclusions are adequately supported. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Over 250 scientists from around the world have combined their efforts to produce this handbook, which currently spans over 30,000 pages and contains benchmark specifications for over 3350 critical configurations. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculation techniques and is expected to be a valuable tool for decades to come. The handbook is currently in use in 58 different countries. (author)

  13. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
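
The Monod-type rate laws mentioned above have the general form r = k_max * X * S / (K_s + S) for each terminal electron acceptor, often with an additional Monod term for the electron donor. The snippet below evaluates such a dual-Monod rate for U(VI) reduction with made-up parameter values, purely to show the functional form rather than the benchmark's actual parameterization.

```python
def dual_monod_rate(k_max, biomass, donor, K_donor, acceptor, K_acceptor):
    """Dual-Monod rate: limited by both the electron donor and the acceptor."""
    return (k_max * biomass
            * donor / (K_donor + donor)
            * acceptor / (K_acceptor + acceptor))

# Made-up values: acetate as electron donor, U(VI) as terminal electron acceptor
rate = dual_monod_rate(
    k_max=5.0e-5,       # mol per g cells per s (illustrative)
    biomass=0.01,       # g cells / L
    donor=1.0e-3,       # acetate concentration, mol/L
    K_donor=1.0e-4,     # half-saturation constant for acetate, mol/L
    acceptor=5.0e-6,    # U(VI) concentration, mol/L
    K_acceptor=1.0e-6,  # half-saturation constant for U(VI), mol/L
)
print(f"U(VI) reduction rate ~ {rate:.3e} mol/(L*s)")
```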

  14. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  15. List of unclassified documents by the staff of Metallurgy Division, AERE Harwell from January 1972 to July 1977

    International Nuclear Information System (INIS)

    This list constitutes unclassified material published or presented between January 1972 and July 1977 by the staff of Metallurgy Division. It covers reports, memoranda, bibliographies, articles in periodicals, conference papers, books, theses and patent specifications. It is planned to issue a list annually. The publications are listed under the following titles of the research teams concerned: fast reactor fuels, advanced reactor systems, fracture studies, structural materials, radiation effects, composite materials, high voltage microscope and metals fabrication, management and administration. (U.K.)

  16. Cálcio e o desenvolvimento de aerênquimas e atividade de celulase em plântulas de milho submetidas a hipoxia Calcium and the development of aerenchyma and celulase activity in corn seedlings subjected to hypoxia

    Directory of Open Access Journals (Sweden)

    Bárbara França Dantas

    2001-06-01

    Full Text Available Aerenchyma formation is known as one of the most important anatomical adaptations that plants undergo when subjected to oxygen deficiency. This tissue develops through the action of enzymes that degrade or loosen the cell wall. This work was carried out to verify aerenchyma development in corn seedlings cv. Saracura-BRS 4154 subjected to hypoxia, and to relate the development of this structure to cellulase activity. Four-day-old seedlings were subjected to hypoxia treatments by immersion in a flooding buffer, in the absence and in the presence of calcium. At 0, 1, 2, 3 and 4 days after the treatments were applied, anatomical sections were taken from the apical region of the coleoptiles and from the middle region of the roots to evaluate aerenchyma formation, and material was collected for cellulase enzyme assays. Cellulase activity was measured by a viscosimetric method. In the roots, aerenchyma formation increased soon after hypoxia began and reached 50% of the total cortex by the fourth day of hypoxia. The cortical area occupied by aerenchyma in the roots was on average seven times larger than in the coleoptiles, where intercellular spaces reached 15% of the cortex. Cellulase activity in coleoptiles and roots initially decreased as a result of the stress and then increased, following the aerenchyma results. In the presence of calcium, aerenchyma development was inhibited, although the enzymatic activity was induced.

  17. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine that makes it possible to calculate neutron sources, and the replacement of the fission yields and branching ratios in the data library with those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared with those obtained from the more sophisticated code CASMO. (author)

  18. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code, and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  19. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich;

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...

  20. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... of an undergraduate business school education. This paper presents case analysis of the research-oriented participatory education curriculum developed at Copenhagen Business School because it appears uniquely suited, by a curious mix of Danish education tradition and deliberate innovation, to offer an educational...

  1. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  2. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  3. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  4. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  5. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of an underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte Carlo methods, the successive orders of scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters both for reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al., among others. However, the case of elongated phase functions, such as those of clouds, combined with a high angular resolution is presented here for the first time. Also, in contrast to other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  6. Benchmark analysis of KRITZ-2 critical experiments

    International Nuclear Information System (INIS)

    In the KRITZ-2 critical experiments, criticality and pin power distributions were measured at room temperature and high temperature (about 245 degC) for three different cores (KRITZ-2:1, KRITZ-2:13, KRITZ-2:19) loading slightly enriched UO2 or MOX fuels. Recently, international benchmark problems were provided by ORNL and OECD/NEA based on the KRITZ-2 experimental data. The published experimental data for the system with slightly enriched fuels at high temperature are rare in the world and they are valuable for nuclear data testing. Thus, the benchmark analysis was carried out with a continuous-energy Monte Carlo code MVP and its four nuclear data libraries based on JENDL-3.2, JENDL-3.3, JEF-2.2 and ENDF/B-VI.8. As a result, fairly good agreements with the experimental data were obtained with any libraries for the pin power distributions. However, the JENDL-3.3 and ENDF/B-VI.8 give under-prediction of criticality and too negative isothermal temperature coefficients for slightly enriched UO2 cores, although the older nuclear data JENDL-3.2 and JEF-2.2 give rather good agreements with the experimental data. From the detailed study with an infinite unit cell model, it was found that the differences among the results with different libraries are mainly due to the different fission cross section of U-235 in the energy range below 1.0 eV. (author)

  7. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and, more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  8. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curriculum. This paper documents the benchmarking of education for managing innovation. Using an online business simulation in a business strategy class, Bachelor of Business Year 3 learners explored the following functional areas in a simulated environment: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  9. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  10. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  11. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run against the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  12. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is currently no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
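
    As a concrete illustration of the kind of operation being benchmarked, the sketch below loads two small tables of genomic regions into SQLite and reports overlapping pairs with a standard interval-overlap predicate. The table layout, the sample rows and the query are hypothetical and are not the RegMap algorithm itself.

```python
import sqlite3

# Minimal sketch of an overlap query between two sets of genomic regions.
# Schema and data are hypothetical; real benchmarks use millions of rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tf_sites (chrom TEXT, start INTEGER, stop INTEGER, name TEXT);
    CREATE TABLE histone  (chrom TEXT, start INTEGER, stop INTEGER, mark TEXT);
""")
con.executemany("INSERT INTO tf_sites VALUES (?,?,?,?)",
                [("chr1", 100, 200, "HNF4G"), ("chr1", 500, 650, "STAG1")])
con.executemany("INSERT INTO histone VALUES (?,?,?,?)",
                [("chr1", 150, 400, "H3K27ac"), ("chr2", 100, 300, "H3K4me1")])

# Two regions on the same chromosome overlap iff a.start < b.stop AND b.start < a.stop.
rows = con.execute("""
    SELECT a.name, b.mark, MAX(a.start, b.start) AS ov_start, MIN(a.stop, b.stop) AS ov_stop
    FROM tf_sites a JOIN histone b
      ON a.chrom = b.chrom AND a.start < b.stop AND b.start < a.stop
""").fetchall()
print(rows)  # [('HNF4G', 'H3K27ac', 150, 200)]
```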

  13. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    OpenAIRE

    Adriana-Mihaela IONESCU; Cristina Elena BIGIOI

    2016-01-01

    Launching a new business or project leads managers to make decisions and choose strategies that will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organiza...

  14. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks are made on a benchmark nonlinear constrained optimization problem. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
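
    The benchmark problem itself is not reproduced in this record, so the sketch below only illustrates the general approach named in the abstract: simulated annealing applied to a penalized constrained objective. The toy objective, constraint, penalty weight and cooling schedule are all assumptions chosen for demonstration.

```python
import math
import random

# Toy constrained problem (NOT the benchmark problem of the paper):
#   minimize f(x, y) = (x - 1)^2 + (y - 2)^2   subject to  x + y - 2 <= 0
def penalized(x, y, mu=100.0):
    f = (x - 1.0) ** 2 + (y - 2.0) ** 2
    g = x + y - 2.0
    return f + mu * max(0.0, g) ** 2      # quadratic exterior penalty

def anneal(steps=20000, T0=1.0):
    random.seed(0)
    x, y = 0.0, 0.0
    best = (penalized(x, y), x, y)
    for k in range(steps):
        T = T0 * (1.0 - k / steps) + 1e-6              # linear cooling
        xn, yn = x + random.gauss(0, 0.1), y + random.gauss(0, 0.1)
        dE = penalized(xn, yn) - penalized(x, y)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x, y = xn, yn
            best = min(best, (penalized(x, y), x, y))
    return best

# The constrained optimum of this toy problem is near (0.5, 1.5).
print(anneal())
```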

  15. Indian Management Education and Benchmarking Practices: A Conceptual Framework

    OpenAIRE

    Dr. Dharmendra MEHTA; Er. Sunayana SONI; Dr. Naveen K MEHTA; Dr. Rajesh K MEHTA

    2015-01-01

    Benchmarking can be defined as a process through which practices are analyzed to provide a standard measurement ('benchmark') of effective performance within an organization (such as a university/institute). Benchmarking is also used to compare performance with other organizations and other sectors. As management education is passing through challenging times, a modern management tool such as benchmarking is required to improve the quality of management education and to overcome the challen...

  16. The importance of an accurate benchmark choice: the spanish case

    OpenAIRE

    Ruiz Campo, Sofía; Monjas Barroso, Manuel

    2012-01-01

    The performance of a fund cannot be judged unless it is first measured, and measurement is not possible without an objective frame of reference. A benchmark serves as a reliable and consistent gauge of the multiple dimensions of performance: return, risk and correlation. The benchmark must be a fair target for investment managers and be representative of the relevant opportunity set. The objective of this paper is to analyse whether the different benchmarks generally used to me...

  17. BENCHMARKING FOR THE ROMANIAN HEAVY COMMERCIAL VEHICLES MARKET

    Directory of Open Access Journals (Sweden)

    Pop Nicolae Alexandru

    2014-07-01

    Full Text Available Globalization has led to a better integration of international markets for goods, services and capital, which has led to a significant increase in investment in regions with low labor costs and access to commercial routes. The development of international trade has required a continuous growth in the volume of transported goods and the development of a transport system able to withstand the new pressures of cost, time and space. The solution for efficient transport is intermodal transportation relying on state-of-the-art technological platforms, which integrates the advantages specific to each mode: flexibility for road, high capacity for rail, low cost for sea, and speed for air transportation. Romania's integration into the pan-European transport system, together with the EU's enlargement towards the east, will give Romania a central position. The integrated governmental program for improving the intermodal infrastructure will ensure fast railway, road and air connections. For the Danube harbors and the sea ports, EU grants and allowances will be used, thus increasing Romania's importance as one of Europe's logistical hubs. The present paper uses benchmarking, a management and strategic marketing tool, to evaluate the Romanian heavy commercial vehicles market within the European context. Benchmarking encourages change in a complex and dynamic context where a permanent solution cannot be found. The different results stimulate the use of benchmarking as a solution to reduce gaps. The MAN case study shows the dynamics of the players on the Romanian market for heavy commercial vehicles, considering the strong growth of Romanian exports but modest internal demand, a limited but developing road infrastructure, and an unfavorable international economic context, together with

  18. Efecto del ejercicio aeróbico y la estimulación ambiental sobre la reducción de los niveles de ansiedad en el envejecimiento Effect of aerobic exercise and environmental stimulation on the reduction of anxiety levels in aging

    Directory of Open Access Journals (Sweden)

    Patricia Sampedro Piquero

    2015-08-01

    Full Text Available Environmental enrichment (EAM) and aerobic exercise (EJ) are interventions capable of reducing anxiety during aging, but little is known about how they modulate the brain projections to the hypothalamic-pituitary-adrenal (HPA) axis. We studied the effect of a two-month EAM and EJ program in 18-month-old male Wistar rats randomly assigned to three groups: control (CO, n=6), EAM (n=8) and EJ (n=8). The exercise program was carried out 15 min/day, and the EAM group was housed in a large cage with different objects that were frequently renewed. Using cytochrome c oxidase (COx) histochemistry, we studied the metabolic activity of brain regions involved in the anxiety response. EAM reduced the metabolic activity of brain regions involved in activating the HPA axis (infralimbic cortex, basolateral amygdala and hypothalamic paraventricular nucleus) (p<0.05). In contrast, EJ increased the activity of regions involved in its inhibition (cingulate cortex, bed nucleus of the stria terminalis and dorsal hippocampus) (p<0.05). In conclusion, EAM and EJ appear to modulate in different ways the activity of brain regions projecting to the HPA axis and could constitute effective interventions to reduce anxiety levels during aging.

  19. Análise da potência aeróbia de futebolistas por meio de teste de campo e teste laboratorial Analysis of the aerobic power of soccer players by means of a field test and a laboratory test

    Directory of Open Access Journals (Sweden)

    Cristian Javier Ramirez Lizana

    2014-12-01

    Full Text Available Introduction: There are direct and indirect methods used by football clubs to evaluate, monitor and determine the VO2max of players, a measure that is very important for the performance and recovery of athletes during a match. Objective: To evaluate the level of correlation between VO2max measured by direct gas analysis and by the Yo-Yo Intermittent Recovery Level 1 field test (Yo-Yo IR1). Methods: Twenty-four under-20 soccer players from a club in the state of São Paulo, Brazil, participated in the study, with height 1.72±0.08 m and body mass 61.17±9.18 kg, and with at least five years of practice in the sport. The athletes performed the direct gas analysis test on a treadmill and, 48 hours later, performed the Yo-Yo IR1. Results: The results showed a significant correlation between the tests (r=0.524; p<0.01), although the Yo-Yo IR1 underestimated the direct laboratory gas analysis measurements (44.98 ml/kg/min and 48.14 ml/kg/min, respectively). Conclusion: The results indicated a moderate correlation between the VO2max measurements; thus, both tests can be used to analyze the aerobic power of soccer players, provided that the same protocol is repeated in subsequent evaluations.

  20. Benchmarking in national health service procurement in Scotland.

    Science.gov (United States)

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland. PMID:17958971

  1. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  2. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
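
    HPCC bundles several kernels (HPL, STREAM, RandomAccess, FFT and bandwidth/latency tests). Purely as an illustration of what a memory-bandwidth kernel measures, the sketch below times a STREAM-triad-like operation with NumPy; the array size, repetition count and reported GB/s are arbitrary and say nothing about the machines discussed in the record.

```python
import time
import numpy as np

# STREAM "triad" kernel: a[i] = b[i] + scalar * c[i].
# Toy, single-node illustration only; not a substitute for HPCC or IMB.
n = 10_000_000
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

best = float("inf")
for _ in range(5):                       # keep the best of several repetitions
    t0 = time.perf_counter()
    a = b + scalar * c
    best = min(best, time.perf_counter() - t0)

# STREAM convention counts three arrays of 8-byte doubles; NumPy creates an
# intermediate temporary, so the figure understates the true hardware bandwidth.
bytes_moved = 3 * n * 8
print(f"triad best time {best:.4f} s -> {bytes_moved / best / 1e9:.1f} GB/s")
```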

  3. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    Science.gov (United States)

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while the research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still image as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation.

  4. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  5. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    Directory of Open Access Journals (Sweden)

    João Rodrigues

    2015-05-01

    Full Text Available Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor of or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be considered, and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred over more cumbersome and only marginally more precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations, and populations with thousands of spiking units. Forward models for EEG consist of the simple three-shell head model, while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation of Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with
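
    To give a sense of the simplest generative model mentioned above, the sketch below generates a bivariate AR(2) process in which channel x drives channel y with a one-sample lag; all coefficients and noise levels are arbitrary choices that merely keep the process stable, not parameters taken from the paper. A Granger-type analysis of the output should recover the imposed x-to-y coupling.

```python
import numpy as np

# Bivariate AR(2) process with a directed influence x -> y (one-sample lag).
rng = np.random.default_rng(42)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(2, n):
    x[t] = 0.55 * x[t-1] - 0.80 * x[t-2] + rng.normal(scale=1.0)
    y[t] = 0.50 * y[t-1] - 0.70 * y[t-2] + 0.40 * x[t-1] + rng.normal(scale=1.0)

# Lagged cross-correlations alone cannot establish direction (y inherits x's
# autocorrelation); separating x -> y from y -> x is exactly what directed
# connectivity metrics such as Granger causality are benchmarked on.
print(np.corrcoef(x[1:-1], y[2:])[0, 1])
print(np.corrcoef(y[1:-1], x[2:])[0, 1])
```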

  6. Benchmarking novel approaches for modelling species range dynamics

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  7. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  8. Benchmarking farmer performance as an incentive for sustainable farming: environmental impacts of pesticides.

    Science.gov (United States)

    Kragten, S; De Snoo, G R

    2003-01-01

    Pesticide use in The Netherlands is very high, and pesticides are found across all environmental compartments. Among individual farmers, though, there is wide variation in both pesticide use and the potential environmental impact of that use, providing policy leverage for environmental protection. This paper reports on a benchmarking tool with which farmers can compare their environmental and economic performance with that of other farmers, thereby serving as an incentive for them to adopt more sustainable methods of food production. The tool is also designed to provide farmers with a more detailed picture of the environmental impacts of their methods of pest management. It is interactive and available on the internet: www.agriwijzer.nl. The present version has been developed specifically for arable farmers, but it is to be extended to encompass other agricultural sectors, in particular horticulture (bulb flowers, stem fruits), as well as various other aspects of sustainability (nutrient inputs, 'on-farm' biodiversity, etc.). The benchmarking methodology was tested on a pilot group of 20 arable farmers, whose general response was positive. They proved to be more interested in comparative performance in terms of economic rather than environmental indicators. In their judgment the benchmarking tool can serve a useful purpose in steering them towards more sustainable forms of agricultural production. PMID:15151309

  9. LHC benchmarks from flavored gauge mediation

    Science.gov (United States)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y.

    2016-07-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDM's are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  10. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...

  11. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  12. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  13. SARNET benchmark on QUENCH-11. Final report

    International Nuclear Information System (INIS)

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from the water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments in the frame of the EC-supported LACOMERA program. It was to simulate ceasing pumps in case of a small break LOCA or a station blackout with a late depressurization of the primary system, starting with boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle with a low water injection rate. The test is characterized by an interaction of thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is a comparison between experimental data and analytical results to assess the reliability of the code prediction for different phases of an accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/-SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. As a basis of the present work, histories of temperatures, hydrogen production and other important variables were used. Besides, axial profiles at quench initiation and the final time of 7000 s, above all of temperatures, are presented. For most variables a mainstream of

  14. SARNET benchmark on QUENCH-11. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, A. [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. for Nuclear Research and Nuclear Energy; Drath, T. [Ruhr-Univ. Bochum (Germany). Energy Systems and Energy Economics; Duspiva, J. [Nuclear Research Inst., Rez (CZ). Dept. of Reactor Technology] (and others)

    2008-03-15

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from the water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments in the frame of the EC-supported LACOMERA program. It was to simulate ceasing pumps in case of a small break LOCA or a station blackout with a late depressurization of the primary system, starting with boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle with a low water injection rate. The test is characterized by an interaction of thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is a comparison between experimental data and analytical results to assess the reliability of the code prediction for different phases of an accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/-SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. As a basis of the present work, histories of temperatures, hydrogen production and other important variables were used. Besides, axial profiles at quench initiation and the final time of 7000 s, above all of temperatures, are presented. For most variables a mainstream of

  15. Development of an MPI benchmark program library

    International Nuclear Information System (INIS)

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale and high-performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. The MBL measures the performance of major MPI functions, such as point-to-point communications and collective communications, and the performance of major communication patterns that are often found in application programs. In this report, the description of the MBL and the performance analysis of MPI/SX measured on the SX-4 are presented. (author)
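
    The MBL itself is not listed in this record; purely as a generic illustration of the kind of point-to-point measurement such a library automates, the sketch below times a ping-pong exchange with the mpi4py package (message sizes, repetition counts and the use of mpi4py are assumptions; run with, e.g., "mpiexec -n 2 python pingpong.py").

```python
from mpi4py import MPI
import numpy as np

# Generic ping-pong latency/bandwidth sketch between ranks 0 and 1;
# not a reimplementation of the MBL.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000

for nbytes in (8, 1024, 1024 * 1024):
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    dt = (MPI.Wtime() - t0) / (2 * reps)     # one-way time per message
    if rank == 0:
        print(f"{nbytes:>8d} B  {dt * 1e6:8.2f} us  {nbytes / dt / 1e6:8.1f} MB/s")
```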

  16. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium-sized data collections, but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
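
    The toolkit described above is not reproduced here; the snippet below is only a minimal sketch of the kind of measurement such a toolkit automates, timing the construction of a toy inverted index and a few queries against it. The corpus, the queries and the dictionary-of-sets index are all hypothetical.

```python
import time
from collections import defaultdict

# Toy corpus and queries; a real harness would index much larger collections.
docs = {
    1: "benchmark results for text search engines",
    2: "indexing large collections of technical reports",
    3: "search and retrieval performance statistics",
}
queries = ["search", "benchmark", "collections"]

t0 = time.perf_counter()
index = defaultdict(set)                 # term -> set of document ids
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)
index_time = time.perf_counter() - t0

t0 = time.perf_counter()
hits = {q: sorted(index.get(q, ())) for q in queries}
query_time = time.perf_counter() - t0

print(f"indexed {len(docs)} docs in {index_time * 1e3:.3f} ms")
print(f"ran {len(queries)} queries in {query_time * 1e3:.3f} ms: {hits}")
```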

  17. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative. The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii the error in linear trend estimates and (iii traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
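
    As a rough illustration of two of the metrics named above, the following sketch computes a centered root mean square error against the true homogeneous series and the error in the linear trend estimate; the variable names and toy data are invented for illustration and are not the HOME benchmark data:

        # Sketch of two of the skill metrics described above (names and toy data invented).
        import numpy as np

        def centered_rmse(homogenized, truth):
            """Centered RMSE: compare anomalies, so a constant offset is not penalized."""
            h = homogenized - homogenized.mean()
            t = truth - truth.mean()
            return np.sqrt(np.mean((h - t) ** 2))

        def trend_error(homogenized, truth, years):
            """Difference in linear trend (units per year) between the two series."""
            return np.polyfit(years, homogenized, 1)[0] - np.polyfit(years, truth, 1)[0]

        # Toy example: 50 years of annual temperatures with one inserted break of +0.8 K.
        rng = np.random.default_rng(0)
        years = np.arange(1951, 2001)
        truth = 0.02 * (years - years[0]) + rng.normal(0, 0.3, years.size)
        raw = truth.copy()
        raw[25:] += 0.8                              # the inhomogeneity to be removed
        print(centered_rmse(raw, truth), trend_error(raw, truth, years))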

  18. Benchmarking and testing the "Sea Level Equation

    Science.gov (United States)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and

  19. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions (the boundary conditions describe the time-dependent details of each prediction scenario). The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed

  20. International benchmarking of telecommunications prices and price changes

    OpenAIRE

    Productivity Commission

    2002-01-01

    The report, a series of international benchmarking studies conducted by the Productivity Commission, compares Australian telecommunications prices, price changes and regulatory arrangements with those in nine other OECD countries, updating a similar study, International Benchmarking of Australian Telecommunications Services, released in March 1999.

  1. A timesharing benchmark on an IBM/370-168

    International Nuclear Information System (INIS)

    For the preparation of a planned exchange of an IBM/370-158 CPU with an 168 CPU, a benchmark was run. Five different configurations were investigated using two types of benchmark scripts and the two timesharing operating systems, OS/VS2 Rel. 3 and TSS/370 Rel. 2. The description of the measurements and the results are presented. (orig.)

  2. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab

  3. A Competitive Benchmarking Study of Noncredit Program Administration.

    Science.gov (United States)

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  4. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
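
    The kind of comparison quoted above, a calculated keff checked against the benchmark value within a stated number of standard deviations, can be sketched as follows; the numerical values are placeholders, not the evaluated HTR-PROTEUS results:

        # Sketch of a calculated-vs-benchmark k_eff comparison (numbers are placeholders).
        def compare_keff(k_calc, k_bench, sigma_bench, n_sigma=3):
            diff_pcm = (k_calc - k_bench) * 1e5          # difference in pcm
            within = abs(k_calc - k_bench) <= n_sigma * sigma_bench
            return diff_pcm, within

        diff, ok = compare_keff(k_calc=1.0052, k_bench=1.0000, sigma_bench=0.0025)
        print(f"C - E = {diff:.0f} pcm, within 3 sigma: {ok}")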

  5. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  6. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    Science.gov (United States)

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  7. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  8. 42 CFR 457.420 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 457.420 Section 457.420 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... State Plan Requirements: Coverage and Benefits § 457.420 Benchmark health benefits coverage....

  9. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. 26 fifth graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in using the benchmark strategy when comparing fractions…

  10. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David;

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  11. Selecting indicators for international benchmarking of radiotherapy centres

    NARCIS (Netherlands)

    Lent, van W.A.M.; Beer, de R. D.; Triest, van B.; Harten, van W.H.

    2013-01-01

    Introduction: Benchmarking can be used to improve hospital performance. It is however not easy to develop a concise and meaningful set of indicators on aspects related to operations management. We developed an indicator set for managers and evaluated its use in an international benchmark of radiotherapy centres.

  12. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and ‘soft’ aspects of BIM.

  13. A Benchmark Evaluation of Fault Tolerant Wind Turbine Control Concepts

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2015-01-01

    A benchmark model for wind turbine fault detection and isolation, and FTC has previously been proposed. Based on this benchmark, an international competition on wind turbine FTC was announced. In this brief, the top three solutions from that competition are presented and evaluated. The analysis shows that all...

  14. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  15. Developing Benchmarks to Measure Teacher Candidates' Performance

    Science.gov (United States)

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  16. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  17. What Are the ACT College Readiness Benchmarks? Information Brief

    Science.gov (United States)

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  18. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  19. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  20. Benchmarking Mentoring Practices: A Case Study in Turkey

    Science.gov (United States)

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2010-01-01

    Throughout the world standards have been developed for teaching in particular key learning areas. These standards also present benchmarks that can assist to measure and compare results from one year to the next. There appears to be no benchmarks for mentoring. An instrument devised to measure mentees' perceptions of their mentoring in primary…

  1. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  2. The Use of Educational Standards and Benchmarks in Indicator Publications

    Science.gov (United States)

    Thomas, Sally; Peng, Wen-Jung

    2004-01-01

    This paper examines the use of educational standards and benchmarks in international indicator and other relevant policy publications, particularly those originating in the UK. The authors first examine what is meant by educational standards and benchmarks and how these concepts are defined. Then, they address the use of standards and benchmarks…

  3. Benchmarking 232Th Evaluations with KBR and THOR Experiments

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The n+232Th evaluations from CENDL-3.1, ENDF/B-VII.0, JENDL-3.3 and JENDL-4.0 were tested with the KBR series and the THOR benchmark from the ICSBEP Handbook. THOR is a Plutonium-Metal-Fast (PMF) criticality benchmark reflected with thorium metal.

  4. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different ty

  5. A Protein Classification Benchmark collection for machine learning

    NARCIS (Netherlands)

    Sonego, P.; Pacurar, M.; Dhir, S.; Kertész-Farkas, A.; Kocsor, A.; Gáspári, Z.; Leunissen, J.A.M.; Pongor, S.

    2007-01-01

    Protein classification by machine learning algorithms is now widely used in structural and functional annotation of proteins. The Protein Classification Benchmark collection (http://hydra.icgeb.trieste.it/benchmark) was created in order to provide standard datasets on which the performance of machin

  6. Logistics Cost Modeling in Strategic Benchmarking Project : cases: CEL Consulting & Cement Group A

    OpenAIRE

    Nguyen Cong Minh, Vu

    2010-01-01

    This thesis deals with logistics cost modeling for a benchmarking project as consulting service from CEL Consulting for Cement Group A. The project aims at providing flows and cost comparison of bagged cement of all cement players to relevant markets in Indonesia. The results of the project yielded strategic elements for Cement Group A in planning their penetration strategy with new investments. Due to the specific needs, Cement Group A requested a flexible costing approach taking into ...

  7. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, an evaluation of the practices carried out in the waste management field at the Spanish nuclear power plants has been performed following the Benchmarking methodology. This process has allowed the identification of opportunities to improve waste treatment processes, to reduce the volume of waste, to reduce management costs, and to establish management routes for waste streams which do not yet have one. (Author)

  8. Aerobic capacity training through aquatic therapy in children with spastic diplegia-type cerebral palsy

    Directory of Open Access Journals (Sweden)

    Nandy Fajardo-López

    2013-12-01

    Full Text Available Background. Spastic diplegia-type cerebral palsy produces changes in the cardiovascular system that affect aerobic capacity. Aquatic therapy is an optimal therapeutic strategy both for managing this population and for training aerobic capacity, because of the physiological responses it elicits and because it makes it easier to impose greater loads on the cardiovascular system with lower risk than on land. Objective. To identify the characteristics that a physiotherapy intervention proposal should have for training aerobic capacity in boys and girls between 8 and 12 years of age with spastic diplegia-type cerebral palsy, using aquatic therapy. Materials and methods. A descriptive-propositional study was carried out, in which an intervention proposal was formulated on the basis of information collected from bibliographic references. Results. The results are presented as an intervention proposal, describing in detail the phases of aerobic capacity training according to the principles of training and exercise prescription, taking into account the physiological responses to load as well as the characteristics of the population. Conclusion. Spastic diplegia-type cerebral palsy produces changes in aerobic capacity; therefore, physiotherapists should include it as one of the objectives of the rehabilitation process. To achieve this, aquatic therapy is an optimal treatment modality, since it provides greater safety of movement and favorable physiological responses in the cardiovascular system.

  9. Effects of aerobic training on exercise-induced bronchospasm and on metabolic and inflammatory parameters in overweight adolescents

    OpenAIRE

    Cieslak, Fabrício

    2013-01-01

    Abstract: Studies have assessed the relationship between asthma and obesity in children and adolescents; however, investigations of the effects of physical exercise on inflammatory parameters and of its impact on exercise-induced bronchospasm (EIB) are limited. The objective of this study was to evaluate the effects of 12 weeks of aerobic training on EIB and on metabolic and inflammatory parameters in overweight adolescents. A longitudinal, experimental, comparative and correlational study ...

  10. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
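
    The style of repeated, synchronized timing of a collective operation that such a benchmark performs can be illustrated as follows; this uses mpi4py rather than SKaMPI's own code, and the message sizes, repetition count, reporting format and script name are illustrative assumptions:

        # SKaMPI-style timing of one collective operation (illustrative; not SKaMPI itself).
        # Run with, e.g.: mpiexec -n 8 python allreduce_bench.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        REPS = 200

        for count in (1, 1024, 65536):                   # number of doubles to reduce
            sendbuf = np.ones(count)
            recvbuf = np.empty(count)
            times = np.empty(REPS)
            for r in range(REPS):
                comm.Barrier()                           # common start for every repetition
                t0 = MPI.Wtime()
                comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
                times[r] = MPI.Wtime() - t0
            # The slowest rank defines the cost, so take the maximum of the per-rank medians.
            t_max = comm.allreduce(float(np.median(times)), op=MPI.MAX)
            if comm.Get_rank() == 0:
                print(f"Allreduce of {count:>6} doubles: {t_max * 1e6:8.2f} us (median)")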

  11. Benchmark 2 - Springback of a draw / re-draw panel: Part A: Benchmark description

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing; Chen, Zhong

    2013-12-01

    Numerical methods have been effectively implemented to predict springback behavior of complex stampings to reduce die tryout through compensation and produce dimensionally accurate products after forming and trimming. However, accurate prediction of the sprung shape of a panel formed with an initial draw followed with a restrike forming step remains a difficult challenge. The objective of this benchmark was to predict the sprung shape after stamping, restriking and trimming a sheet metal panel. A simple, rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of thickness strain and position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. These measurements were used to evaluate numerical analysis predictions submitted by benchmark participants. Additional panels were drawn to "failure" during both the first draw and the re-draw in order to set the parameters for the springback trials and to demonstrate that a sheet metal going through a re-strike operation can exceed conventional forming limits of that under a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and aluminum alloy 5182-O.

  12. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  13. Quantum benchmarks for pure single-mode Gaussian states.

    Science.gov (United States)

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875
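
    The coherent-state value of 1/2 quoted above can be illustrated with a small Monte Carlo sketch of the standard classical measure-and-prepare strategy (heterodyne detection followed by re-preparation); the normalization convention and sample size are assumptions, and this is not the paper's general derivation for arbitrary Gaussian states:

        # Monte Carlo check of the 1/2 classical benchmark for coherent states:
        # heterodyne-measure |alpha>, re-prepare the coherent state at the outcome z,
        # and average the fidelity |<alpha|z>|^2 = exp(-|alpha - z|^2).
        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000
        alpha = rng.normal(size=N) + 1j * rng.normal(size=N)   # random input amplitudes
        # Heterodyne outcome: complex Gaussian centred on alpha with one unit of added noise.
        noise = (rng.normal(scale=np.sqrt(0.5), size=N)
                 + 1j * rng.normal(scale=np.sqrt(0.5), size=N))
        z = alpha + noise
        fidelity = np.exp(-np.abs(alpha - z) ** 2)
        print(fidelity.mean())   # ~0.5: the value any genuinely quantum protocol must beat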

  14. Intra and inter-organizational learning from benchmarking IS services

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Kræmmergaard, Pernille; Hansen, Bettina

    2016-01-01

    This paper reports a case study of benchmarking IS services in Danish municipalities. Drawing on Holmqvist’s (2004) organizational learning model of exploration and exploitation, the paper explores intra and inter-organizational learning dynamics among Danish municipalities that have been involved in benchmarking their IS services and functions since 2006. Particularly, this research tackled existing IS benchmarking approaches and methods by turning to a learning-oriented perspective and by empirically exploring the dynamic process of intra and inter-organizational learning from benchmarking IS/IT services. The paper also makes a contribution by emphasizing the importance of informal cross-municipality consortiums to facilitate learning and experience sharing across municipalities. The findings of the case study demonstrated that the IS benchmarking scheme is relatively successful in sharing good practices...

  15. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    Science.gov (United States)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code--the reactive transport codes play a supporting role in this regard—but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  16. Effect of aerobic exercise on plasma renin in overweight patients with hypertension

    Directory of Open Access Journals (Sweden)

    Bruno Martinelli

    2010-07-01

    Full Text Available BACKGROUND: The activity of the renin-angiotensin-aldosterone system is directly related to overweight and a sedentary lifestyle, and these variables are associated with hypertension. Aerobic exercise provides better blood pressure (BP) control by acting on the mechanisms of pressure regulation, among them plasma renin activity (PRA). OBJECTIVE: To assess the effect of aerobic exercise on PRA in overweight patients with hypertension. METHODS: Blood pressure, biochemical and anthropometric measurements were assessed before and after a 16-week training program performed three times a week at 60%-80% of maximum heart rate. Data were expressed as mean ± standard deviation or median and interquartile range, and analyzed with the t test, the Mann-Whitney test and ANOVA (p

  17. Aerobic capacity of rats fed a fructose-rich diet

    Directory of Open Access Journals (Sweden)

    Rodrigo Ferreira de Moura

    2008-10-01

    Full Text Available INTRODUCTION: Evidence indicates that excessive fructose intake may trigger disturbances characteristic of the metabolic syndrome. OBJECTIVES: To analyze the effects of a fructose-rich diet on metabolic aspects of Wistar rats and, additionally, to verify their aerobic capacity through identification of the maximal lactate steady state (MLSS). METHODS: Sixteen rats were separated into two groups of eight animals: (a) control, fed a balanced diet, and (b) fructose, fed a fructose-rich diet. Glucose tolerance (area under the serum glucose curve during a glucose tolerance test), insulin sensitivity (glucose disappearance rate after exogenous insulin administration), serum lipid profile and blood lactate concentration during exercise at the MLSS intensity were analyzed. RESULTS: An unpaired t test (p<0.05) revealed differences between groups in the area under the glucose curve and in serum triglycerides, although no difference in insulin sensitivity or blood lactate was detected. One-way ANOVA with a Newman-Keuls post hoc test revealed differences in

  18. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware dependent.
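
    The byte/flop argument above can be made concrete with a small roofline-style sketch; only the 0.2 and 0.01 byte/flop values are taken from the abstract, while the peak and bandwidth figures are invented for illustration:

        # Roofline-style check of the byte/flop argument made above.
        def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
            """Attainable performance = min(peak, bandwidth * operational intensity)."""
            return min(peak_gflops, bandwidth_gbs * flops_per_byte)

        machine_balance = 0.2      # byte/flop cited for Xeon Phi, Kepler and BlueGene/Q
        fmm_requirement = 0.01     # byte/flop the FMM can be reduced to
        # An algorithm stays compute bound when it needs fewer bytes per flop than the
        # machine can deliver per flop of peak.
        print("FMM compute bound:", fmm_requirement < machine_balance)

        # Invented machine: 1000 Gflop/s peak and 200 GB/s memory bandwidth (balance 0.2).
        for name, intensity in [("FMM (~100 flop/byte)", 100.0),
                                ("low-intensity kernel (~0.5 flop/byte)", 0.5)]:
            print(name, attainable_gflops(1000.0, 200.0, intensity), "Gflop/s")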

  19. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    Energy Technology Data Exchange (ETDEWEB)

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool based on commercially available efficiency technologies used anywhere in the world applicable to the cement industry. This version has been designed for use in China. No actual cement facility with every single efficiency measure included in the benchmark will likely exist; however, the benchmark sets a reasonable standard by which to compare for plants striving to be the best. The energy consumption of the benchmark facility differs due to differences in processing at a given cement facility. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific for his/her cement facility. Figure 1 shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy efficient cement facility, so that it represents a facility similar to the user's cement facility, the user is first required to input production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility, rather than benchmarking against a generic one. The input variables required include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used post-clinker stage), municipal wastes and others); the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per

  20. Quantitative consistency testing of thermal benchmark lattice experiments

    International Nuclear Information System (INIS)

    The paper sets forth a general method to demonstrate the quantitative consistency (or inconsistency) of results of thermal reactor lattice experiments. The method is of particular importance in selecting standard ''benchmark'' experiments for comparison testing of lattice analysis codes and neutron cross sections. ''Benchmark'' thermal lattice experiments are currently selected by consensus, which usually means the experiment is geometrically simple, well-documented, reasonably complete, and qualitatively consistent. A literature search has not revealed any general quantitative test that has been applied to experimental results to demonstrate consistency, although some experiments must have been subjected to some form or other of quantitative test. The consistency method is based on a two-group neutron balance condition that is capable of revealing the quantitative consistency (or inconsistency) of reported thermal benchmark lattice integral parameters. This equation is used in conjunction with a second equation in the following discussion to assess the consistency (or inconsistency) of: (1) several Cross Section Evaluation Working Group (CSEWG) defined thermal benchmark lattices, (2) SRL experiments on the Mark 5R and Mark 15 lattices, and (3) several D2O lattices encountered as proposed thermal benchmark lattices. Nineteen thermal benchmark lattice experiments were subjected to a quantitative test of consistency between the reported experimental integral parameters. Results of this testing showed only two lattice experiments to be generally useful as ''benchmarks,'' three lattice experiments to be of limited usefulness, three lattice experiments to be potentially useful, and 11 lattice experiments to be not useful. These results are tabulated with the lattices identified

  1. Benchmarking risk management within the international water utility sector. Part I: Design of a capability maturity methodology.

    OpenAIRE

    MacGillivray, Brian H.; Sharp, J. V.; Strutt, J.E.; Hamilton, Paul D.; Pollard, Simon J. T.

    2007-01-01

    Risk management in the water utility sector is becoming increasingly explicit. However, due to the novelty and complexity of the discipline, utilities are encountering difficulties in defining and institutionalising their risk management processes. In response, the authors have developed a sector specific capability maturity methodology for benchmarking and improving risk management. The research, conducted in consultation with water utility practitioners, has distilled risk...

  2. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    Science.gov (United States)

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…
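
    A slope in this setting is typically an ordinary least-squares growth rate in words correct per minute; a minimal sketch follows, with an invented benchmark schedule and invented probe scores rather than the study's actual data:

        # Sketch of an R-CBM slope: least-squares growth in words correct per minute
        # (WCPM) per week, from a few probes (scores and schedule invented).
        import numpy as np

        def rcbm_slope(weeks, wcpm):
            """Return WCPM gained per week, fitted by ordinary least squares."""
            slope, _intercept = np.polyfit(weeks, wcpm, 1)
            return slope

        weeks_benchmark = np.array([1, 9, 18])               # one probe per benchmark window
        wcpm_benchmark = np.array([22, 31, 40])
        print(rcbm_slope(weeks_benchmark, wcpm_benchmark))   # ~1.06 WCPM per week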

  3. LHC Benchmarks from Flavored Gauge Mediation

    CERN Document Server

    Ierushalmi, N; Lee, G; Nepomnyashy, V; Shadmi, Y

    2016-01-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark - stop, scharm, or sup - is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. We find that, even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  4. MC benchmarks for GERDA LAr veto designs

    International Nuclear Information System (INIS)

    The Germanium Detector Array (GERDA) experiment is designed to search for neutrinoless double beta decay of 76Ge and is able to directly test the present claim by part of the Heidelberg-Moscow Collaboration. The experiment recently started its first physics phase with eight enriched detectors, after a 17-month commissioning period. GERDA operates an array of HPGe detectors in liquid argon (LAr), which acts both as a shield against external backgrounds and as a cryogenic coolant. Furthermore, the LAr can be instrumented and therefore used as an active veto for background events through the detection of the scintillation light it produces. In this talk, Monte Carlo studies for benchmarking and optimizing different LAr veto designs are presented. LAr scintillates at 128 nm, which, combined with the cryogenic temperature at which the detector is operated and the optical properties of the argon, poses many challenges in the design of an efficient veto that would help the experiment to reduce the total background level by one order of magnitude, as is the goal for the second physics phase of the experiment.

  5. KENO-IV code benchmark calculation, (4)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multi-group constants library MGCL. The present paper describes the results of a test using criticality experiments on slab-cylinder systems of uranium nitrate solution. In all, 128 experimental cases have been calculated for the slab-cylinder configuration with and without a plexiglass reflector, covering various critical parameters such as the number of cylinders and the height of the uranium nitrate solution. Among several important results, it is shown that the code and library give a fairly good multiplication factor, that is, k_eff ≈ 1.0 for heavily reflected cases, whereas k_eff ≈ 0.91 for the unreflected ones. This suggests the necessity of a more advanced treatment of the criticality calculation for systems where neutrons can easily leak out during the slowing-down process. (author)

  6. One dimensional benchmark calculations using diffusion theory

    International Nuclear Information System (INIS)

    This is a comparative study using different one-dimensional diffusion codes that are available at our Nuclear Engineering Department. Some modifications have been made in the codes used in order to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries using the 'forward elimination - backward substitution' technique. The DIFFUSE code calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes as well, and is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added, and some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version and the results are compared with the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good in results such as k-eff and the flux distributions for the two case studies. (author)
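
    The forward elimination / backward substitution technique mentioned above is the Thomas algorithm for tridiagonal systems; a minimal one-group, fixed-source slab sketch is given below, with illustrative material data that are not taken from the codes or benchmarks discussed:

        # One-group, fixed-source slab diffusion sketch using forward elimination and
        # backward substitution (the Thomas algorithm).  Data are illustrative only.
        import numpy as np

        def solve_tridiagonal(a, b, c, d):
            """a = sub-, b = main-, c = super-diagonal, d = right-hand side."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                        # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):               # backward substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Slab of width 10 cm, D = 1 cm, Sigma_a = 0.02 /cm, unit source, phi = 0 at the edges.
        L, n = 10.0, 200
        h = L / (n + 1)
        D, sig_a, S = 1.0, 0.02, 1.0
        a = np.full(n, -D / h**2); a[0] = 0.0            # sub-diagonal
        c = np.full(n, -D / h**2); c[-1] = 0.0           # super-diagonal
        b = np.full(n, 2.0 * D / h**2 + sig_a)           # main diagonal
        d = np.full(n, S)
        phi = solve_tridiagonal(a, b, c, d)
        print(phi.max())                                 # peak flux at the slab centre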

  7. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible, since the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  8. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, science to measure and record rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  9. Effect of noise correlations on randomized benchmarking

    Science.gov (United States)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
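
    For orientation, RB outcomes of this kind are conventionally summarized by fitting the average sequence fidelity to the zeroth-order model F(m) = A p^m + B; the sketch below does exactly that on synthetic data (the fit model and the error-per-Clifford formula are standard, but the numbers are invented):

        # Standard zeroth-order RB fit F(m) = A * p**m + B on synthetic data; the average
        # error per Clifford is r = (1 - p) * (d - 1) / d for a d-dimensional system.
        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            return A * p**m + B

        rng = np.random.default_rng(1)
        lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
        survival = rb_decay(lengths, 0.5, 0.5, 0.995) + rng.normal(0, 0.005, lengths.size)

        (A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=(0.5, 0.5, 0.99))
        r = (1 - p) * (2 - 1) / 2                        # single qubit, d = 2
        print(f"p = {p:.4f}, average error per Clifford r = {r:.2e}")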

  10. OECD/NEA International Benchmark Exercises: Validation of CFD codes applied to the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, due among other factors to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. The International Benchmark Exercises (IBE) sponsored by the OECD/NEA have therefore been fundamental for analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all the benchmarks proposed by the NEA, as well as in the expert meetings. This paper summarizes the participation in the various IBEs, describing each benchmark, the CFD model created for it, and the main conclusions. (Author)

  11. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III

  12. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment

    1996-06-01

    This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This revision also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  13. Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction

    Science.gov (United States)

    Rohrer, Brandon

    2010-12-01

    Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementation. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.

  14. Benchmark and gap analysis of current mask carriers vs future requirements: example of the carrier contamination

    Science.gov (United States)

    Fontaine, H.; Davenet, M.; Cheung, D.; Hoellein, I.; Richsteiger, P.; Dejaune, P.; Torsy, A.

    2007-02-01

    In the frame of the European Medea+ 2T302 MUSCLE project, an extensive mask carrier benchmark was carried out in order to evaluate whether existing containers meet the needs of the 65 nm technology node. Ten different containers, currently used or expected in the future all along the mask supply chain (blank, maskhouse and fab carriers), were selected at different steps of their life cycle (new; aged; aged and cleaned). The most critical parameters identified for analysis versus future technologies were: automation, particle contamination, chemical contamination (organic outgassing, ionic contamination), cleanability, ESD, airtightness and purgeability. Experimental protocols based on suitable methods were then developed and implemented to test each criterion. The benchmark results are presented, giving a "state of the art" of currently available mask carriers and allowing a gap analysis of the tested parameters against future needs. This approach is detailed through the particular case of carrier contamination measurements. Finally, this benchmark and gap analysis leads to proposed mask carrier specifications (and the associated test protocols) for various key parameters, which can also be taken as guidelines for standardization for the 65 nm technology. The analysis also indicates that none of the tested carriers fulfills all of the proposed specifications.

  15. Benchmarking: retos y riesgos para el ingeniero industrial

    Directory of Open Access Journals (Sweden)

    Sergio Humberto Romo Picazo

    2000-01-01

    Full Text Available The analysis and problem-solving methods traditionally used by the industrial engineer can take on a different approach by using the concept of benchmarking. Benchmarking is a very important tool for industrial engineers, since, when properly applied, it leads to process improvement. Nevertheless, every industrial engineer should be aware of the limitations and risks involved in the decision to carry out a benchmarking project; some of them are pointed out here.

  16. Simulating diffusion processes in discontinuous media: Benchmark tests

    Science.gov (United States)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them on four different algorithms. The results of the numerical tests give a valuable insight into the fine behavior of these schemes, as well as rules to choose between them.
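    As a hedged illustration of the kind of quantity such benchmark tests target, the Python sketch below runs a naive Euler scheme for one-dimensional diffusion with a coefficient that jumps at x = 0 and estimates the repartition of mass on each side of the interface. The diffusion coefficients, time step, and particle count are assumptions, and the reference value quoted is the one commonly used for the flux-continuity (divergence-form) interpretation of the problem.

```python
# Minimal sketch (assumed parameters) of one benchmark quantity: a naive Euler scheme
# for 1D diffusion with a discontinuous coefficient, used to estimate the repartition
# of mass on each side of the interface at x = 0.
import numpy as np

rng = np.random.default_rng(1)
d_left, d_right = 0.1, 1.0            # diffusion coefficients on each side of x = 0
dt, n_steps, n_particles = 1e-4, 2000, 100_000

x = np.zeros(n_particles)             # all particles start on the interface
for _ in range(n_steps):
    d = np.where(x < 0.0, d_left, d_right)
    x += np.sqrt(2.0 * d * dt) * rng.normal(size=n_particles)

frac_right = (x > 0.0).mean()
# Commonly quoted repartition for the flux-continuity (divergence-form) case is
# sqrt(D+)/(sqrt(D-)+sqrt(D+)); a naive scheme is known to be biased near the
# interface, which is exactly what such benchmark tests expose.
reference = np.sqrt(d_right) / (np.sqrt(d_left) + np.sqrt(d_right))
print(f"estimated mass on x>0: {frac_right:.3f}   reference value: {reference:.3f}")
```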

  17. The OpenupEd quality label: benchmarks for MOOCs

    OpenAIRE

    Rosewell, Jonathan; Jansen, Darco

    2014-01-01

    In this paper we report on the development of the OpenupEd Quality Label, a self-assessment and review quality assurance process for the new European OpenupEd portal (www.openuped.eu) for MOOCs (massive open online courses). This process is focused on benchmark statements that seek to capture good practice, both at the level of the institution and at the level of individual courses. The benchmark statements for MOOCs are derived from benchmarks produced by the E-xcellence e-learning quality p...

  18. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  19. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
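    A minimal numerical illustration of this attenuation effect, with all parameter values assumed rather than taken from the study, is sketched below: classical measurement error in the exposure biases the fitted dose-response slope toward zero and therefore inflates the estimated benchmark dose.

```python
# Toy illustration (all numbers assumed) of the attenuation effect described above:
# classical measurement error in the exposure biases the dose-response slope toward
# zero and pushes the estimated benchmark dose (BMD) to higher, less protective values.
import numpy as np

rng = np.random.default_rng(2)
n, beta, bmr = 5000, 0.5, 1.0                  # sample size, true slope, benchmark response
x = rng.uniform(0.0, 10.0, n)                  # true exposure
y = beta * x + rng.normal(0.0, 1.0, n)         # response
w = x + rng.normal(0.0, 2.0, n)                # measured exposure with classical error

slope_true = np.polyfit(x, y, 1)[0]
slope_err = np.polyfit(w, y, 1)[0]
print(f"BMD using true exposure:   {bmr / slope_true:.2f}")
print(f"BMD using noisy exposure:  {bmr / slope_err:.2f}   (biased upward)")
```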

  20. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state estimation set- membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque such that the overall model of the turbine...... of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates effectiveness of the method for fault detection of the benchmark wind turbine....

  1. Implementation of benchmark management in quality assurance audit activities

    International Nuclear Information System (INIS)

    The concept of Benchmark Management is that the practices of the best competitor are taken as the benchmark, the gap between that competitor and one's own institute is analyzed and studied, and effective actions are taken to catch up with and even surpass the competitor. Drawing on many years of quality assurance audit practice, this paper analyzes and rebuilds the whole quality assurance audit process using the concept of Benchmark Management, in order to improve the level and effectiveness of quality assurance audit activities. (author)

  2. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  3. Project ATTACK and Project VISTA: Benchmark studies on the road to NATO's early TNF policy

    International Nuclear Information System (INIS)

    This paper is concerned with those studies and analyses that affected early NATO nuclear policy and force structure. The discussion focuses specifically on two "benchmark" activities: Project VISTA and Project ATTACK. These two studies were chosen less because one can document their direct impact on NATO nuclear policy and more because they capture the state of thinking about tactical nuclear weapons at a particular point in time. Project VISTA offers an especially important benchmark in this respect. Project ATTACK is a rather different kind of benchmark. It is not a pathbreaking study. It is much narrower and more technical than VISTA. It appears to have received no public attention. Project ATTACK is interesting because it seems to capture a "nuts-and-bolts" feel for how U.S. (and thereby NATO) theater nuclear policy was evolving prior to MC 48. The background and context for Project VISTA and Project ATTACK are presented and discussed

  4. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  5. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  6. OECD/NEA main steam line break PWR benchmark simulation by TRACE/S3K coupled code

    International Nuclear Information System (INIS)

    A coupling between the TRACE system thermal-hydraulics code and the SIMULATE-3K (S3K) three-dimensional reactor kinetics code has been developed in a collaboration between the Paul Scherrer Institut (PSI) and Studsvik. In order to verify the coupling scheme and the coupled code capabilities with regard to plant transients, the OECD/NEA Main Steam Line Break PWR benchmark was simulated with the coupled TRACE/S3K code. The core/plant system data were taken from the benchmark specifications, while the nuclear data were generated with Studsvik's lattice code CASMO-4 and the core analysis code SIMULATE-3. The TRACE/S3K results were compared with the published results obtained by the 17 participants of the benchmark. The comparison shows that the TRACE/S3K code satisfactorily reproduces the main transient parameters, namely, the power and reactivity history, steam generator inventory, and pressure response. (author)

  7. Benchmarking the Calculation of Stochastic Heating and Emissivity of Dust Grains in the Context of Radiative Transfer Simulations

    CERN Document Server

    Camps, Peter; Bianchi, Simone; Lunttila, Tuomas; Pinte, Christophe; Natale, Giovanni; Juvela, Mika; Fischera, Joerg; Fitzgerald, Michael P; Gordon, Karl; Baes, Maarten; Steinacker, Juergen

    2015-01-01

    We define an appropriate problem for benchmarking dust emissivity calculations in the context of radiative transfer (RT) simulations, specifically including the emission from stochastically heated dust grains. Our aim is to provide a self-contained guide for implementors of such functionality, and to offer insights in the effects of the various approximations and heuristics implemented by the participating codes to accelerate the calculations. The benchmark problem definition includes the optical and calorimetric material properties, and the grain size distributions, for a typical astronomical dust mixture with silicate, graphite and PAH components; a series of analytically defined radiation fields to which the dust population is to be exposed; and instructions for the desired output. We process this problem using six RT codes participating in this benchmark effort, and compare the results to a reference solution computed with the publicly available dust emission code DustEM. The participating codes implement...

  8. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    Science.gov (United States)

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as a system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single score indicator were used for the environmental assessment, while LCC was used for value assessment. Results demonstrated that substantial differences can be observed between different WWTPs depending on a wide range of factors such as plant configuration, plant size or even legal discharge limits. Based on these results the benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for rating WWTPs is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework. PMID:27235897

  10. An experimental phylogeny to benchmark ancestral sequence reconstruction.

    Science.gov (United States)

    Randall, Ryan N; Radford, Caelan E; Roof, Kelsey A; Natarajan, Divya K; Gaucher, Eric A

    2016-01-01

    Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as 'modern' sequences that we subject to ASR analyses using various algorithms and to benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had minor effect on the inference of ancestral sequences. PMID:27628687

  11. Community-based benchmarking of the CMIP DECK experiments

    Science.gov (United States)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well-organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select those codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.

  12. Structural Benchmark Testing for Stirling Convertor Heater Heads

    Science.gov (United States)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high efficiency Stirling technology for potential use on long duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long lifetimes required, a structurally significant design limit for the Stirling convertor heater head is creep deformation, induced even under relatively low stress levels at high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their lifetimes.

  13. A comparison and benchmark of two electron cloud packages

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, Paul L.G.; Amundson, James F; Spentzouris, Panagiotis G; Veitzer, Seth A

    2012-01-01

    We present results from precision simulations of the electron cloud (EC) problem in the Fermilab Main Injector using two distinct codes. These two codes are (i) POSINST, an F90 2D+ code, and (ii) VORPAL, a 2D/3D electrostatic and electromagnetic code used for self-consistent simulations of plasma and particle beam problems. A specific benchmark has been designed to demonstrate the strengths of both codes that are relevant to the EC problem in the Main Injector. As differences between results obtained from these two codes were larger than the anticipated model uncertainties, a set of changes to the POSINST code was implemented. These changes are documented in this note. This new version of POSINST now gives EC densities that agree with those predicted by VORPAL, within ~20%, in the beam region. The remaining differences are most likely due to differences in the electrostatic Poisson solvers. From a software engineering perspective, these two codes are very different. We comment on the pros and cons of both approaches. The design(s) for a new EC package are briefly discussed.

  14. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy

    Directory of Open Access Journals (Sweden)

    Quentin Wheeler

    2012-07-01

    Full Text Available Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an “r” strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a “K” strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the “K” strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the

  15. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy.

    Science.gov (United States)

    Wheeler, Quentin; Bourgoin, Thierry; Coddington, Jonathan; Gostony, Timothy; Hamilton, Andrew; Larimer, Roy; Polaszek, Andrew; Schauff, Michael; Solis, M Alma

    2012-01-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an "r" strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a "K" strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the "K" strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the Smithsonian Institution

  16. Benchmarking the QUAD4/TRIA3 element

    Science.gov (United States)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-09-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending and shear phenomena. They are also very new elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effect on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparison with those available in the literature and obtained from other programs like MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of QUAD4 and TRIA3 elements is the offset capability which allows the midsurface of the plate to be noncoincident with the surface of the grid points. None of the previous elements, with the exception of bar (beam), has this capability. The offset capability played a crucial role in the design of QUAD4 and TRIA3 elements. It allowed modeling layered composites, laminated plates and sandwich plates with the metal and composite face sheets. Even though the basic implementation of the offset capability is found to be sound in the previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For the purpose of simplicity, references in this paper to the QUAD4 element will also include the TRIA3 element.

  17. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning the ability of algorithms for automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  18. Building America Research Benchmark Definition: Updated December 2009

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.; Engebrecht, C.

    2010-01-01

    The Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without chasing a 'moving target.'

  19. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  20. A Framework for Systematic Benchmarking of Monitoring and Diagnostic Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we present an architecture and a formal framework to be used for systematic benchmarking of monitoring and diagnostic systems and for producing...

  1. Draft Mercury Aquatic Wildlife Benchmarks for Great Salt Lake Assessment

    Science.gov (United States)

    This document describes the EPA Region 8's rationale for selecting aquatic wildlife dietary and tissue mercury benchmarks for use in interpreting available data collected from the Great Salt Lake and surrounding wetlands.

  2. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Full Text Available Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information to find ideas and set guidelines for development. Benchmarking studies are often used in commerce, real estate, industry and high-tech software businesses.

  3. The State of Energy and Performance Benchmarking for Enterprise Servers

    Science.gov (United States)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.
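    A hedged sketch of such an interim energy-performance proxy is given below: throughput is aggregated over several load levels and divided by the corresponding power draw, loosely in the spirit of load-line server benchmarks; all load levels, throughputs, and power figures are invented for illustration.

```python
# Hedged sketch of an interim energy-performance metric: aggregate throughput per watt
# across load levels. The load levels, throughputs and power draws are made-up numbers.
loads = [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]                   # target utilisation levels (idle last)
ops = [950_000, 760_000, 570_000, 380_000, 190_000, 0]   # measured operations/s at each level
watts = [320, 285, 250, 215, 180, 150]                    # measured wall power at each level

overall = sum(ops) / sum(watts)                           # aggregate ops per watt
print(f"overall efficiency: {overall:,.0f} ops/W")
for load, o, w in zip(loads, ops, watts):
    print(f"  load {load:>4.0%}: {o / w:,.0f} ops/W")
```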

  4. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  5. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.
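    For reference, the standard single-qubit RB analysis that this protocol adapts can be sketched in a few lines of Python: survival probability versus sequence length is fitted to A·f^m + B and the decay parameter is converted into an average error per gate. The decay parameters and noise level below are illustrative assumptions, not data from the paper.

```python
# Minimal sketch (assumed numbers) of the standard single-qubit RB analysis: fit the
# survival probability versus sequence length m to A * f**m + B and convert the decay
# parameter f into an average error per gate.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
a_true, b_true, f_true = 0.5, 0.5, 0.995                  # illustrative decay parameters
survival = a_true * f_true ** lengths + b_true + rng.normal(0.0, 0.005, lengths.size)

def model(m, a, b, f):
    return a * f ** m + b

(a, b, f), _ = curve_fit(model, lengths, survival, p0=(0.5, 0.5, 0.99))
avg_error_per_gate = (1.0 - f) / 2.0                      # (d-1)/d with d = 2 for a qubit
print(f"fitted decay f = {f:.4f},  average error per gate ≈ {avg_error_per_gate:.2e}")
```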

  6. Hydraulic benchmark data for PWR mixing vane grid

    International Nuclear Information System (INIS)

    The purpose of the present study is to present new hydraulic benchmark data obtained for PWR rod bundles for the purpose of benchmarking Computational Fluid Dynamics (CFD) models of the rod bundle. The flow field in a PWR fuel assembly downstream of structural grids which have mixing vane grids attached is very complex due to the geometry of the subchannel and the high axial component of the velocity field relative to the secondary flows which are used to enhance the heat transfer performance of the rod bundle. Westinghouse has a CFD methodology to model PWR rod bundles that was developed with prior benchmark test data. As improvements in testing techniques have become available, further PWR rod bundle testing is being performed to obtain advanced data which has high spatial and temporal resolution. This paper presents the advanced testing and benchmark data that has been obtained by Westinghouse through collaboration with Texas A&M University. (author)

  7. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
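    The code-to-benchmark comparison described above amounts to expressing each calculated eigenvalue as a bias relative to the benchmark keff. The short sketch below does exactly that; the code/library labels and calculated values are hypothetical, chosen only to fall in the reported 0.3 to 0.8% range.

```python
# Simple sketch (illustrative numbers only) of the comparison described above:
# express calculated eigenvalues relative to the benchmark keff as a bias in percent.
benchmark_keff, benchmark_unc = 1.0012, 0.0029
calculated = {"MCNP5 / library A": 1.0050, "KENO-VI / library B": 1.0071}   # hypothetical values

for label, keff in calculated.items():
    bias_pct = 100.0 * (keff - benchmark_keff) / benchmark_keff
    print(f"{label:20s} k_eff = {keff:.4f}  bias = {bias_pct:+.2f}% "
          f"(benchmark {benchmark_keff:.4f} ± {benchmark_unc:.4f})")
```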

  8. Parton Shower Uncertainties with Herwig 7: Benchmarks at Leading Order

    CERN Document Server

    Bellm, Johannes; Plätzer, Simon; Schichtel, Peter; Siódmok, Andrzej

    2016-01-01

    We perform a detailed study of the sources of perturbative uncertainty in parton shower predictions within the Herwig 7 event generator. We benchmark two rather different parton shower algorithms, based on angular-ordered and dipole-type evolution, against each other. We deliberately choose leading order plus parton shower as the benchmark setting to identify a controllable set of uncertainties. This will enable us to reliably assess improvements by higher-order contributions in a follow-up work.

  9. Implementation of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  10. Performance and Scalability of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  11. A European social agenda: poverty benchmarking and social transfers

    OpenAIRE

    Atkinson, A. B.

    2000-01-01

    Development of the social dimension of Europe was advanced by the Lisbon Summit in March 2000, and this paper considers the future direction of social policy. The first step towards a social agenda could take the form of benchmarking, based on national competencies in this field, with Member States learning from best performance in the Union; this step would be parallel to the first phase of the Maastricht process towards macro-economic convergence. Initially, this benchmarking would focus on...

  12. A discussion on the design of graph database benchmarks

    OpenAIRE

    Domínguez Sal, David; Martínez Bazán, Norbert; Muntés Mulero, Víctor; Baleta Ferrer, Pedro; Larriba Pey, Josep

    2011-01-01

    Graph Database Management systems (GDBs) are gaining popularity. They are used to analyze huge graph datasets that appear naturally in many application areas to model interrelated data. The objective of this paper is to raise a new topic of discussion in the benchmarking community and to give practitioners a set of basic guidelines for GDB benchmarking. We strongly believe that GDBs will become an important player in the market field of data analysis, and with that, their performa...

  13. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    Science.gov (United States)

    Martin, Brian S

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture.

  14. Benchmarking Open-Source Tree Learners in R/RWeka

    OpenAIRE

    Schauerhuber, Michael; Zeileis, Achim; Meyer, David; Hornik, Kurt

    2007-01-01

    The two most popular classification tree algorithms in machine learning and statistics - C4.5 and CART - are compared in a benchmark experiment together with two other more recent constant-fit tree learners from the statistics literature (QUEST, conditional inference trees). The study assesses both misclassification error and model complexity on bootstrap replications of 18 different benchmark datasets. It is carried out in the R system for statistical computing, made possible by means of the...
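    The original study runs in R/RWeka; purely to illustrate the bootstrap benchmarking design it describes, the Python sketch below evaluates a CART-style tree on bootstrap replications of one dataset, recording out-of-bag misclassification error and a simple complexity measure. The dataset, learner settings, and replication count are stand-ins, not those of the study.

```python
# Sketch of a bootstrap benchmark experiment: refit a CART-style tree on bootstrap
# replications and score it on the out-of-bag cases, tracking error and complexity.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X, y = load_breast_cancer(return_X_y=True)
n, n_boot = len(y), 25
errors, sizes = [], []

for _ in range(n_boot):
    idx = rng.integers(0, n, n)                       # bootstrap replication of the data
    oob = np.setdiff1d(np.arange(n), idx)             # out-of-bag cases for evaluation
    tree = DecisionTreeClassifier(min_samples_leaf=5).fit(X[idx], y[idx])
    errors.append(1.0 - tree.score(X[oob], y[oob]))   # misclassification error
    sizes.append(tree.tree_.node_count)               # a simple model-complexity proxy

print(f"mean OOB error: {np.mean(errors):.3f} ± {np.std(errors):.3f}")
print(f"mean tree size: {np.mean(sizes):.1f} nodes")
```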

  16. Towards a Core Model for Higher Education IT Management Benchmarking

    OpenAIRE

    Markus Juult, Janne

    2013-01-01

    This study evaluates three European higher education IT benchmarking projects by applying a custom comparison framework that is based on benchmarking literature and IT manager experience. The participating projects are Bencheit (Finland), UCISA (The United Kingdom) and UNIVERSITIC (Spain). EDUCAUSE (The United States of America) is also included as a project outside our geographical focus area due to its size and prominence in North America. Each of these projects is examined to map the data ...

  17. Microbiota bacteriana aeróbia da conjuntiva de doadores de córnea; Aerobic bacterial microbiota of the conjunctiva of cornea donors

    Directory of Open Access Journals (Sweden)

    Paula Fontana Lorenzini

    2007-03-01

    Full Text Available PURPOSE: To determine the aerobic bacterial microbiota of the conjunctiva of cornea donors and its pattern of susceptibility to antibiotics; to verify the number of corneas used for transplantation and the mean preservation time in preserving solution containing gentamicin and streptomycin; and to outline the profile of cornea donors and recipients. METHODS: Clinical specimens were collected from the inferior conjunctival sac of both eyes of 40 cornea donors. The samples were inoculated on azide blood agar, chocolate agar and MacConkey agar, and antibiograms were performed by the Kirby-Bauer method. RESULTS: The frequency of positive conjunctival cultures from cornea donors was 72.5%; Gram-positive organisms accounted for 81.6% of the isolates and only 18.4% were Gram-negative. Vancomycin inhibited 100% of the Gram-positive isolates, whereas the sensitivity of the Gram-negative isolates was 53.8% to gentamicin and 30% to streptomycin. Males predominated among donors and recipients, the mean time between death and enucleation was 2 hours, and the mean preservation time in preserving solution with gentamicin and streptomycin was 7 days. Neoplasia and more than one associated cause were the most frequent causes of death. Keratoconus was the main indication for transplantation (51.7%). CONCLUSIONS: Coagulase-negative Staphylococcus was the most frequently isolated microorganism, showing variable sensitivity to the antimicrobial agents. The number of corneas used for transplantation was considerably lower than the total number retrieved. The profile of cornea donors and recipients was heterogeneous for most of the variables analyzed.

  18. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by

  19. Summary of results for the uranium benchmark problem of the ANS Ad Hoc Committee on Reactor Physics Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Parish, T.A. [Texas A and M Univ., College Station, TX (United States). Nuclear Engineering Dept.; Mosteller, R.D. [Los Alamos National Lab., NM (United States); Diamond, D.J. [Brookhaven National Lab., Upton, NY (United States); Gehin, J.C. [Oak Ridge National Lab., TN (United States)

    1998-12-31

    This paper presents a summary of the results obtained by all of the contributors to the Uranium Benchmark Problem of the ANS Ad hoc Committee on Reactor Physics Benchmarks. The benchmark problem was based on critical experiments which mocked up lattices typical of PWRs. Three separate cases constituted the benchmark problem. These included a uniform lattice, an assembly-type lattice with water holes and an assembly-type lattice with pyrex rods. Calculated results were obtained from eighteen separate organizations from all over the world. Some organizations submitted more than one set of results based on different calculational methods and cross section data. Many of the most widely used assembly physics and core analysis computer codes and neutron cross section data libraries were applied by the contributors.

  20. A benchmark model for plant wide control of waste water treatment plants; Benchmark-Modell fuer anlagenweite Klaeranlagenregelungen

    Energy Technology Data Exchange (ETDEWEB)

    Alex, Jens; Jumar, Ulrich [Institut fuer Automation und Kommunikation e.V. Magdeburg (Germany)

    2009-07-01

    For the control of wastewater treatment plants, a large number of proposals has been published. To allow an objective evaluation and comparison of different concepts, a benchmark simulation model can be utilised. The presented benchmark system is the result of a task group of the IWA (International Water Association) and consists of a library of model components and an evaluation procedure for control concepts. The application of this system is demonstrated with typical control concepts. (orig.)
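    As a hedged sketch of the kind of evaluation procedure such a benchmark system provides, the Python fragment below scores a control concept from simulated effluent time series using a weighted effluent-quality-style index and an aeration-energy term; the time series, weights, and units are invented for illustration and are not the IWA benchmark's definitions.

```python
# Hedged sketch of a benchmark-style evaluation: score a control concept from simulated
# time series via an effluent-quality-style index and an energy term. The series and
# weights below are invented placeholders, not the IWA benchmark's definitions.
import numpy as np

t = np.linspace(0.0, 7.0, 7 * 96)                        # one week, 15-minute steps (days)
flow = 18_000 + 4_000 * np.sin(2 * np.pi * t)            # effluent flow, m3/d
nh4 = 2.0 + 1.5 * np.sin(2 * np.pi * t + 1.0).clip(0)    # effluent ammonia, g N/m3
cod = 45 + 5 * np.cos(2 * np.pi * t)                     # effluent COD, g/m3
aeration_kw = 150 + 30 * np.sin(2 * np.pi * t)           # blower power, kW

dt_days = t[1] - t[0]
quality_index = np.sum(flow * (4.0 * nh4 + 1.0 * cod) * dt_days) / t[-1]   # g pollution-units/d
aeration_energy = np.sum(aeration_kw * 24.0 * dt_days) / t[-1]             # kWh/d
print(f"effluent quality index ≈ {quality_index / 1000:.1f} kg pollution-units/d")
print(f"aeration energy        ≈ {aeration_energy:.0f} kWh/d")
```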