WorldWideScience

Sample records for benchmark specification sheet

  1. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  2. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. It specifies the models and sources needed, and describes what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  3. Enthalpy benchmark experiments for numerical ice sheet models

    Directory of Open Access Journals (Sweden)

    T. Kleiner

    2014-06-01

    We present benchmark experiments to test the implementation of enthalpy and the corresponding boundary conditions in numerical ice sheet models. The first experiment tests in particular the functionality of the boundary condition scheme and the basal melt rate calculation during transient simulations. The second experiment addresses the steady-state enthalpy profile and the resulting position of the cold–temperate transition surface (CTS). For both experiments we assume ice flow in a parallel-sided slab decoupled from the thermal regime. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for the proposed numerical experiments. We compare simulation results achieved by three different ice flow models with these analytical solutions. The models agree well with the analytical solutions if the change in conductivity between cold and temperate ice is properly considered in the model. In particular, the enthalpy gradient at the cold side of the CTS vanishes in the limit of vanishing conductivity in the temperate ice part, as required by the physical jump conditions at the CTS.

  4. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator ITAS. ITAS is a nonlinear elasto-plastic analysis program based on the finite element method for simulating sheet metal forming. It adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step and uses the implicit method with a direct linear equation solver; this makes the simulator very robust, but it requires considerable computational time and memory capacity. In developing the parallel benchmark code, we designed the code with MPI programming to reduce the computational time. In numerical experiments on five parallel supercomputers at CCSE JAERI (SP2, SR2201, SX-4, T94 and VPP300), good performance was observed. The results will be made public on the WWW so that they may serve as a guideline for the research and development of parallel programs. (author)

  5. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of pressurized water reactors. MPACT includes the ORIGEN API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data; one indirect approach is a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline is provided for obtaining meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  6. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences; Ju, Lili [University of South Carolina; Gunzburger, Max [Florida State University; Price, Stephen [Los Alamos National Laboratory; Ringler, Todd [Los Alamos National Laboratory

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  7. Ensemble approach to predict specificity determinants: benchmarking and validation

    Directory of Open Access Journals (Sweden)

    Panchenko Anna R

    2009-07-01

    Background: It is extremely important and challenging to identify the sites that are responsible for functional specification or diversification in protein families. In this study, a rigorous comparative benchmarking protocol was employed to provide a reliable evaluation of methods which predict specificity determining sites. Subsequently, the three best performing methods were applied to identify new potential specificity determining sites through an ensemble approach and the common agreement of their prediction results. Results: It was shown that the analysis of structural characteristics of predicted specificity determining sites might provide the means to validate their prediction accuracy. For example, we found that for smaller distances it holds true that the more reliable the prediction method is, the closer predicted specificity determining sites are to each other and to the ligand. Conclusion: We observed certain similarities of structural features between predicted and actual subsites which might point to their functional relevance. We speculate that the majority of the identified potential specificity determining sites might be indirectly involved in specific interactions and could be ideal targets for mutagenesis experiments.

  8. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for evaluating the performance of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report explains the reasons behind that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification using a molecular dynamics code. The evaluation shows that the main factors limiting parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
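
    A minimal sketch of the kind of "level 2" analysis described above: splitting communication cost into a per-message start-up (latency) term and a bandwidth term, so the two limiting factors the report identifies can be separated. The α and β constants and the message-count scaling are illustrative assumptions, not measured SP2 values.

    ```python
    # Level-2-style breakdown of communication time into start-up and bandwidth
    # terms (the classic alpha-beta model). All constants are illustrative.

    def comm_time(n_messages: int, total_bytes: float,
                  alpha: float = 40e-6,   # assumed per-message start-up [s]
                  beta: float = 100e6     # assumed bandwidth [bytes/s]
                  ) -> tuple[float, float]:
        """Return (start-up seconds, transfer seconds) for a communication phase."""
        return n_messages * alpha, total_bytes / beta

    # Hypothetical molecular-dynamics exchange: the message count grows with both
    # the processor count and the particle count, mirroring the report's
    # observation that start-up time scales with both.
    n_particles = 10_000
    for n_procs in (8, 32, 128):
        startup, transfer = comm_time(n_messages=n_procs * n_particles // 100,
                                      total_bytes=8.0 * n_particles)
        print(f"{n_procs:4d} procs: start-up {startup:.4f} s, transfer {transfer:.6f} s")
    ```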

  9. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.

  10. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most usability problems. However, changes to an existing Web site should be evaluated by a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e., a direct comparison of different versions of a Web site and an orientation toward general usability standards. The questionnaire is also open to qualitative data. The methodology is illustrated with the digital library services of the ZBW.

  11. Benchmarking the penetration-resistance efficiency of multilayer graphene sheets due to spacing the graphene layers

    Science.gov (United States)

    Sadeghzadeh, S.

    2016-07-01

    In this paper, the penetration-resistance efficiency of single-layer and multilayer graphene sheets is investigated by means of a multiscale approach. The employed multiscale approach establishes a direct correlation between the finite element method and the molecular dynamics approach, and was validated by comparing its results with those of existing experimental works. Since a new class of graphene sheets can now be fabricated in which the graphene layers are spaced farther apart than the usual interlayer distance, this paper concentrates on the optimal spacing between graphene layers, with the goal of improving the impact properties of graphene sheets as important candidates for novel impact-resistant panels. For this purpose, the relative protection (protection with respect to weight) values of graphene sheets were obtained, and it was observed that the relative protection of a single-layer graphene sheet is about 3.64 times that of a 20-layer graphene sheet. This study also showed that a spaced multilayer graphene sheet, with an inter-layer distance 20 times the usual spacing between ordinary graphene layers, has an impact resistance about 20% higher than that of an ordinary 20-layer graphene sheet. These findings can be used in the design and fabrication of future-generation impact-resistant protective panels.

  12. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  13. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  14. Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  15. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    Science.gov (United States)

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have drawn increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams, and the large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA for deriving aquatic life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management.
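
    A minimal sketch of the derivation step described above, assuming the benchmark protecting 95% of local taxa is the 5th percentile (HC05) of the genus-level XC95 distribution, obtained by 2-point (linear) interpolation between the bracketing ranked values. The XC95 values below are made up for illustration; they are not the Hun-Tai data.

    ```python
    # 2-point (linear) interpolation of the 5th percentile (HC05) of genus-level
    # XC95 values; the benchmark is the conductivity below which 95% of local
    # genera are expected to persist. Values are hypothetical.

    def hc05(xc95: list[float], p: float = 0.05) -> float:
        xs = sorted(xc95)
        rank = p * (len(xs) - 1)          # fractional 0-based rank of percentile
        lo = int(rank)
        hi = min(lo + 1, len(xs) - 1)
        frac = rank - lo
        # interpolate between the two bracketing XC95 values
        return xs[lo] + frac * (xs[hi] - xs[lo])

    xc95_uS_cm = [180, 240, 260, 310, 420, 480, 550, 700, 900, 1200]
    print(f"HC05 benchmark ~ {hc05(xc95_uS_cm):.0f} uS/cm")
    ```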

  16. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    …emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web-accessible predictors NetMHCpan, Adaptive-Double-Threading (ADT), and KISS were evaluated. The performance of the pan-specific predictors was also compared to a well performing allele-specific MHC class I predictor, NetMHC, as well as a consensus approach integrating the predictions from the NetMHC and Net…

  17. Benchmarking Deep Networks for Predicting Residue-Specific Quality of Individual Protein Models in CASP11

    Science.gov (United States)

    Liu, Tong; Wang, Yiheng; Eickholt, Jesse; Wang, Zheng

    2016-01-01

    Quality assessment of a protein model aims to predict the absolute or relative quality of a protein model using computational methods before the native structure is available. Single-model methods need only one model as input and can predict the absolute residue-specific quality of an individual model. Here, we have developed four novel single-model methods (Wang_deep_1, Wang_deep_2, Wang_deep_3, and Wang_SVM) based on stacked denoising autoencoders (SdAs) and support vector machines (SVMs). We evaluated these four methods along with six other methods participating in CASP11 at the global and local levels using Pearson’s correlation coefficients and ROC analysis. For residue-specific quality assessment, our four methods achieved better performance than most of the six other CASP11 methods in distinguishing reliably modeled residues from unreliable ones, as measured by ROC analysis; our SdA-based method Wang_deep_1 achieved the highest accuracy, 0.77, compared to the SVM-based methods and our ensemble of an SVM and SdAs. However, we found that Wang_deep_2 and Wang_deep_3, both based on an ensemble of multiple SdAs and an SVM, performed slightly better than Wang_deep_1 in terms of ROC analysis, indicating that integrating an SVM with deep networks works well by certain measures. PMID:26763289
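
    A minimal sketch of the kind of ROC analysis used above for residue-specific quality assessment: a per-residue quality score is tested for how well it separates reliably modeled residues from unreliable ones. The synthetic error distribution, the 3.8 Å reliability cutoff, and the noisy predictor are all illustrative assumptions, not the CASP11 data or methods.

    ```python
    # ROC analysis of a synthetic per-residue quality predictor: residues are
    # labeled "reliable" when the (synthetic) distance error is under a cutoff,
    # and the predictor is the negated error plus noise. Everything is illustrative.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    dist_err = rng.gamma(shape=2.0, scale=2.0, size=500)     # synthetic errors [A]
    reliable = (dist_err < 3.8).astype(int)                  # hypothetical cutoff
    pred_score = -dist_err + rng.normal(0.0, 1.5, size=500)  # noisy quality score

    print(f"AUC = {roc_auc_score(reliable, pred_score):.2f}")
    ```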

  19. A procedure for benchmarking specific full-scale activated sludge plants used for carbon and nitrogen removal

    NARCIS (Netherlands)

    Abusam, A.; Keesman, K.J.; Spanjers, H.; Straten, van G.; Meinema, K.

    2002-01-01

    To enhance the development and acceptance of new control strategies, a standard simulation benchmarking methodology to evaluate the performance of wastewater treatment plants has recently been proposed. The proposed methodology is, however, for a typical plant and typical loading and environmental conditions...

  20. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  1. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional … in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency...

  2. Using smooth sheets to describe groundfish habitat in Alaskan waters, with specific application to two flatfishes

    Science.gov (United States)

    Zimmermann, Mark; Reid, Jane A.; Golden, Nadine

    2016-10-01

    In this analysis we demonstrate how preferred fish habitat can be predicted and mapped for juveniles of two Alaskan groundfish species - Pacific halibut (Hippoglossus stenolepis) and flathead sole (Hippoglossoides elassodon) - at five sites (Kiliuda Bay, Izhut Bay, Port Dick, Aialik Bay, and the Barren Islands) in the central Gulf of Alaska. The method involves using geographic information system (GIS) software to extract appropriate information from National Ocean Service (NOS) smooth sheets that are available from NGDC (the National Geophysical Data Center). These smooth sheets are highly detailed charts that include more soundings, substrates, shoreline and feature information than the more commonly-known navigational charts. By bringing the information from smooth sheets into a GIS, a variety of surfaces, such as depth, slope, rugosity and mean grain size were interpolated into raster surfaces. Other measurements such as site openness, shoreline length, proportion of bay that is near shore, areas of rocky reefs and kelp beds, water volumes, surface areas and vertical cross-sections were also made in order to quantify differences between the study sites. Proper GIS processing also allows linking the smooth sheets to other data sets, such as orthographic satellite photographs, topographic maps and precipitation estimates from which watersheds and runoff can be derived. This same methodology can be applied to larger areas, taking advantage of these free data sets to describe predicted groundfish essential fish habitat (EFH) in Alaskan waters.

  3. Simulation of nonlinear benchmarks and sheet metal forming processes using linear and quadratic solid–shell elements combined with advanced anisotropic behavior models

    Directory of Open Access Journals (Sweden)

    Wang Peng

    2016-01-01

    A family of prismatic and hexahedral solid–shell (SHB) elements with their linear and quadratic versions is presented in this paper to model thin 3D structures. Based on reduced integration and special treatments to eliminate locking effects and to control spurious zero-energy modes, the SHB solid–shell elements are capable of modeling most thin 3D structural problems with only a single element layer, while accurately describing the various through-thickness phenomena. In this paper, the SHB elements are combined with fully 3D behavior models, including orthotropic elastic behavior for composite materials and anisotropic plastic behavior for metallic materials, which allows the strain/stress state in the thickness direction to be described, in contrast to traditional shell elements. All SHB elements are implemented into ABAQUS using both the standard/quasi-static and explicit/dynamic solvers. Several benchmark tests have been conducted, first to assess the performance of the SHB elements in quasi-static and dynamic analyses. Then, deep drawing of a hemispherical cup is performed to demonstrate the capabilities of the SHB elements in handling various types of nonlinearities (large displacements and rotations, anisotropic plasticity, and contact). Compared to classical ABAQUS solid and shell elements, the results given by the SHB elements show good agreement with the reference solutions.

  4. Neutron Reference Benchmark Field Specifications: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Environment (ACRR-PLG-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.

  5. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    Science.gov (United States)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions of this benchmark, due to the nonlinearity of the strain paths involved. The benchmark tool is designed to enable a two-stage draw/reverse-draw continuous forming process. Three typical sheet materials, AA5182-O aluminum and DP600 and TRIP780 steels, are selected for this benchmark study.

  6. Continuation of the VVER burnup credit benchmark. Evaluation of CB1 results, overview of CB2 results to date, and specification of CB3

    International Nuclear Information System (INIS)

    A calculational benchmark focused on VVER-440 burnup credit, similar to that of the OECD/NEA/NSC Burnup Credit Benchmark Working Group, was proposed at the 1996 AER Symposium. Its first part, CB1, was specified there, whereas the second part, CB2, was specified a year later at the 1997 AER Symposium in Zittau. A final statistical evaluation of the CB1 results is presented, and the CB2 results obtained to date are summarized. Further, it is proposed that the effect of the axial burnup profile of VVER-440 spent fuel on criticality (the 'end effect') be studied in the CB3 benchmark problem of an infinite array of VVER-440 spent fuel rods. (author)

  7. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from January 1st to December 31st, 2014. Nine scanners were included: four were used for inpatients, the other five for outpatients. All scanners used tube current modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, the Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
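
    A minimal sketch of the pooled-versus-protocol comparison described above, assuming dose records in a table with hypothetical columns protocol and ctdi_vol_mGy; the few rows shown are illustrative, not the study's data.

    ```python
    # Compare pooled CTDIvol statistics with per-protocol statistics. With real
    # data the pooled spread is wide (protocols differ in technique factors),
    # while per-protocol spreads are much narrower. Rows here are illustrative.
    import pandas as pd

    df = pd.DataFrame({
        "protocol": ["Routine Brain", "Routine Brain", "Sinus",
                     "Sinus", "Facial/Mandible", "CTA Brain"],
        "ctdi_vol_mGy": [51.2, 49.8, 24.1, 23.5, 22.4, 44.0],
    })

    stats = ["mean", "std", "min", "max"]
    print(df["ctdi_vol_mGy"].agg(stats))                      # pooled benchmark
    print(df.groupby("protocol")["ctdi_vol_mGy"].agg(stats))  # per-protocol
    ```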

  9. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  10. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  11. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  12. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    …in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, 'sustainable transport' evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark 'sustainable transport policies' against one another would be a highly complex task, which...

  13. Site-Specific Dynamics of β-Sheet Peptides with (D)Pro-Gly Turns Probed by Laser-Excited Temperature-Jump Infrared Spectroscopy.

    Science.gov (United States)

    Popp, Alexander; Scheerer, David; Chi, Heng; Keiderling, Timothy A; Hauser, Karin

    2016-05-01

    Turn residues and side-chain interactions play an important role in the folding of β-sheets. We investigated the conformational dynamics of a three-stranded β-sheet peptide ((D)P(D)P) and a two-stranded β-hairpin (WVYY-(D)P) by time-resolved temperature-jump (T-jump) infrared spectroscopy. Both peptide sequences contain (D)Pro-Gly residues that favor a tight β-turn. The three-stranded β-sheet (Ac-VFITS(D)PGKTYTEV(D)PGOKILQ-NH2) is stabilized by the turn sequences, whereas the β-hairpin (SWTVE(D)PGKYTYK-NH2) folding is assisted by both the turn sequence and hydrophobic cross-strand interactions. Relaxation times after the T-jump were monitored as a function of temperature and occur on a sub-microsecond time scale, (D)P(D)P being faster than WVYY-(D)P. The Xxx-(D)Pro tertiary amide provides a detectable IR band, allowing us to probe the dynamics site-specifically. The relative importance of the turn versus intrastrand stability in β-sheet formation is discussed. PMID:26789931

  15. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance: the performance impact of optimization was assessed in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
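
    A minimal sketch of the merging step described above: once a machine is characterized by per-abstract-operation times and a program by abstract-operation counts, an execution-time estimate is their dot product. Operation names, times, and counts are illustrative assumptions, not the characterizer's actual output.

    ```python
    # Estimate run time by combining a machine characterization (time per
    # abstract operation) with a program characterization (operation counts).
    # All names and numbers are illustrative.

    machine_ns_per_op = {"fadd": 2.0, "fmul": 3.0, "load": 4.5, "store": 5.0}
    program_op_counts = {"fadd": 6.0e8, "fmul": 5.5e8, "load": 9.0e8, "store": 3.0e8}

    est_s = sum(program_op_counts[op] * machine_ns_per_op[op]
                for op in program_op_counts) / 1e9
    print(f"estimated execution time ~ {est_s:.2f} s")
    ```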

  16. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  17. Benchmark 2 - Springback of a draw / re-draw panel: Part A: Benchmark description

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing; Chen, Zhong

    2013-12-01

    Numerical methods have been effectively implemented to predict the springback behavior of complex stampings, reducing die tryout through compensation and producing dimensionally accurate products after forming and trimming. However, accurate prediction of the sprung shape of a panel formed with an initial draw followed by a restrike forming step remains a difficult challenge. The objective of this benchmark was to predict the sprung shape after stamping, restriking and trimming a sheet metal panel. A simple rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of the thickness strain and the position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. These measurements were used to evaluate numerical analysis predictions submitted by benchmark participants. Additional panels were drawn to "failure" during both the first draw and the re-draw, in order to set the parameters for the springback trials and to demonstrate that sheet metal going through a restrike operation can exceed the conventional forming limits of a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and aluminum alloy 5182-O.

  18. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
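
    A minimal sketch of the DEA benchmarking mentioned above: the input-oriented CCR efficiency of each decision-making unit (DMU) is the optimum of a small linear program. The single-input, single-output data are illustrative, not a regulator's dataset.

    ```python
    # Input-oriented CCR (constant returns to scale) DEA efficiency via LP:
    # minimize theta subject to X @ lam <= theta * x_o and Y @ lam >= y_o.
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
        """Efficiency of DMU o; X is (m inputs, n DMUs), Y is (s outputs, n DMUs)."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam_1..lam_n]
        A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -y_o
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                      bounds=[(None, None)] + [(0.0, None)] * n)
        return float(res.fun)

    # one input (e.g. operating cost), one output (e.g. energy delivered), 4 DMUs
    X = np.array([[10.0, 8.0, 12.0, 9.0]])
    Y = np.array([[100.0, 90.0, 110.0, 120.0]])
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])  # DMU 3 scores 1.0
    ```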

  19. Standard Specification for Copper-Aluminum-Silicon-Cobalt Alloy, Copper-Nickel-Silicon-Magnesium Alloy, Copper-Nickel-Silicon Alloy, Copper-Nickel-Aluminum-Magnesium Alloy, and Copper-Nickel-Tin Alloy Sheet and Strip

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

  20. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rate systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking … of the 'inside' costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that can easily dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention … as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...

  1. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
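
    A minimal sketch of computing that figure of merit, assuming hourly plane-of-array irradiance and fixed optical and cell efficiencies; the synthetic irradiance profile and efficiency numbers are illustrative assumptions, not output of the MATLAB/ASAP tool described above.

    ```python
    # Energy yield over a year: sum irradiance x aperture x efficiencies over
    # hourly samples. The day/night sinusoid and efficiencies are illustrative.
    import numpy as np

    hours = np.arange(24 * 365)
    # crude synthetic irradiance [W/m^2]: daytime sinusoid, zero at night
    irradiance = 800.0 * np.clip(np.sin((hours % 24 - 6.0) / 12.0 * np.pi), 0.0, None)

    aperture_m2 = 1.5    # concentrator aperture area (assumed)
    optical_eff = 0.82   # concentrator optics (assumed)
    cell_eff = 0.38      # multi-junction cell at concentration (assumed)

    # each sample spans one hour, so W summed over samples gives Wh directly
    yield_kwh = irradiance.sum() * aperture_m2 * optical_eff * cell_eff / 1e3
    print(f"annual energy yield ~ {yield_kwh:.0f} kWh")
    ```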

  2. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  3. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  4. Neutron Reference Benchmark Field Specification: ACRR 44 Inch Lead-Boron (LB44) Bucket Environment (ACRR-LB44-CC-32-CL).

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Richard Manuel [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Parma, Edward J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, Patrick J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Vehar, David W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.

  5. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  6. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  7. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  8. Specific collapse followed by slow hydrogen-bond formation of β-sheet in the folding of single-chain monellin

    Science.gov (United States)

    Kimura, Tetsunari; Uzawa, Takanori; Ishimori, Koichiro; Morishima, Isao; Takahashi, Satoshi; Konno, Takashi; Akiyama, Shuji; Fujisawa, Tetsuro

    2005-01-01

    Characterization of the conformational landscapes for proteins with different secondary structures is important in elucidating the mechanism of protein folding. The folding trajectory of single-chain monellin composed of a five-stranded β-sheet and a helix was investigated by using a pH-jump from the alkaline unfolded to native state. The kinetic changes in the secondary structures and in the overall size and shape were measured by circular dichroism spectroscopy and small-angle x-ray scattering, respectively. The formation of the tertiary structure was monitored by intrinsic and extrinsic fluorescence. A significant collapse was observed within 300 μs after the pH-jump, leading to the intermediate with a small amount of secondary and tertiary structures but with an overall oblate shape. Subsequently, the stepwise formation of secondary and tertiary structures was detected. The current observation was consistent with the theoretical prediction that a more significant collapse precedes the formation of secondary structures in the folding of β-sheet proteins than that of helical proteins [Shea, J. E., Onuchic, J. N. & Brooks, C. L., III (2002) Proc. Natl. Acad. Sci. USA 99, 16064–16068]. Furthermore, it was implied that the initial collapse was promoted by the formation of some specific structural elements, such as tight turns, to form the oblate shape. PMID:15710881

  10. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  11. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Manual for the secondary education (VO) benchmark, 25 November 2008, by IPSE Studies (J.L.T. Blank). A manual for reading the i...

  12. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult...

  13. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
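
    The report's emphasis on code verification benchmarks based on manufactured solutions lends itself to a short illustration. The sketch below is a minimal, generic example of the method of manufactured solutions for a 1-D heat equation; it is not taken from the report, and the chosen solution u(x, t) is arbitrary.

      # Method of manufactured solutions (MMS): pick an arbitrary smooth u(x, t),
      # substitute it into the PDE u_t - a*u_xx = f, and solve for the source
      # term f that makes u exact. A correct solver fed this f must reproduce u
      # at its formal order of accuracy on refined grids.
      import sympy as sp

      x, t, a = sp.symbols("x t a", real=True, positive=True)
      u = sp.sin(sp.pi * x) * sp.exp(-t)        # manufactured solution (arbitrary)
      f = sp.diff(u, t) - a * sp.diff(u, x, 2)  # source term required by the PDE
      print("f(x, t) =", sp.simplify(f))
      # A verification benchmark then checks that ||u_h - u|| shrinks at the
      # expected convergence rate as the grid is refined.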

  14. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  15. Experimental characterization of the rectification process in ammonia-water absorption systems with a large-specific-area corrugated sheet structured packing

    Energy Technology Data Exchange (ETDEWEB)

    Sieres, Jaime; Fernandez-Seara, Jose; Uhia, Francisco J. [Area de Maquinas y Motores Termicos, E.T.S. de Ingenieros Industriales, Campus Lagoas-Marcosende No 9, 36310 Vigo, Pontevedra (Spain)

    2009-09-15

    In this paper, the mass transfer performance of a large-specific-area corrugated sheet structured packing for ammonia-water absorption refrigeration systems (AARS) is reported. An experimental facility was used to test the performance of the packing. Experimental results of the temperature, ammonia concentration and mass flow rate of the rectified vapour are presented and discussed for different operating conditions including reflux ratio values from 0.2 to 1. The volumetric vapour phase mass transfer coefficient is calculated from the measured data and compared with different correlations found in the literature. A new correlation is proposed which was fitted from the experimental data. Finally, a comparison is made between the actual packing height used in the experimental setup and the height required to obtain the same ammonia rectification in AARS with different packings previously tested by the authors. (author)

  16. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  17. Creating sub-consortia as a means of counteracting changes to specification sheets: the case of Parmigiano Reggiano

    OpenAIRE

    Sidali, Katia Laura; Sokoli, Olta; Scaramuzzi, Silvia; Dorr, Andrea Christina

    2014-01-01

    Prior studies have shown how increasing heterogeneity of commons negatively affects product regulation. This work uses a case-study approach and analyses the Parmigiano Reggiano Consortium in Italy. Specifically, we applied a grounded-theory approach and interviewed stakeholders at different levels (n=24) in the time frame 2012-2013. While our study confirms prior findings on newcomers' efforts to loosen the strictness of the code of practice, it also shows how Parmigiano Reggiano members organize ...

  18. The General Concept of Benchmarking and Its Application in Higher Education in Europe

    Science.gov (United States)

    Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna

    2009-01-01

    The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…

  19. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  20. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination-management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field for benchmarking methods, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable reference partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. Hence, it makes it possible to find the strengths and weaknesses of the selected strategies and, at the same time, facilitates the discovery of suitable benchmarking partners.
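
    As a rough illustration of the partner-selection step described in this record, the sketch below ranks candidate strategies by a weighted score over evaluation criteria. The criteria labels, weights, and scores are invented stand-ins for the paper's SMART-based expert evaluation.

      # Rank candidate benchmarking partners by weighted expert scores.
      # All names and numbers are illustrative, not the paper's data.
      criteria_weights = {"specificity": 0.3, "measurability": 0.2,
                          "achievability": 0.2, "relevance": 0.2, "timing": 0.1}

      expert_scores = {  # average expert score per criterion, scale 1-5
          "Region A strategy": {"specificity": 4, "measurability": 3,
                                "achievability": 5, "relevance": 4, "timing": 3},
          "Region B strategy": {"specificity": 2, "measurability": 4,
                                "achievability": 3, "relevance": 5, "timing": 4},
      }

      def weighted_score(scores, weights):
          """Weighted sum of criterion scores for one candidate partner."""
          return sum(weights[c] * s for c, s in scores.items())

      for name, scores in sorted(expert_scores.items(),
                                 key=lambda kv: weighted_score(kv[1], criteria_weights),
                                 reverse=True):
          print(f"{name}: {weighted_score(scores, criteria_weights):.2f}")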

  1. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  2. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  3. Global ice sheet modeling

    International Nuclear Information System (INIS)

    The University of Maine conducted this study for Pacific Northwest Laboratory (PNL) as part of a global climate modeling task for site characterization of the potential nuclear waste repository site at Yucca Mountain, NV. The purpose of the study was to develop a global ice sheet dynamics model that will forecast the three-dimensional configuration of global ice sheets for specific climate change scenarios. The objective of the third (final) year of the work was to produce ice sheet data for glaciation scenarios covering the next 100,000 years. This was accomplished using both the map-plane and flowband solutions of our time-dependent, finite-element gridpoint model. The theory and equations used to develop the ice sheet models are presented. Three future scenarios were simulated by the model and results are discussed

  4. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  5. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  6. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  7. Benchmarking for plant maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))

    2010-05-15

    The product of the project, e-Famemain, is a new kind of benchmarking tool, based on many years of research within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically. When data are entered into the system, it automatically carries out multivariate statistical analysis and many other statistical operations. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. These causalities should be taken into account when utilising benchmarking or forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: a data input section, a positioning section, a locating-differences section, a best-practices-and-planning section, and statistical tables. (orig.)

  8. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  9. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use to evaluate the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  10. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  11. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  12. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  13. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  14. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  15. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  16. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means revealed performance (how well the firm performs in its actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can also be a cause even if they are not directly observed by the researcher. Managers' critical need to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, consequently determines which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  17. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  18. Polar ice-sheet contributions to sea level during past warm periods

    Science.gov (United States)

    Dutton, A.

    2015-12-01

    Recent sea-level rise has been dominated by thermal expansion and glacier loss, but the contribution from mass loss from the Greenland and Antarctic ice sheets is expected to exceed other contributions under future sustained warming. Due to limitations of existing ice sheet models and the lack of relevant analogues in the historical record, projecting the timing and magnitude of polar ice sheet mass loss in the future remains challenging. One approach to improving our understanding of how polar ice-sheet retreat will unfold is to integrate observations and models of sea level, ice sheets, and climate during past intervals of warmth when the polar ice sheets contributed to higher sea levels. A recent review evaluated the evidence of polar ice sheet mass loss during several warm periods, including interglacials during the mid-Pliocene warm period, Marine Isotope Stage (MIS) 11, 5e (Last Interglacial), and 1 (Holocene). Sea-level benchmarks of ice-sheet retreat during the first three of these periods, when global mean climate was ~1 to 3 deg. C warmer than preindustrial, are useful for understanding the long-term potential for future sea-level rise. Despite existing uncertainties in these reconstructions, it is clear that our present climate is warming to a level associated with significant polar ice-sheet loss in the past, resulting in a conservative estimate for a global mean sea-level rise of 6 meters above present (or more). This presentation will focus on identifying the approaches that have yielded significant advances in terms of past sea level and ice sheet reconstruction as well as outstanding challenges. A key element of recent advances in sea-level reconstructions is the ability to recognize and quantify the imprint of geophysical processes, such as glacial isostatic adjustment (GIA) and dynamic topography, that lead to significant spatial variability in sea level reconstructions. Identifying specific ice-sheet sources that contributed to higher sea levels

  19. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking as a starting point four different applications of benchmarking. The regulation of utility companies is treated, after which...

  20. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter show considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  1. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  3. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP2Bench, a publicly available, language-specific SPARQL performance benchmark. SP2Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP2Bench to existing engines and discuss ...
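
    To make concrete what a SPARQL performance benchmark measures, the sketch below times a single aggregate query over a small synthetic RDF graph with the rdflib Python library. This is not the SP2Bench harness or data generator; the toy schema under http://example.org/ is invented.

      # Time one SPARQL query over a synthetic graph (illustrative only).
      import time
      from rdflib import Graph, Literal, URIRef, RDF

      g = Graph()
      base = "http://example.org/"
      for i in range(1000):  # tiny stand-in for a generated DBLP-like document
          article = URIRef(f"{base}article/{i}")
          g.add((article, RDF.type, URIRef(base + "Article")))
          g.add((article, URIRef(base + "year"), Literal(1990 + i % 20)))

      query = """
      PREFIX ex: <http://example.org/>
      SELECT (COUNT(?a) AS ?n) WHERE {
        ?a a ex:Article ; ex:year ?y .
        FILTER (?y >= 2000)
      }
      """

      start = time.perf_counter()
      rows = list(g.query(query))
      elapsed = time.perf_counter() - start
      print(f"matches: {rows[0][0]}, elapsed: {elapsed * 1000:.1f} ms")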

  4. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....

  5. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  6. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  7. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    Energy Technology Data Exchange (ETDEWEB)

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    The Benchmarking and Energy Savings Tool (BEST) Cement is a process-based tool based on commercially available efficiency technologies used anywhere in the world applicable to the cement industry. This version has been designed for use in China. No actual cement facility with every single efficiency measure included in the benchmark will likely exist; however, the benchmark sets a reasonable standard by which to compare for plants striving to be the best. The energy consumption of the benchmark facility differs due to differences in processing at a given cement facility. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific for his/her cement facility. Figure 1 shows the boundaries included in a plant modeled by BEST Cement. In order to model the benchmark, i.e., the most energy efficient cement facility, so that it represents a facility similar to the user's cement facility, the user is first required to input production variables in the input sheet (see Section 6 for more information on how to input variables). These variables allow the tool to estimate a benchmark facility that is similar to the user's cement plant, giving a better picture of the potential for that particular facility, rather than benchmarking against a generic one. The input variables required include the following: (1) the amount of raw materials used in tonnes per year (limestone, gypsum, clay minerals, iron ore, blast furnace slag, fly ash, slag from other industries, natural pozzolans, limestone powder (used post-clinker stage), municipal wastes and others); the amount of raw materials that are preblended (prehomogenized and proportioned) and crushed (in tonnes per year); (2) the amount of additives that are dried and ground (in tonnes per year); (3) the production of clinker (in tonnes per year) from each kiln by kiln type; (4) the amount of raw materials, coal and clinker that is ground by mill type (in tonnes per
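
    The production inputs enumerated above map naturally onto a simple data structure. The following is a hypothetical sketch of such a grouping in Python; the field names, grouping, and example figures are assumptions for illustration, not BEST Cement's actual input sheet.

      # Hypothetical container for the per-facility inputs listed above.
      from dataclasses import dataclass, field

      @dataclass
      class CementPlantInputs:
          # (1) raw materials used, tonnes per year, by material type
          raw_materials_t: dict = field(default_factory=dict)
          preblended_t: float = 0.0            # prehomogenized and proportioned
          crushed_t: float = 0.0
          # (2) additives dried and ground, tonnes per year
          additives_dried_ground_t: float = 0.0
          # (3) clinker production, tonnes per year, keyed by kiln type
          clinker_by_kiln_t: dict = field(default_factory=dict)
          # (4) raw materials, coal and clinker ground, keyed by mill type
          ground_by_mill_t: dict = field(default_factory=dict)

      plant = CementPlantInputs(
          raw_materials_t={"limestone": 1_200_000, "gypsum": 60_000},
          clinker_by_kiln_t={"preheater/precalciner": 900_000},
      )
      print(plant.clinker_by_kiln_t)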

  8. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
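
    As a rough sketch of the goal-oriented, black-box evaluation described above, the function below exercises a candidate solution only through its input/output behavior and reports several axes at once. The composite score and all names are invented; SEAK's actual evaluation procedure may differ.

      # Black-box, multi-axis evaluation of a candidate solution (illustrative).
      import time

      def evaluate(solution, inputs, reference_outputs, power_watts, size_kg):
          start = time.perf_counter()
          outputs = [solution(x) for x in inputs]      # black box: I/O only
          runtime = time.perf_counter() - start
          accuracy = sum(o == r for o, r in
                         zip(outputs, reference_outputs)) / len(inputs)
          # Invented composite; a real evaluation would report the individual
          # axes so end users can weigh the tradeoffs themselves.
          return {"runtime_s": runtime, "accuracy": accuracy,
                  "power_w": power_watts, "size_kg": size_kg,
                  "score": accuracy / (runtime * power_watts * size_kg)}

      print(evaluate(lambda x: x * 2, [1, 2, 3], [2, 4, 6],
                     power_watts=15.0, size_kg=0.4))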

  9. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  10. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  11. Benchmarking and testing the "Sea Level Equation

    Science.gov (United States)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can be often attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and
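
    For readers unfamiliar with the SLE mentioned in this record, one commonly quoted form (after Farrell & Clark, 1976) is sketched below in LaTeX; the notation and the treatment of moving shorelines vary across the benchmarked codes, so this is a schematic statement rather than any one code's formulation.

      % Sea Level Equation, schematic form (after Farrell & Clark, 1976):
      S(\omega, t) = \frac{\rho_i}{\gamma}\, G_s \otimes_i I
                   + \frac{\rho_w}{\gamma}\, G_s \otimes_o S
                   + S^{E},
      \qquad
      S^{E} = -\frac{\rho_i}{\rho_w}\,\frac{\Delta V_i}{A_o}

    Here S is the sea-level change, I the ice-thickness change, G_s the sea-level Green's function built from load-deformation coefficients, gamma the reference surface gravity, and the convolutions run over the ice- and ocean-covered regions; S^E is the spatially uniform term enforcing mass conservation over the ocean area A_o. Because S appears on both sides, the equation is implicit and is normally solved iteratively.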

  12. Specifications

    International Nuclear Information System (INIS)

    As part of the Danish RERTR Program, three fuel elements with LEU U3O8-Al fuel and three fuel elements with LEU U3Si2-Al fuel were manufactured by NUKEM for irradiation testing in the DR-3 reactor at the Risoe National Laboratory in Denmark. The specifications for the elements with U3O8-Al fuel are presented here as an illustration only. Specifications for the elements with U3Si2-Al fuel were very similar. In this example, materials, material numbers, documents numbers, and drawing numbers specific to a single fabricator have been deleted. (author)

  13. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
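
    A minimal sketch of the nonparametric benchmark idea follows: each simulated patient receives a latent uniform tolerance that fixes a complete toxicity profile across all doses, and with complete profiles the best possible design simply picks the dose whose empirical rate is nearest the target. The dose-toxicity curve, target, and sample sizes below are invented; the benchmark of O'Quigley et al. (2002) is defined more generally.

      # Nonparametric benchmark for dose finding (illustrative parameters).
      import random

      p_true = [0.05, 0.12, 0.25, 0.40, 0.55]  # assumed true toxicity curve
      target = 0.25                            # target toxicity rate
      n_patients, n_sims = 30, 2000

      def benchmark_selection():
          # latent tolerance u_i yields the COMPLETE profile: toxic at dose d
          # iff u_i < p_true[d]
          u = [random.random() for _ in range(n_patients)]
          rates = [sum(ui < p for ui in u) / n_patients for p in p_true]
          return min(range(len(p_true)), key=lambda d: abs(rates[d] - target))

      true_mtd = min(range(len(p_true)), key=lambda d: abs(p_true[d] - target))
      correct = sum(benchmark_selection() == true_mtd for _ in range(n_sims))
      print(f"benchmark accuracy (upper bound on correct selection): "
            f"{correct / n_sims:.2f}")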

  14. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  15. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in Homogenization Methods of Climate Series: An Integrated Approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities, modeled as a Poisson process with normally distributed breakpoint sizes, were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
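
    The benchmark's simulated inhomogeneities can be illustrated in a few lines: breakpoints arrive as a Poisson process in time, and each break adds a normally distributed offset that persists afterwards. The rates and magnitudes below are invented, and the real benchmark additionally inserts outliers, missing-data periods, local trends, and a network-wide nonlinear trend.

      # Insert Poisson-process breaks with Gaussian sizes into a clean series.
      import numpy as np

      rng = np.random.default_rng(0)
      n_months = 100 * 12                          # a century of monthly data
      clean = rng.normal(0.0, 0.5, n_months)       # stand-in homogeneous anomalies

      break_rate = 5 / n_months                    # ~5 breaks per century on average
      is_break = rng.random(n_months) < break_rate
      offsets = np.where(is_break, rng.normal(0.0, 0.8, n_months), 0.0)

      inhomogeneous = clean + np.cumsum(offsets)   # each break persists afterwards
      print(f"{is_break.sum()} breaks inserted")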

  16. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  17. Specific collapse followed by slow hydrogen-bond formation of β-sheet in the folding of single-chain monellin

    OpenAIRE

    Kimura, Tetsunari; Uzawa, Takanori; Ishimori, Koichiro; Morishima, Isao; Takahashi, Satoshi; Konno, Takashi; Akiyama, Shuji; Fujisawa, Tetsuro

    2005-01-01

    Characterization of the conformational landscapes for proteins with different secondary structures is important in elucidating the mechanism of protein folding. The folding trajectory of single-chain monellin composed of a five-stranded β-sheet and a helix was investigated by using a pH-jump from the alkaline unfolded to native state. The kinetic changes in the secondary structures and in the overall size and shape were measured by circular dichroism spectroscopy and small-angle x-ray scatter...

  18. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  19. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  20. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  1. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  2. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.
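
    Schematically, the benchmark criterion described in this record can be written as an average-fidelity bound; the LaTeX below is a generic restatement (with p(psi) the assumed prior over input states), and the value 1/2 is the one quoted in the abstract for squeezed states with totally random phase and squeezing degree.

      % Average fidelity of a measure-and-prepare (classical) strategy:
      \bar{F} = \int \mathrm{d}\psi \; p(\psi)\,
                \langle \psi | \, \rho_{\mathrm{out}}(\psi) \, | \psi \rangle,
      \qquad
      \bar{F}_{\mathrm{classical}} \le \tfrac{1}{2}

    An experiment whose measured average fidelity exceeds this bound demonstrates a genuinely quantum channel.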

  3. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  4. Benchmarking of collimation tracking using RHIC beam loss data.

    Energy Technology Data Exchange (ETDEWEB)

    Robert-Demolaize, G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  5. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, on hydrogen, and on petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric driving, looking at the effects of the fuels on three sustainability criteria, of which greenhouse gas emissions weigh the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  6. An accurate benchmark description of the interactions between carbon dioxide and polyheterocyclic aromatic compounds containing nitrogen.

    Science.gov (United States)

    Li, Sicheng; Smith, Daniel G A; Patkowski, Konrad

    2015-07-01

    We assessed the performance of a large variety of modern density functional theory approaches for the adsorption of carbon dioxide on molecular models of pyridinic N-doped graphene. Specifically, we selected eight polyheterocyclic aromatic compounds ranging from pyridine and pyrazine to 1,6-diazacoronene and investigated their complexes with CO2 for a large range of intermolecular distances and including both in-plane and stacked orientations. The benchmark interaction energies were computed at the complete-basis-set limit MP2 level plus a CCSD(T) coupled-cluster correction in a moderate but carefully selected basis set. Using a set of 96 benchmark CCSD(T)-level interaction energies as a reference, we investigated the accuracy of DFT-based approaches as a function of the density functional, the dispersion correction, the basis set, and the counterpoise correction or lack thereof. While virtually all DFT variants exhibit some deterioration of accuracy for distances slightly shorter than the van der Waals minima, we were able to identify several schemes such as B2PLYP-D3 and M05-2X-D3 whose average errors on the entire benchmark data set are in the 5-10% range. The top DFT performers were subsequently used to investigate the energy profile for a carbon dioxide transition through model N-doped graphene pores. All investigated methods confirmed that the largest, N4H4 pore allows for a barrierless CO2 transition to the other side of a graphene sheet. PMID:26055458

  7. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services-related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  8. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
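
    The core operation this record describes, placing one building's energy use within a peer distribution, can be sketched in a few lines of Python. The peer values, function names, and kBtu/ft2 units below are illustrative assumptions, not Cal-Arch data or code:

        # Rank one building's energy use intensity (EUI) against a peer group.
        def eui(annual_kbtu: float, floor_area_ft2: float) -> float:
            """Energy use intensity in kBtu per square foot per year."""
            return annual_kbtu / floor_area_ft2

        def percentile_rank(value: float, peers: list) -> float:
            """Percentage of peer buildings with lower (better) EUI."""
            below = sum(1 for p in peers if p < value)
            return 100.0 * below / len(peers)

        peer_euis = [38.0, 45.5, 52.1, 60.3, 71.8, 85.0, 98.4]   # hypothetical offices
        my_eui = eui(annual_kbtu=1_250_000, floor_area_ft2=20_000)  # 62.5 kBtu/ft2/yr
        print(f"EUI {my_eui:.1f} exceeds {percentile_rank(my_eui, peer_euis):.0f}% of peers")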

  9. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  10. GASN sheets

    International Nuclear Information System (INIS)

    This document gathers around 50 detailed sheets which describe and present various aspects, data and information related to the nuclear sector or, more generally, to energy. The following items are addressed: natural and artificial radioactive environment, evolution of energy needs in the world, radioactive wastes, which energy for France tomorrow, the consequences in France of the Chernobyl accident, ammunitions containing depleted uranium, processing and recycling of used nuclear fuel, transport of radioactive materials, seismic risk for the basic nuclear installations, radon, the precautionary principle, the issue of low doses, the EPR, the greenhouse effect, the Oklo nuclear reactors, ITER on the way towards fusion reactors, simulation and nuclear deterrence, crisis management in the nuclear field, does nuclear research put a brake on the development of renewable energies by monopolizing funding, nuclear safety and security, the plutonium, generation IV reactors, comparison of different modes of electricity production, medical exposure to ionizing radiations, the control of nuclear activities, food preservation by ionization, photovoltaic solar collectors, the Polonium 210, the dismantling of nuclear installations, wind energy, desalination and nuclear reactors, from non-communication to transparency about nuclear safety, the Jules Horowitz reactor, CO2 capture and storage, hydrogen, solar energy, the radium, the subcontractors of maintenance of the nuclear fleet, biomass, internal radio-contamination, epidemiological studies, submarine nuclear propulsion, sea energy, the Three Mile Island accident, the Chernobyl accident, the Fukushima accident, the nuclear after Fukushima

  11. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries of the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for auditing and energy conservation plans. An important feature and also a key trigger for companies is the possibility for SMEs to anonymously benchmark their energy situation against others of the same sector. SMEs can participate in a unique web-based benchmarking system to interactively benchmark in a way which fully guarantees confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs of various sectors is currently being developed.

  12. Aircraft Sheet Metal Practices, Blueprint Reading, Sheet Metal Forming and Heat Treating; Sheet Metal Work 2: 9855.04.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    This course is designed to familiarize vocational students with construction in sheet metal layout. The document outlines goals, specific block objectives, layout practices, blueprint reading, sheet metal forming (by hand and by machine), and heat treatment of metals, and includes posttest samples. Layout techniques and air foil developing are…

  13. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  14. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of completely reconstructing the experiment and by large amounts of input data (dependent and independent) of very different reliability. They must also take into account the different sensitivities of the measured and corresponding calculated characteristics to identical changes in geometry parameters, temperature, and the isotopic composition of individual materials. Calculations of critical facility experiments are performed for benchmark models generated by specific reconstruction codes, each with its own features for adjusting model parameters, using a nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can still lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to extra steel allocation in the core and to the ENDF/B nuclear data source is examined using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  15. AONBench: A Methodology for Benchmarking XML Based Service Oriented Applications

    Directory of Open Access Journals (Sweden)

    Abdul Waheed

    2007-09-01

    Full Text Available Service Oriented Architectures (SOA) and applications increasingly rely on network infrastructure instead of back-end servers. Cisco Systems' Application Oriented Networking (AON) initiative exemplifies this trend. Benchmarking such infrastructure and its services is expected to play an important role in the networking industry. We present AONBench specifications and methodology to benchmark networked XML application servers and appliances. AONBench is not a benchmarking tool. It is a specification and methodology for performance measurements, which leverages existing XML microbenchmarks and uses HTTP for end-to-end communication. We implement AONBench specifications for end-to-end performance measurements through the public domain HTTP load generation tool ApacheBench and the Apache web server. We present three case studies of using AONBench for architecting real application oriented networking products.
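
    AONBench itself drives XML payloads over HTTP with ApacheBench; the same end-to-end measurement can be sketched in Python using only the standard library. The URL and payload below are placeholders, not part of the AONBench specification:

        import time
        import urllib.request

        URL = "http://localhost:8080/echo"   # hypothetical XML service endpoint
        PAYLOAD = b"<order><item sku='42' qty='3'/></order>"
        N = 200                              # requests per trial

        def run_trial() -> float:
            """Return end-to-end throughput in requests per second."""
            start = time.perf_counter()
            for _ in range(N):
                req = urllib.request.Request(
                    URL, data=PAYLOAD, headers={"Content-Type": "text/xml"})
                with urllib.request.urlopen(req) as resp:
                    resp.read()              # drain the response body
            return N / (time.perf_counter() - start)

        print(f"throughput: {run_trial():.1f} req/s")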

  16. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  17. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  18. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  1. BigDataBench: a Big Data Benchmark Suite from Internet Services

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zhu, Yuqing; Yang, Qiang; He, Yongqiang; Gao, Wanling; Jia, Zhen; Shi, Yingjie; Zhang, Shujie; Zheng, Chen; Lu, Gang; Zhan, Kent; Li, Xiaona; Qiu, Bizhu

    2014-01-01

    As architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure of benchmarking and evaluating these systems rises. Considering the broad use of big data systems, big data benchmarks must include diversity of data and workloads. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purpo...

  2. ENDF/B-V, LIB-V, and the CSEWG benchmarks

    International Nuclear Information System (INIS)

    A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements

  3. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
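
    The basic measurement such a platform collects can be illustrated with a toy example: hash an original and a processed copy, then count differing bits. The 8x8 average hash below is a stand-in for the real perceptual hashing algorithms PHABS would plug in:

        # Toy average hash over an 8x8 grayscale block (pixel values 0-255).
        def average_hash(pixels):
            flat = [p for row in pixels for p in row]
            mean = sum(flat) / len(flat)
            bits = 0
            for p in flat:                   # one bit per pixel: above/below mean
                bits = (bits << 1) | (1 if p > mean else 0)
            return bits

        def hamming(h1: int, h2: int) -> int:
            """Number of differing bits between two 64-bit hashes."""
            return bin(h1 ^ h2).count("1")

        original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
        processed = [[min(255, p + 10) for p in row] for row in original]  # brightness shift
        d = hamming(average_hash(original), average_hash(processed))
        print(f"Hamming distance: {d}/64 bits")  # small distance => robust hash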

  4. Structural Biology Fact Sheet

    Science.gov (United States)

    What is structural biology? Structural biology is a field of science focused ...

  5. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with the benchmark values. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.
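
    The screening comparison itself reduces to a one-line test per chemical: flag it when the measured concentration exceeds the benchmark. The chemical names and values below are hypothetical, not benchmarks from the report:

        # Flag potential contaminants of concern by benchmark comparison.
        BENCHMARKS_MG_KG = {"zinc": 200.0, "cadmium": 20.0, "toluene": 200.0}

        def screen(measured_mg_kg: dict) -> list:
            """Chemicals whose measured soil concentration exceeds the benchmark."""
            return [chem for chem, conc in measured_mg_kg.items()
                    if chem in BENCHMARKS_MG_KG and conc > BENCHMARKS_MG_KG[chem]]

        site_data = {"zinc": 350.0, "cadmium": 4.0, "toluene": 15.0}
        print("Potential contaminants of concern:", screen(site_data))  # ['zinc']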

  6. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  7. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran programming. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreements among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV, linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam incident on the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  8. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  9. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  10. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    Before you lies the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking of university real estate). This report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part are the analysis of other

  11. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 (2010-10-01): General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  12. Results of the isotopic concentrations of VVER calculational burnup credit benchmark No. 2(CB2)

    International Nuclear Information System (INIS)

    Results are presented for the nuclide concentrations of VVER calculational burnup credit benchmark No. 2 (CB2), calculated at the Nuclear Technology Center of Cuba with available codes and libraries. The CB2 benchmark specification, the second phase of the VVER burnup credit benchmark, is summarized. The CB2 benchmark focuses on the VVER burnup credit study proposed at the 1997 AER Symposium. The results obtained are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 and other codes were used for the calculation of the spent fuel concentrations. (author)
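
    For the cooling-time dependence, a nuclide with no significant decaying precursor follows simple exponential decay, which a few lines of Python make concrete. The numbers below are illustrative, not CB2 results:

        import math

        def cooled(n0: float, half_life_yr: float, t_yr: float) -> float:
            """Concentration after t years of cooling: N = N0 * exp(-ln2 * t / T_half)."""
            return n0 * math.exp(-math.log(2.0) * t_yr / half_life_yr)

        # Example: Pu-241, half-life about 14.3 years.
        for t in (1.0, 5.0):
            print(f"after {t:.0f} yr cooling: {cooled(1.0, 14.3, t):.3f} of initial Pu-241")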

  13. Benchmark 2 - Springback of a draw / re-draw panel: Part B: Physical tryout report

    Science.gov (United States)

    Carsley, John E.; Chen, Xu; Pang, Zheng; Liu, Yuyang

    2013-12-01

    The physical tryout report is summarized. A simple, rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of thickness strain and of the position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. Additional panels were drawn to "failure" during both the first draw and the redraw in order to set the parameters for the springback trials and to demonstrate that sheet metal going through a restrike operation can exceed the conventional forming limits of a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and AA 5182-O.

  14. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  15. Application of a shear-modified GTN model to incremental sheet forming

    Science.gov (United States)

    Smith, Jacob; Malhotra, Rajiv; Liu, W. K.; Cao, Jian

    2013-12-01

    This paper investigates the effects of using a shear-modified Gurson-Tvergaard-Needleman model, which is based on the mechanics of voids, for simulating material behavior in the incremental forming process. The problem chosen for analysis is a simplified version of the NUMISHEET 2014 incremental forming benchmark test. The implications of the shear-modification of the model specifically for incremental sheet forming processes are confirmed using finite element analysis. It is shown that including the shear term has a significant effect on fracture timing in incremental forming, which is not well reflected in the observed tensile test simulations for calibration. The numerical implementation and the need for comprehensive calibration of the model are briefly discussed.
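
    For orientation, the yield surface of the standard (unmodified) GTN model, which shear-modified variants build on, is commonly written as below; the shear modification alters the evolution law of the void volume fraction rather than this surface. Here sigma_eq is the von Mises stress, sigma_m the mean (hydrostatic) stress, sigma_y the matrix flow stress, f* the effective void volume fraction, and q1, q2, q3 fitting parameters:

        \Phi = \left(\frac{\sigma_{eq}}{\sigma_y}\right)^{2}
             + 2\,q_1 f^{*}\cosh\!\left(\frac{3\,q_2\,\sigma_m}{2\,\sigma_y}\right)
             - \left(1 + q_3 f^{*2}\right) = 0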

  16. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  17. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
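
    The paper's hybrid step/binary algorithm is not reproduced here, but the binary-search half of such a procedure can be sketched generically: find the highest offered load the gateway forwards within a loss tolerance, given a measurement trial. The measure() function below is a placeholder device model, not real test traffic:

        def measure(offered_pps: float) -> float:
            """Placeholder forwarding-rate model: saturates at 80k packets/s,
            then degrades slightly under overload."""
            capacity = 80_000.0
            return offered_pps if offered_pps <= capacity else capacity * 0.97

        def search_throughput(lo: float, hi: float, loss_tol: float = 0.001) -> float:
            for _ in range(20):              # bisect the offered-load interval
                mid = (lo + hi) / 2.0
                loss = 1.0 - measure(mid) / mid
                if loss <= loss_tol:
                    lo = mid                 # trial passed: try a higher load
                else:
                    hi = mid                 # trial failed: back off
            return lo

        print(f"throughput ~ {search_throughput(1_000, 200_000):,.0f} packets/s")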

  18. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project. PMID:24825693

  19. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

  20. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  1. Standard Specification for Low-Carbon Nickel-Chromium-Molybdenum, Low-Carbon Nickel-Chromium-Molybdenum-Copper, Low-Carbon Nickel-Chromium-Molybdenum-Tantalum, Low-Carbon Nickel-Chromium-Molybdenum-Tungsten, and Low-Carbon Nickel-Molybdenum-Chromium Alloy Plate, Sheet, and Strip

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    Standard Specification for Low-Carbon Nickel-Chromium-Molybdenum, Low-Carbon Nickel-Chromium-Molybdenum-Copper, Low-Carbon Nickel-Chromium-Molybdenum-Tantalum, Low-Carbon Nickel-Chromium-Molybdenum-Tungsten, and Low-Carbon Nickel-Molybdenum-Chromium Alloy Plate, Sheet, and Strip

  2. Standard specification for Nickel-Chromium-Iron alloys (UNS N06600, N06601, N06603, N06690, N06693, N06025, N06045 and N06696), Nickel-Chromium-Cobalt-Molybdenum alloy (UNS N06617), and Nickel-Iron-Chromium-Tungsten alloy (UNS N06674) plate, sheet and strip

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    Standard specification for Nickel-Chromium-Iron alloys (UNS N06600, N06601, N06603, N06690, N06693, N06025, N06045 and N06696), Nickel-Chromium-Cobalt-Molybdenum alloy (UNS N06617), and Nickel-Iron-Chromium-Tungsten alloy (UNS N06674) plate, sheet and strip

  3. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  4. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  5. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  6. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique of benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. Ice sheet in peril

    DEFF Research Database (Denmark)

    Hvidberg, Christine Schøtt

    2016-01-01

    Earth's large ice sheets in Greenland and Antarctica are major contributors to sea level change. At present, the Greenland Ice Sheet (see the photo) is losing mass in response to climate warming in Greenland (1), but the present changes also include a long-term response to past climate transitions. On page 590 of this issue, MacGregor et al. (2) estimate the mean rates of snow accumulation and ice flow of the Greenland Ice Sheet over the past 9000 years based on an ice sheet-wide dated radar stratigraphy (3). They show that the present changes of the Greenland Ice Sheet are partly an ongoing response to the last deglaciation. The results help to clarify how sensitive the ice sheet is to climate changes.
  10. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  11. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

  12. Thermoforming of foam sheet

    OpenAIRE

    Akkerman, Remko; Pronk, Ruud M.

    1997-01-01

    Thermoforming is a widely used process for the manufacture of foam sheet products. Polystyrene foam food trays for instance can be produced by first heating the thermoplastic foam sheet, causing the gas contained to build up pressure and expand, after which a vacuum pressure can be applied to draw the sheet in the required form on the mould. This production method appears to be a very sensitive process with respect to e.g. the sheet temperature, the pressures applied and the cooling time. Mor...

  13. Perforating Thin Metal Sheets

    Science.gov (United States)

    Davidson, M. E.

    1985-01-01

    Sheets only few mils thick bonded together, punched, then debonded. Three-step process yields perforated sheets of metal. (1): Individual sheets bonded together to form laminate. (2): laminate perforated in desired geometric pattern. (3): After baking, laminate separates into individual sheets. Developed for fabricating conductive layer on blankets that collect and remove ions; however, perforated foils have other applications - as conductive surfaces on insulating materials; stiffeners and conductors in plastic laminates; reflectors in antenna dishes; supports for thermal blankets; lightweight grille cover materials; and material for mockup of components.

  14. Ice sheet in peril

    OpenAIRE

    Hvidberg, Christine Schøtt

    2016-01-01

    Earth's large ice sheets in Greenland and Antarctica are major contributors to sea level change. At present, the Greenland Ice Sheet (see the photo) is losing mass in response to climate warming in Greenland (1), but the present changes also include a long-term response to past climate transitions. On page 590 of this issue, MacGregor et al. (2) estimate the mean rates of snow accumulation and ice flow of the Greenland Ice Sheet over the past 9000 years based on an ice sheet-wide dated radar ...

  15. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  16. Benchmarking the QUAD4/TRIA3 element

    Science.gov (United States)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-09-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending and shear phenomena. They are also very new elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effect on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparison with those available in the literature and obtained from other programs like MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete because of the new and creative use of these elements in complex modeling situations. One of the important features of QUAD4 and TRIA3 elements is the offset capability which allows the midsurface of the plate to be noncoincident with the surface of the grid points. None of the previous elements, with the exception of bar (beam), has this capability. The offset capability played a crucial role in the design of QUAD4 and TRIA3 elements. It allowed modeling layered composites, laminated plates and sandwich plates with metal and composite face sheets. Even though the basic implementation of the offset capability has been found to be sound in previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For the purpose of simplicity, references in this paper to the QUAD4 element will also include the TRIA3 element.

  17. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
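
    As a concrete illustration of the linear algebraic formulation noted above (our sketch, not code from the benchmark itself), the PageRank kernel reduces to a sparse matrix-vector power iteration; the toy graph, damping factor, and tolerance are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=100):
    """Power iteration on a sparse adjacency matrix (row i holds i's out-links)."""
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0                 # crude dangling-node handling
    M = sp.diags(1.0 / out_deg) @ adj           # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * (M.T @ r) + (1.0 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Toy 4-node graph with edges 0->1, 1->2, 2->0, 3->2.
adj = sp.csr_matrix(([1, 1, 1, 1], ([0, 1, 2, 3], [1, 2, 0, 2])), shape=(4, 4))
print(pagerank(adj))
```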

  18. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolutions and high signal-to-noise ratios. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  19. Infertility Fact Sheet

    Science.gov (United States)

    This information in Spanish (en español) ... to the fallopian tube instead of the uterus. Gamete intrafallopian transfer (GIFT) involves transferring eggs and sperm into the ...

  20. Thermoforming of foam sheet

    NARCIS (Netherlands)

    Akkerman, Remko; Pronk, Ruud M.

    1997-01-01

    Thermoforming is a widely used process for the manufacture of foam sheet products. Polystyrene foam food trays for instance can be produced by first heating the thermoplastic foam sheet, causing the gas contained to build up pressure and expand, after which a vacuum pressure can be applied to draw the sheet in the required form on the mould.

  1. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  2. Data assimilation and prognostic whole ice sheet modelling with the variationally derived, higher order, open source, and fully parallel ice sheet model VarGlaS

    Directory of Open Access Journals (Sweden)

    D. J. Brinkerhoff

    2013-07-01

    Full Text Available We introduce a novel, higher order, finite element ice sheet model called VarGlaS (Variational Glacier Simulator), which is built on the finite element framework FEniCS. Contrary to standard procedure in ice sheet modelling, VarGlaS formulates ice sheet motion as the minimization of an energy functional, conferring advantages such as a consistent platform for making numerical approximations, a coherent relationship between motion and heat generation, and implicit boundary treatment. VarGlaS also solves the equations of enthalpy rather than temperature, avoiding the solution of a contact problem. Rather than include a lengthy model spin-up procedure, VarGlaS possesses an automated framework for model inversion. These capabilities are brought to bear on several benchmark problems in ice sheet modelling, as well as a 500 yr simulation of the Greenland ice sheet at high resolution. VarGlaS performs well in benchmarking experiments and, given a constant climate and a 100 yr relaxation period, predicts a mass evolution of the Greenland ice sheet that matches present-day observations of mass loss. VarGlaS predicts a thinning in the interior and thickening of the margins of the ice sheet.
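
    To make the variational formulation concrete, here is a minimal sketch of the energy-minimization pattern on the FEniCS framework that VarGlaS builds on; it is our illustration, not VarGlaS code, and it substitutes a simple Poisson-type energy for the full ice-flow functional (the mesh, function space, and forcing are assumptions).

```python
from dolfin import *  # legacy FEniCS interface

# Energy-minimization pattern: define a scalar energy functional J(u), take its
# first variation, and solve the resulting nonlinear system F(u) = 0.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)          # unknown field (stand-in for velocity or enthalpy)
v = TestFunction(V)
f = Constant(1.0)        # illustrative forcing term
bc = DirichletBC(V, Constant(0.0), "on_boundary")

J = (0.5 * inner(grad(u), grad(u)) - f * u) * dx   # energy functional
F = derivative(J, u, v)                            # first variation (weak form)
solve(F == 0, u, bc)                               # minimize J by solving F = 0
```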

  3. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM, and the Lund-Potsdam-Jena (LPJ and Land Processes and eXchanges (LPX dynamic global vegetation models (DGVMs. SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2, but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
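
    The scoring approach described above, in which a model's metric score is judged against the score of the observations' mean value and of a bootstrap-resampled "random" model, can be sketched as follows; the normalised-mean-error metric and the synthetic data are our illustrative assumptions, not the paper's datasets.

```python
import numpy as np

def nme(obs, sim):
    """Normalised mean error: mean |sim - obs| scaled by the observations'
    mean absolute deviation, so the mean-value benchmark scores exactly 1."""
    return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=500)           # stand-in for a gridded observable
sim = obs + rng.normal(0.0, 0.5, size=500)    # stand-in for model output

score_model = nme(obs, sim)
score_mean = nme(obs, np.full_like(obs, obs.mean()))   # mean-value benchmark
score_random = np.mean([nme(obs, rng.choice(obs, obs.size)) for _ in range(200)])

print(f"model {score_model:.2f}  mean {score_mean:.2f}  random {score_random:.2f}")
```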

  4. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  5. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  6. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  7. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  8. Mechanics of Sheeting Joints

    Science.gov (United States)

    Martel, S. J.

    2015-12-01

    Physical breakdown of rock across a broad scale spectrum involves fracturing. In many areas large fractures develop near the topographic surface, with sheeting joints being among the most impressive. Sheeting joints share many geometric, textural, and kinematic features with other joints (opening-mode fractures) but differ in that they are (a) discernibly curved, (b) open near the topographic surface, and (c) form subparallel to the topographic surface. Where sheeting joints are geologically young, the surface-parallel compressive stresses are typically several MPa or greater. Sheeting joints are best developed beneath domes, ridges, and saddles; they also are reported, albeit rarely, beneath valleys or bowls. A mechanism that accounts for all these associations has been sought for more than a century: neither erosion of overburden nor high lateral compressive stresses alone suffices. Sheeting joints are not accounted for by Mohr-Coulomb shear failure criteria. Principles of linear elastic fracture mechanics, together with the mechanical effect of a curved topographic surface, do provide a basis for understanding sheeting joint growth and the pattern sheeting joints form. Compressive stresses parallel to a singly or doubly convex topographic surface induce a tensile stress perpendicular to the surface at shallow depths; in some cases this alone could overcome the weight of overburden to open sheeting joints. If regional horizontal compressive stresses, augmented by thermal stresses, are an order of magnitude or so greater than a characteristic vertical stress that scales with topographic amplitude, then topographic stress perturbations can cause sheeting joints to open near the top of a ridge. This topographic effect can be augmented by pressure within sheeting joints arising from water, ice, or salt. Water pressure could be particularly important in helping drive sheeting joints downslope beneath valleys. Once sheeting joints have formed, the rock sheets between

  9. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  10. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
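
    The loading stress that Pynamic emulates can be mimicked in miniature; the sketch below is our toy stand-in (Pynamic itself generates genuine shared libraries rather than plain Python modules): generate many trivial modules, then time how long the import machinery takes to load them all.

```python
import importlib
import os
import sys
import tempfile
import time

def time_many_imports(n=200):
    """Generate n trivial modules, then time importing them all."""
    d = tempfile.mkdtemp()
    for i in range(n):
        with open(os.path.join(d, f"mod_{i}.py"), "w") as f:
            f.write(f"VALUE = {i}\n")
    sys.path.insert(0, d)
    importlib.invalidate_caches()      # make the new files visible to the importer
    t0 = time.perf_counter()
    for i in range(n):
        importlib.import_module(f"mod_{i}")
    return time.perf_counter() - t0

print(f"imported 200 modules in {time_many_imports():.3f} s")
```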

  11. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  12. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  13. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  14. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  15. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
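
    The fixed-interval, scalable-task scheme reads naturally as a small loop; the sketch below is our illustration of the idea, with a numerical-integration task of doubling resolution standing in for the patent's stored task set, and the benchmarking interval chosen arbitrarily.

```python
import time

def work_unit(k):
    """Scalable task: trapezoidal integration of x^2 on [0, 1] with 2**k panels,
    so each successive task refines the resolution of the same solution."""
    n = 2 ** k
    h = 1.0 / n
    return h * (sum((i * h) ** 2 for i in range(1, n)) + 0.5)

def fixed_time_rating(interval=1.0, max_tasks=30):
    """Perform tasks of ever-increasing resolution until the benchmarking
    interval expires; the rating is how far through the set we progressed."""
    deadline = time.perf_counter() + interval
    completed = 0
    for k in range(max_tasks):
        work_unit(k)
        if time.perf_counter() >= deadline:
            break
        completed = k + 1
    return completed

print("rating:", fixed_time_rating())
```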

  16. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article puts the concept of benchmarking into focus by presenting and discussing different facets of it. Four different applications of benchmarking are described to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before getting started. The difference between results benchmarking and process benchmarking is treated, after which the use of internal and external benchmarking, respectively, is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  17. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  18. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  19. Algebra task & drill sheets

    CERN Document Server

    Reed, Nat

    2011-01-01

    For grades 3-5, our State Standards-based combined resource meets the algebraic concepts addressed by the NCTM standards and encourages the students to review the concepts in unique ways. The task sheets introduce the mathematical concepts to the students around a central problem taken from real-life experiences, while the drill sheets provide warm-up and timed practice questions for the students to strengthen their procedural proficiency skills. Included are opportunities for problem-solving, patterning, algebraic graphing, equations and determining averages. The combined task & drill sheets

  20. Algebra task & drill sheets

    CERN Document Server

    Reed, Nat

    2011-01-01

    For grades 6-8, our State Standards-based combined resource meets the algebraic concepts addressed by the NCTM standards and encourages the students to review the concepts in unique ways. The task sheets introduce the mathematical concepts to the students around a central problem taken from real-life experiences, while the drill sheets provide warm-up and timed practice questions for the students to strengthen their procedural proficiency skills. Included are opportunities for problem-solving, patterning, algebraic graphing, equations and determining averages. The combined task & drill sheets

  1. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  2. The GLAS Standard Data Products Specification--Level 2, Version 9. Volume 14

    Science.gov (United States)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document defines the Level-2 GLAS standard data products. This document addresses the data flow, interfaces, record and data formats associated with the GLAS Level 2 standard data products. The term standard data products refers to those EOS instrument data that are routinely generated for public distribution. The National Snow and Ice Data Center (NSIDC) distributes these products. Each data product has a unique Product Identification code assigned by the Senior Project Scientist. The Level 2 Standard Data Products specifically include those derived geophysical data values (i.e., ice sheet elevation, cloud height, vegetation height, etc.). Additionally, the appropriate correction elements used to transform the Level 1A and Level 1B Data Products into Level 2 Data Products are included. The data are packaged with time tags, precision orbit location coordinates, and data quality and usage flags.

  3. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  4. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  5. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  6. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  7. Polarised light sheet tomography.

    Science.gov (United States)

    Reidt, Sascha L; O'Brien, Daniel J; Wood, Kenneth; MacDonald, Michael P

    2016-05-16

    The various benefits of light sheet microscopy have made it a widely used modality for capturing three-dimensional images. It is mostly used for fluorescence imaging, but recently another technique called light sheet tomography solely relying on scattering was presented. The method was successfully applied to imaging of plant roots in transparent soil, but is limited when it comes to more turbid samples. This study presents a polarised light sheet tomography system and its advantages when imaging in highly scattering turbid media. The experimental configuration is guided by Monte Carlo radiation transfer methods, which model the propagation of a polarised light sheet in the sample. Images of both reflecting and absorbing phantoms in a complex collagenous matrix were acquired, and the results for different polarisation configurations are compared. Focus scanning methods were then used to reduce noise and produce three-dimensional reconstructions of absorbing targets. PMID:27409945

  8. HRSA Data Fact Sheets

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Health Resources and Services Administration (HRSA) Data Fact Sheets provide summary data about HRSA’s activities in each Congressional District, County, State,...

  9. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  11. Energy information sheets

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets series was developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications as referenced at the end of each sheet.

  12. Biodiesel Basics (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2014-06-01

    This fact sheet provides a brief introduction to biodiesel, including a discussion of biodiesel blends, which blends are best for which vehicles, where to buy biodiesel, how biodiesel compares to diesel fuel in terms of performance, how biodiesel performs in cold weather, whether biodiesel use will plug vehicle filters, how long-term biodiesel use may affect engines, biodiesel fuel standards, and whether biodiesel burns cleaner than diesel fuel. The fact sheet also dismisses the use of vegetable oil as a motor fuel.

  13. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergies is developed. The sources of synergies in conducting benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  14. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  15. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practitioners.

  16. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprise industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming’s PDCA and Six Sigma DMAIC theory. It provides a step-by-step method to simplify implementation and to optimize benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies a starting point for benchmarking and guided them toward the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  17. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  18. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing, which would allow quantifying and comparing the performance of

  19. Benchmarking implementations of lazy functional languages

    NARCIS (Netherlands)

    Hartel, P.H.; Langendoen, K.G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of t

  20. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  1. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    Benchmarking has become a fundamental part of modern health care systems, but unfortunately, no benchmarking framework is unanimously accepted for assessing both quality and performance. The aim of this paper is to present a benchmarking model that is able to take different stakeholder perspectives into account. By presenting performance as a function of a patient perspective, an operations management perspective, and an employee perspective, a more holistic approach to benchmarking is proposed. By collecting statistical information from several national and regional agencies and internal databases, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the model...
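
    A hedged sketch of the aggregation idea follows; the perspective names mirror the abstract, but the indicators, values, and weights are invented for illustration.

```python
# Illustrative indicator hierarchy for one healthcare unit; all names,
# values (0-1, higher is better) and weights are hypothetical.
hierarchy = {
    "patient":    {"weight": 0.40, "indicators": {"satisfaction": 0.82, "readmission": 0.71}},
    "operations": {"weight": 0.35, "indicators": {"bed_utilisation": 0.66, "waiting_time": 0.58}},
    "employee":   {"weight": 0.25, "indicators": {"turnover": 0.74, "sick_leave": 0.69}},
}

def perspective_score(p):
    """Simple mean of the indicators within one perspective."""
    vals = p["indicators"].values()
    return sum(vals) / len(vals)

def overall_score(h):
    """Weighted aggregation of the perspective scores into one benchmark score."""
    return sum(p["weight"] * perspective_score(p) for p in h.values())

print(f"overall benchmark score: {overall_score(hierarchy):.3f}")
```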

  2. Robot automated EMPT sheet welding

    OpenAIRE

    Pasquale, Pablo; Schäfer, Ralph

    2012-01-01

    Many industrial applications require sheet to sheet or sheet to tube joints. The electromagnetic pulse technology is capable of producing these kinds of joints. In the literature, many examples of sheet to sheet solid state welding between similar and dissimilar metals are presented and analyzed in detail. However, most of the presented welding applications, which are very focussed on the academic level, are simple specimens, for example for tensile tests. These specimens are usuall...

  3. A proposed benchmark problem for cargo nuclear threat monitoring

    Science.gov (United States)

    Holmes, Thomas Wesley; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as gamma-ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box style NaI (Tl) detector was placed 1 m from the point source located in the center with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
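
    For orientation only, and not part of the proposed benchmark specification, a narrow-beam estimate of the uncollided 662 keV (137Cs) transmission through the three nested shields can be computed by hand; the attenuation coefficients and thicknesses below are rough assumed values, and a real analysis would use Monte Carlo transport as the abstract describes.

```python
import math

# Rough, assumed linear attenuation coefficients at 662 keV (cm^-1); these are
# NOT benchmark data, only order-of-magnitude values for illustration.
mu = {"lead": 1.2, "aluminum": 0.20, "plywood": 0.06}
thickness_cm = {"lead": 2.54, "aluminum": 2.54, "plywood": 2.54}  # assumed 1 in. each

# Uncollided (unscattered) transmission through the three layers in series.
transmission = math.exp(-sum(mu[m] * thickness_cm[m] for m in mu))
print(f"uncollided transmission fraction: {transmission:.3e}")
```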

  4. A proposed benchmark problem for cargo nuclear threat monitoring

    International Nuclear Information System (INIS)

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma-ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box style NaI (Tl) detector was placed 1 m from the point source located in the center with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  5. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma-ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box style NaI (Tl) detector was placed 1 m from the point source located in the center with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.

  6. LITMUS: An Open Extensible Framework for Benchmarking RDF Data Management Solutions

    OpenAIRE

    Thakkar, Harsh; Dubey, Mohnish; Sejdiu, Gezim; Ngomo, Axel-Cyrille Ngonga; Debattista, Jeremy; Lange, Christoph; Lehmann, Jens; Auer, Sören; Vidal, Maria-Esther

    2016-01-01

    Developments in the context of Open, Big, and Linked Data have led to an enormous growth of structured data on the Web. To keep up with the pace of efficient consumption and management of the data at this rate, many data management solutions have been developed for specific tasks and applications. We present LITMUS, a framework for benchmarking data management solutions. LITMUS goes beyond classical storage benchmarking frameworks by allowing for analysing the performance of frameworks across...

  7. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  8. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
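
    A simplified sketch of the knockout scoring follows; it is ours, not the paper's code, and it uses an SI-style spread over a time-ordered contact list in place of full SIR/SIS dynamics (the toy contacts, seed, and infection probability are assumptions).

```python
import random

def spread(events, seed, beta=0.5, removed=frozenset(), rng=None):
    """SI-style propagation over time-ordered contact events (t, u, v);
    returns the final outbreak size."""
    rng = rng or random.Random(0)
    infected = set() if seed in removed else {seed}
    for _, u, v in sorted(events):
        for a, b in ((u, v), (v, u)):
            if a in infected and b not in infected and b not in removed:
                if rng.random() < beta:
                    infected.add(b)
    return len(infected)

def tko(events, node, seed, trials=200):
    """Temporal knockout score: mean outbreak shrinkage when `node` is removed."""
    base = sum(spread(events, seed, rng=random.Random(i)) for i in range(trials))
    ko = sum(spread(events, seed, removed=frozenset({node}), rng=random.Random(i))
             for i in range(trials))
    return (base - ko) / trials

events = [(0, "a", "b"), (1, "b", "c"), (2, "c", "d"), (3, "b", "d")]  # toy contacts
print({n: round(tko(events, n, seed="a"), 2) for n in "abcd"})
```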

  9. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome, in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  10. Validation of gadolinium burnout using PWR benchmark specification

    Energy Technology Data Exchange (ETDEWEB)

    Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2014-07-01

    Graphical abstract: - Highlights: • We present methodology for validation of gadolinium burnout in PWR. • We model 17 × 17 PWR fuel assembly using MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E for Gd154, Gd156, Gd157, Gd158 and Gd160 shows good agreement of ±10%. • The C/E for Gd152 and Gd155 shows poor agreement below ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code – MCB. The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It seems that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally we show the time evolutions of the infinite neutron multiplication factor Kinf, FIMA burnup, U235 and Gd155–Gd158. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers due to the large neutron absorption cross-sections of Gd155 and Gd157.

  11. Validation of gadolinium burnout using PWR benchmark specification

    International Nuclear Information System (INIS)

    Graphical abstract: - Highlights: • We present methodology for validation of gadolinium burnout in PWR. • We model 17 × 17 PWR fuel assembly using MCB code. • We demonstrate C/E ratios of measured and calculated concentrations of Gd isotopes. • The C/E for Gd154, Gd156, Gd157, Gd158 and Gd160 shows good agreement of ±10%. • The C/E for Gd152 and Gd155 shows poor agreement below ±10%. - Abstract: The paper presents a comparative analysis of measured and calculated concentrations of gadolinium isotopes in spent nuclear fuel from the Japanese Ohi-2 PWR. The irradiation of the 17 × 17 fuel assembly containing pure uranium and gadolinia-bearing fuel pins was numerically reconstructed using the Monte Carlo Continuous Energy Burnup Code – MCB. The reference concentrations of gadolinium isotopes were measured in the early 1990s at the Japan Atomic Energy Research Institute. It seems that the measured concentrations were never used for validation of gadolinium burnout. In our study we fill this gap and assess the quality of both the applied numerical methodology and the experimental data. Additionally we show the time evolutions of the infinite neutron multiplication factor Kinf, FIMA burnup, U235 and Gd155–Gd158. Gadolinium-based materials are commonly used in thermal reactors as burnable absorbers due to the large neutron absorption cross-sections of Gd155 and Gd157.

  12. Deliverable 1.2 Specification of industrial benchmark tests

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Ravn, Bjarne Gottlieb

    Technical report for the Growth project: IMPRESS, Improvement of precision in forming by simultaneous modelling of deflections in workpiece-die-press system - Output from WP1: Numerical simulation of deflections in workpiece-die-press system....

  13. Earnings Benchmarks in International hotel firms

    Directory of Open Access Journals (Sweden)

    Laura Parte Esteban

    2011-11-01

    Full Text Available This paper focuses on earnings management around earnings benchmarks (the avoiding-losses and avoiding-earnings-decreases hypotheses) in international and non-international firms belonging to the Spanish hotel industry. First, frequency histograms are used to determine the existence of a discontinuity in earnings in both segments. Second, the use of discretionary accruals as a tool to meet earnings benchmarks is analysed in international and non-international firms. Empirical evidence shows that both international and non-international firms meet earnings benchmarks. Different behaviour between international and non-international firms is also noted.

  14. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillations observed at Boiling Water Reactor plants. Stability analysis is part of Northern States Power's Reload Safety Evaluation of the Monticello Plant.

  15. Laboratory Powder Metallurgy Makes Tough Aluminum Sheet

    Science.gov (United States)

    Royster, D. M.; Thomas, J. R.; Singleton, O. R.

    1993-01-01

    Aluminum alloy sheet exhibits high tensile and Kahn tear strengths. Rapid solidification of aluminum alloys in powder form and subsequent consolidation and fabrication processes are used to tailor parts made of these alloys to satisfy specific aerospace design requirements such as high strength and toughness.

  16. Safety advice sheets

    CERN Multimedia

    HSE Unit

    2013-01-01

    You never know when you might be faced with questions such as: when/how should I dispose of a gas canister? Where can I find an inspection report? How should I handle/store/dispose of a chemical substance…?   The SI section of the DGS/SEE Group is primarily responsible for safety inspections, evaluating the safety conditions of equipment items, premises and facilities. On top of this core task, it also regularly issues “Safety Advice Sheets” on various topics, designed to be of assistance to users but also to recall and reinforce safety rules and procedures. These clear and concise sheets, complete with illustrations, are easy to display in the appropriate areas. A number of safety advice sheets have been issued so far, and other sheets will be published shortly. Suggestions are welcome and should be sent to the SI section of the DGS/SEE Group. Please send enquiries to general-safety-visits.service@cern.ch.

  17. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    Full Text Available The aim of the article. The consideration of theoretical and practical aspects of benchmarking at universities is the main task of this research. First, the researcher identified the essence of benchmarking: it involves comparing the characteristics of a college or university with those of leading competitors in the industry and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. Benchmarking therefore involves self-evaluation, including systematic collection of data and information, with the view to making relevant comparisons of the strengths and weaknesses of performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvement, monitors progress, reviews benefits and assures the adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: (1) identification of the study object; (2) identification of businesses for comparison; (3) selection of data collection methods; (4) determining variations in terms of efficiency and determination of the levels of future results; (5) communicating the results of benchmarking; (6) development of an implementation plan, initiating the implementation, monitoring implementation; (7) definition of new benchmarks. The researcher gave the results of practical use of the benchmarking algorithm at universities. In particular, monitoring and SWOT analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the potential for benchmarking of universities were: (1) the presence of new teaching methods at universities; (2) the involvement of foreign lecturers and partners of other universities for cooperation; (3) promoting education services for target groups; (4) violation of

  18. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  19. Energy information sheets

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-02

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the general public. Written for the general public, the EIA publication Energy Information Sheets was developed to provide information on various aspects of fuel production, prices, consumption and capability. The information contained herein pertains to energy data as of December 1991. Additional information on related subject matter can be found in other EIA publications as referenced at the end of each sheet.

  20. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  1. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  2. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  3. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...

  4. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  5. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  6. Benchmarking implementations of lazy functional languages

    OpenAIRE

    Hartel, P.H.; Langendoen, K. G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of the quality of compilers for different lazy functional languages. Aspects studied include compile time, execution time, and ease of programming, determined by the availability of certain key features.

  7. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
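
    One way to realize the scoring system mentioned in point (2) is to map each data–model mismatch onto a bounded score and combine scores across processes. The Python sketch below is a hypothetical illustration of that idea, not part of the paper; the exponential mapping and the equal default weights are assumptions.

      import numpy as np

      def skill_score(model, obs):
          """Map a normalized RMSE mismatch into a (0, 1] score (1 = perfect)."""
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          nrmse = np.sqrt(np.mean((model - obs) ** 2)) / (np.std(obs) + 1e-12)
          return float(np.exp(-nrmse))

      def combined_score(scores, weights=None):
          """Aggregate per-process scores into one benchmark score."""
          scores = np.asarray(scores, float)
          if weights is None:
              weights = np.full_like(scores, 1.0 / len(scores))
          return float(np.sum(np.asarray(weights, float) * scores))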

  8. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  9. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons why this basic phenomenon, addressed by classical treatments dating from the first decades of the last century, is considered so relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmark ranges of performance (<6, 6–7.99, and ≥8 seconds). PMID:24174581
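
    As a hedged illustration (not from the paper), the validated benchmark ranges translate directly into a simple classification rule:

      def t25fw_category(seconds: float) -> str:
          """Map a Timed 25-Foot Walk time onto the benchmark ranges above."""
          if seconds < 6.0:
              return "<6 s"
          if seconds < 8.0:
              return "6-7.99 s: occupational change, cane use, some IADL help"
          return ">=8 s: walker use, inability to perform IADLs"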

  12. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  13. Crib sheets and exam performance in a data structures course

    Science.gov (United States)

    Hamouda, Sally; Shaffer, Clifford A.

    2016-01-01

    In this paper, we study the relationship between the use of "crib sheets" or "cheat sheets" and performance on in-class exams. Our extensive survey of the existing literature shows that it is not decisive on the questions of when or whether crib sheets actually help students to either perform better on an exam or better learn the material. We report on our own detailed analysis for a body of crib sheets created for the final exam in a junior-level Data Structures and Algorithms course. We wanted to determine whether there is any feature of the crib sheets that correlates to good exam scores. Exam performance was compared against a number of potential indicators for quality in a crib sheet. We have found that students performed significantly better on questions at the comprehension level of Bloom's taxonomy when their crib sheet contained good information on the topic, while performance on questions at higher levels of the taxonomy did not show correlation to crib sheet contents. We have also seen that students at certain levels of performance on the final exam (specifically, medium-to-high performance) did relatively better on certain questions than other students at that performance level when they had good coverage of that question's topic on their crib sheet.

  14. Fact Sheet on Stress

    Science.gov (United States)

    Q&A on Stress for Adults: How it affects your health and ... to avoid more serious health effects. What is stress? Stress can be defined as the brain's response ...

  15. CSS - Cascading Style Sheets

    OpenAIRE

    Martinelli, Massimo

    2009-01-01

    Curso "CSS - Cascading Style Sheets" sobre programación web con CSS para el "Máster doble competencia en ciencias informáticas y ciencias sociales" ("Master double competence in computer science and social science"). Proyecto TEMPUS JEP – 26235-2005

  16. Ethanol Basics (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2015-01-01

    Ethanol is a widely-used, domestically-produced renewable fuel made from corn and other plant materials. More than 96% of gasoline sold in the United States contains ethanol. Learn more about this alternative fuel in the Ethanol Basics Fact Sheet, produced by the U.S. Department of Energy's Clean Cities program.

  17. Production (information sheets)

    NARCIS (Netherlands)

    2007-01-01

    Documentation sheets: Geo energy; Integrated System Approach Petroleum Production (ISAPP); The value of smartness; Reservoir permeability estimation from production data; Coupled modeling for reservoir application; Toward an integrated near-wellbore model; TNO conceptual framework for "E&P Unce

  18. Pseudomonas - Fact Sheet

    OpenAIRE

    Public Health Agency

    2012-01-01

    Fact sheet on Pseudomonas, including: What is Pseudomonas? What infections does it cause? Who is susceptible to pseudomonas infection? How will I know if I have pseudomonas infection? How can Pseudomonas be prevented from spreading? How can I protect myself from Pseudomonas? How is Pseudomonas infection treated?

  19. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent the results obtained during PhD research (being actually a part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the near-complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the NoSQL side of the database world such tools are mostly missing and sorely needed.
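
    For context, the core of such a benchmark tool is a timing harness that drives a fixed workload against the database and reports throughput statistics. The sketch below is a generic illustration, not the tool described in the article; the workload callable and repeat counts are assumptions.

      import statistics
      import time

      def run_workload(op, n_ops=1000, repeats=5):
          """Time `op` (a zero-argument callable wrapping one database
          operation) and report throughput over several repetitions."""
          throughputs = []
          for _ in range(repeats):
              start = time.perf_counter()
              for _ in range(n_ops):
                  op()
              elapsed = time.perf_counter() - start
              throughputs.append(n_ops / elapsed)
          return {"ops_per_s_mean": statistics.mean(throughputs),
                  "ops_per_s_stdev": statistics.stdev(throughputs)}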

  20. Benchmark 3 - Springback of an Al-Mg alloy in warm forming conditions

    Science.gov (United States)

    Manach, Pierre-Yves; Coër, Jérémy; Jégat, Anthony; Laurent, Hervé; Yoon, Jeong Whan

    2016-08-01

    Accurate prediction of springback is a long-standing challenge in the field of warm forming of aluminium sheets. The objective of this benchmark is to predict the effect of temperature on the springback process through the use of the split-ring test [1] with an Al-Mg alloy. This test consists of determining the residual stress state by measuring the opening of a ring cut from the sidewall of a formed cylindrical cup. Cylindrical cups are drawn with a heated die and blank-holder at temperatures of 20, 150 and 240°C. The force-displacement response during the forming process, the thickness and the earing profiles of the cup as well as the ring opening and the temperature of the blank are used to evaluate numerical predictions submitted by the benchmark participants. Problem description, material properties, and simulation reports with experimental data are summarized.

  1. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  2. The Physics of Ice Sheets

    Science.gov (United States)

    Bassis, J. N.

    2008-01-01

    The great ice sheets in Antarctica and Greenland are vast deposits of frozen freshwater that contain enough water to raise sea level by approximately 70 m if they were to completely melt. Because of the potentially catastrophic impact that ice sheets can have, it is important that we understand how ice sheets have responded to past climate changes and…

  3. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental effort to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking...... perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind...

  4. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  5. Organizational and economic aspects of benchmarking innovative products at the automobile industry enterprises

    Directory of Open Access Journals (Sweden)

    L.M. Taraniuk

    2016-06-01

    Full Text Available The aim of the article. The aim of the article is to determine the nature and characteristics of the use of benchmarking in the activity of domestic automobile industry enterprises under current economic conditions. The results of the analysis. The article defines the concept of benchmarking, examines the stages of benchmarking and determines the efficiency of benchmarking in the work of automakers. The historical aspects of the emergence of the benchmarking method in world economics are considered, as are the economic aspects of benchmarking in the work of automobile industry enterprises. The stages of benchmarking innovative products are analysed against the modern development of the productive forces and the impact of market factors on the economic activities of companies, including automobile industry enterprises. Attention is focused on the specifics of implementing benchmarking at companies of the automobile industry. Statistics on the number of electric vehicle owners worldwide are considered, and the authors research the market for electric vehicles in Ukraine. The need to use benchmarking to improve the competitiveness of the national automobile industry, especially CJSC “Zaporizhia Automobile Building Plant”, is also considered, and the authors suggest reasonable steps for its improvement. The authors improve the methodical approach to assessing the selection of vehicles with the best technical parameters based on benchmarking, which, unlike existing approaches, is based on the calculation of an integral factor of the technical specifications of vehicles, in order to establish the most competitive products of automobile industry companies among those evaluated. The main indicators of the national production of electric vehicles are shown. Attention is paid to important development paths for CJSC “Zaporizhia Automobile Building Plant”, where the authors establish the aspects that need attention in the management of the
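
    As a hedged illustration of such an integral factor (the parameter set, values and weights below are invented for the example, not taken from the article), each technical parameter can be normalized against the best value among the evaluated vehicles and combined as a weighted sum:

      def integral_factor(params, best, weights):
          """Weighted sum of technical parameters, each normalized against the
          best value among the evaluated vehicles (higher-is-better parameters;
          invert the ratio for lower-is-better ones such as energy use)."""
          return sum(w * params[k] / best[k] for k, w in weights.items())

      # Invented example values: a candidate vehicle scored against class bests.
      weights = {"range_km": 0.5, "power_kw": 0.3, "top_speed_kmh": 0.2}
      car = {"range_km": 320, "power_kw": 80, "top_speed_kmh": 130}
      best = {"range_km": 400, "power_kw": 100, "top_speed_kmh": 150}
      print(integral_factor(car, best, weights))  # ~0.81; 1.0 = best on all counts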

  6. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  7. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. Details on the codes, friction parameters, mesh technology, CPU, and material models are summarized at the end of this report, along with participant information.

  8. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging: the chosen benchmark should use similar data collection and presentation methods, and differences in surveillance environments, including regulations, should be taken into consideration. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.

  9. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages of internal benchmarking technology is formed; it is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of internal benchmarking are defined in the article. The sequence and methodology of implementation of individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the processes of research, organization of production and distribution, management and marketing methods applied to reference objects in order to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of benchmarking, identify their characteristics, study appropriate areas of use and develop a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities which are available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  10. Antarctic ice sheet GLIMMER model test and its simplified model on 2-dimensional ice flow

    Institute of Scientific and Technical Information of China (English)

    Xueyuan Tang; Zhanhai Zhang; Bo Sun; Yuansheng Li; Na Li; Bangbing Wang; Xiangpei Zhang

    2008-01-01

    The 3-dimensional finite difference thermodynamically coupled model of the Antarctic ice sheet, the GLIMMER model, is described. An idealized ice sheet numerical test was conducted under the EISMINT-I benchmark, and the characteristic curves of ice sheets under steady state were obtained. Based on this, the model was simplified from a 3-dimensional one to a 2-dimensional one. Improvements to the difference method and coordinate system are proposed. Evolution of the 2-dimensional ice flow was simulated under coupled temperature field conditions. The results showed that the characteristic curves derived from the conservation of mass, momentum and energy agree with the ice sheet profile simulated with the GLIMMER model and with theoretical results. The application prospects of the simplified 2-dimensional ice flow model for simulating the age-depth-accumulation relation in the Dome A region are discussed.
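
    Simplified ice-flow evolution of this kind is usually built on the shallow-ice approximation, in which ice thickness evolves as a nonlinear diffusion problem. The following Python sketch of one explicit 1-D step is an illustration under that assumption, not GLIMMER code; the parameter values are standard EISMINT-I choices, and the time step must respect the usual explicit-scheme stability limit.

      import numpy as np

      RHO_G = 910.0 * 9.81            # ice density times gravity [Pa per m of ice]
      A, N = 1.0e-16, 3               # Glen flow-rate factor [Pa^-3 a^-1], exponent

      def sia_step(H, dx, dt, mdot):
          """One explicit step of dH/dt = mdot + d/dx(D dH/dx) on a flat bed,
          with shallow-ice diffusivity D = 2A/(N+2) (rho g)^N H^(N+2) |dh/dx|^(N-1).
          H [m], dx [m], dt [a], mdot [m/a] (surface mass balance)."""
          h_x = np.gradient(H, dx)                       # surface slope (flat bed)
          D = 2.0 * A / (N + 2) * RHO_G**N * H**(N + 2) * np.abs(h_x)**(N - 1)
          flux = D * h_x                                 # diffusive ice flux
          H_new = H + dt * (mdot + np.gradient(flux, dx))
          return np.maximum(H_new, 0.0)                  # no negative thickness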

  11. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
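
    The first-tier screening comparison described above reduces to a hazard quotient: the measured media concentration divided by the benchmark. The snippet below is a generic illustration of that screen, not code from the report; the function and variable names are invented.

      def hazard_quotient(concentration, benchmark):
          """Tier-1 screen: HQ = C / benchmark; HQ > 1 flags a potential hazard."""
          return concentration / benchmark

      def screen_site(measured, benchmarks):
          """Return chemicals whose media concentration exceeds the species
          benchmark, mapped to their hazard quotients."""
          return {chem: hazard_quotient(c, benchmarks[chem])
                  for chem, c in measured.items()
                  if hazard_quotient(c, benchmarks[chem]) > 1.0}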

  12. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  13. Plasma focus discharges with multiple current sheets

    International Nuclear Information System (INIS)

    Systematic measurements of the structure and the propagation velocity v(z, r) of the current sheets were carried out for deuterium plasma-focus discharges. Each discharge had a multiplicity N ≤ 4 (N = 2 in ∼80% of the discharges) of pinching current sheets. The linear correlation coefficients between the total neutron yield per discharge, Y, and the velocity vi of the i-th sheet were determined for vi at different locations (z, r) -- z is the coordinate along the electrode axis orthogonal to r -- during the sheet rolling off the interelectrode gap (when dv/dz achieves its maximum value) and during the radial compression (maximum of dv/dr). Two quasi-identical plasma focus machines (with and without field-distortion elements at the electrode breech) were utilized at variable levels of the capacitor bank energy W in the interval 13 kJ ≤ W ≤ 30 kJ, and peak electrode current IM (≳ 1 MA). The difference between the two systems was limited to a variation ΔC = 0.2 C and ΔL = 0.2 Lc of the capacitance C and inductance Lc of the two-module capacitor bank, in order to determine the derivatives ΔY/ΔL, ΔY/ΔW. The acquired information is applied to further increase Y (> 10^10 neutrons per discharge from D + D reactions) for the assigned IM, W values. A neural network analysis is carried out to assess the bearing on Y -- and on the neutron pulse duration -- of the current-sheet mutual interference, specifically of N and of the time-spacing between sheets.

  14. Biomolecular Science (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-04-01

    A brief fact sheet about NREL Photobiology and Biomolecular Science. The research goal of NREL's Biomolecular Science is to enable cost-competitive advanced lignocellulosic biofuels production by understanding the science critical for overcoming biomass recalcitrance and developing new product and product intermediate pathways. NREL's Photobiology focuses on understanding the capture of solar energy in photosynthetic systems and its use in converting carbon dioxide and water directly into hydrogen and advanced biofuels.

  15. Topographical atlas sheets

    Science.gov (United States)

    Wheeler, George Montague

    1876-01-01

    The following topographical atlas sheets, accompanying Appendix J.J. of the Annual Report of the Chief of Engineers, U.S. Army (being the Annual Report upon U.S. Geographical Surveys), have been published during the fiscal year ending June 30, 1876, and are a portion of the series projected to embrace the territory of the United States lying west of the 100th meridian.

  16. Information sheets on energy

    International Nuclear Information System (INIS)

    These sheets, presented by the CEA, provide information in the energy domain on the following topics: the world energy demand and energy policy in France and in Europe, the role of nuclear power in the energy of the future, greenhouse gas emissions and the fight against the greenhouse effect, the cost of carbon dioxide storage, and the hydrogen economy. (A.L.B.)

  17. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  18. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  19. Musical Sheet Segmentation

    Directory of Open Access Journals (Sweden)

    Vinaya V

    2014-05-01

    Full Text Available Music has had an important role in human life from time immemorial. Music has been shared in two ways: aurally and as written documents known as musical notes or musical scores. Many ancient cultures used musical symbols to represent melodies and lyrics, but none of these notations is as comprehensive as a written language or document. Thus the knowledge of ancient music is limited to a few fragments, which are mostly unpublished. Hence, to preserve such music, we need a computerized system to digitize and decode musical symbol images and reconstruct them as a new score in a machine-readable format; here, that format is MIDI. Prior to converting the musical symbols into MIDI format, musical sheets need to be segmented to isolate the musical symbols. Since synthetically generated musical sheets are used here, the segmentation process can be carried out using a recursive graph-cut method. Here we discuss the initial steps, such as staff line removal, text removal and segmentation of the musical sheet.
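
    As an illustration of the staff-line-removal step mentioned above (not the authors' implementation), a common baseline on synthetic, skew-free scores is horizontal projection: rows whose black-pixel density exceeds a threshold are treated as staff lines and blanked. Real systems then repair the symbol pixels erased along those rows.

      import numpy as np

      def remove_staff_lines(binary, threshold=0.8):
          """binary: 2-D array with 1 = black pixel. Rows whose black-pixel
          ratio exceeds `threshold` are treated as staff lines and blanked."""
          row_density = binary.mean(axis=1)
          out = binary.copy()
          out[row_density > threshold, :] = 0
          return out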

  20. Analysis of ANS LWR physics benchmark problems.

    Energy Technology Data Exchange (ETDEWEB)

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the Keff and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the Keff and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  1. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  2. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic of increasing importance for water utilities in times of rising energy costs and pressure to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and optimisation potential of a WWTP is difficult, as plants vary greatly in size, process layout and other influencing factors. To overcome these limitations, it is necessary to compare energy efficiency against a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  3. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  4. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different … sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers, including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point … profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving …
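
    The "profiles" in the truncated passage above are presumably Dolan and Moré performance profiles, the usual device for such solver comparisons. A minimal sketch with invented timings (purely illustrative, not data from the article):

```python
# Sketch of a Dolan-Moré performance profile: for each solver, plot the
# fraction of problems solved within a factor tau of the best solver.
import numpy as np
import matplotlib.pyplot as plt

times = {
    "OC":    np.array([1.0, 2.5, 4.0, np.inf]),  # hypothetical timings;
    "MMA":   np.array([1.2, 2.0, 3.5, 8.0]),     # np.inf marks a failed run
    "IPOPT": np.array([0.9, 3.0, 5.0, 7.5]),
}
best = np.min(np.vstack(list(times.values())), axis=0)

taus = np.linspace(1, 10, 200)
for solver, t in times.items():
    ratios = t / best
    profile = [(ratios <= tau).mean() for tau in taus]
    plt.plot(taus, profile, label=solver)
plt.xlabel("performance ratio tau")
plt.ylabel("fraction of problems solved")
plt.legend()
plt.show()
```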

  5. Thermal Transport Properties of Dry Spun Carbon Nanotube Sheets

    Directory of Open Access Journals (Sweden)

    Heath E. Misak

    2016-01-01

    The thermal properties of carbon nanotube (CNT) sheet were explored and compared to copper in this study. The CNT-sheet was made by dry spinning CNTs into a nonwoven sheet. This nonwoven CNT-sheet has anisotropic properties in the in-plane and out-of-plane directions; the in-plane direction has much higher thermal conductivity than the out-of-plane direction. The in-plane thermal conductivity was found by thermal flash analysis, and the out-of-plane thermal conductivity was found by a hot-disk method. The thermal radiative properties were examined and compared to thermal transport theory. The CNT-sheet was heated in vacuum and its temperature was measured with an IR camera. The heat flux of the CNT-sheet was compared to that of copper, and it was found that the CNT-sheet has significantly higher specific (per-weight) heat transfer properties than copper. CNT-sheet is a potential candidate to replace copper in thermal transport applications where weight is a primary concern, such as in the automobile, aircraft, and space industries.

  6. Titanium Sheet Fabricated from Powder for Industrial Applications

    Energy Technology Data Exchange (ETDEWEB)

    Peter, William H [ORNL; Muth, Thomas R [ORNL; Chen, Wei [ORNL; Yamamoto, Yukinori [ORNL; Jolly, Brian C [ORNL; Stone, Nigel [CSIRO ICT Center, Australia; Cantin, G.M.D. [CSIRO ICT Center, Australia; Barnes, John [CSIRO ICT Center, Australia; Paliwal, Muktesh [Ametek, Inc.; Smith, Ryan [Ametek, Inc.; Capone, Joseph [Ametek, Inc.; Liby, Alan L [ORNL; Williams, James C [Ohio State University; Blue, Craig A [ORNL

    2012-01-01

    In collaboration with Ametek and Commonwealth Scientific and Industrial Research Organization (CSIRO), Oak Ridge National Laboratory has evaluated three different methods for converting titanium hydride-dehydride (HDH) powder into thin gauge titanium sheet from a roll compacted preform. Methodologies include sintering, followed by cold rolling and annealing; direct hot rolling of the roll-compacted sheet; and hot rolling of multiple layers of roll compacted sheet that are encapsulated in a steel can. All three methods have demonstrated fully consolidated sheet, and each process route has the ability to produce sheet that meets ASTM B265 specifications. However, not every method currently provides sheet that can be highly formed without tearing. The degree of sintering between powder particles, post processing density, and the particle to particle boundary layer where compositional variations may exist, have a significant effect on the ability to form the sheet into useful components. Uniaxial tensile test results, compositional analysis, bend testing, and biaxial testing of the titanium sheet produced from hydride-dehydride powder will be discussed. Multiple methods of fabrication and the resulting properties can then be assessed to determine the most economical means of making components for industrial applications.

  7. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. Set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting, RWH, and grey water, GW) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are noted throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
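
    To make the band-rating idea concrete, a toy mapping from per-capita daily consumption to bands might look as follows; the thresholds here are hypothetical placeholders, not the bands proposed in the paper.

```python
# Illustrative band-rating helper in the spirit of the paper's benchmarking
# idea; the cut-off values below are hypothetical, not the authors' bands.
def water_band(litres_per_person_per_day: float) -> str:
    bands = [(80, "A"), (100, "B"), (120, "C"), (150, "D")]  # hypothetical cut-offs
    for threshold, band in bands:
        if litres_per_person_per_day <= threshold:
            return band
    return "E"

print(water_band(95))  # -> "B" under the assumed thresholds
```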

  8. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  9. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    … TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost efficiency estimates for each TSO, as well as dynamic results in terms of technological improvement rate and efficiency catch-up speed. In this paper, we provide the methodology for the benchmarking, using non-parametric DEA under weight restrictions, as well as an analysis of the static cost efficiency …
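
    The non-parametric DEA mentioned above reduces, in its simplest input-oriented CCR form, to one small linear program per unit. A sketch with SciPy and invented data follows; the actual study adds weight restrictions and uses real TSO data, neither of which is reproduced here.

```python
# Input-oriented CCR DEA: for unit o, minimize theta subject to
#   X @ lam <= theta * x_o   (a composite peer uses no more input)
#   Y @ lam >= y_o           (and produces at least as much output)
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 7.0, 8.0, 4.0],   # invented inputs (m x n units)
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])  # invented outputs (s x n units)

def ccr_efficiency(o: int) -> float:
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]               # decision vector [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])         # X lam - theta x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y]) # -Y lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                            # optimal theta = efficiency

for o in range(X.shape[1]):
    print(f"unit {o}: efficiency {ccr_efficiency(o):.3f}")
```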

  10. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  11. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  12. Revaluing benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually keeping researchers and practitioners from studying … the perception of benchmarking systems as secondary and derivative, and instead studying benchmarking as constitutive of social relations and as an irredeemably social phenomenon. I have attempted to do so in this paper by treating benchmarking from a calculative-practice perspective, and describing how … organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking.

  13. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    … solutions to the problem have been proposed so far, including, for instance, evolutionary techniques, swarm intelligence, or ad hoc solutions. However, the large diversity of the solutions and the lack of a common benchmark made any comparative analysis of the different solutions extremely difficult …

  14. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures, and the performance impact of different architectural choices, is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data-intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  15. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge about the numerical errors' dependence on discretization parameters (e.g. the size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to OECD/CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report.

  16. FinPar: A Parallel Financial Benchmark

    DEFF Research Database (Denmark)

    Andreetta, Christian; Begot, Vivien; Berthold, Jost;

    2016-01-01

    … sensitive to the input dataset and therefore requires multiple code versions that are optimized differently, which also raises maintainability problems. This article presents three array-based applications from the financial domain that are suitable for GPGPU execution. Common benchmark-design practice has …

  17. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  18. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-strength …

  19. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking exercise on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As a result, relevant parameters describing the performance and costs of radiation protection were acquired, compiled, and depicted in figures in order to make these data comparable. (orig.)

  20. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  1. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese …

  2. Benchmarking Declarative Approximate Selection Predicates

    CERN Document Server

    Hassanzadeh, Oktie

    2009-01-01

    Declarative data quality has been an active research topic. The fundamental principle behind a declarative approach to data quality is the use of declarative statements to realize data quality primitives on top of any relational data source. A primary advantage of such an approach is the ease of use and integration with existing applications. Several similarity predicates have been proposed in the past for common quality primitives (approximate selections, joins, etc.) and have been fully expressed using declarative SQL statements. In this thesis, new similarity predicates are proposed along with their declarative realization, based on notions of probabilistic information retrieval. Then, full declarative specifications of previously proposed similarity predicates in the literature are presented, grouped into classes according to their primary characteristics. Finally, a thorough performance and accuracy study comparing a large number of similarity predicates for data cleaning operations is performed.
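
    As a toy illustration of what an approximate selection predicate looks like in practice (my own example, not code from the thesis, which realizes such predicates declaratively in SQL):

```python
# Token-based Jaccard similarity used as an approximate selection predicate:
# keep the rows whose similarity to the query exceeds a threshold.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

rows = ["Jon Smith", "John Smith", "Jane Doe"]  # toy relation
query, threshold = "john smith", 0.3
matches = [r for r in rows if jaccard(r, query) >= threshold]
print(matches)  # approximate selection -> ['Jon Smith', 'John Smith']
```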

  3. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and helping them evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the school's age and number of pupils as well as after-school activities into account. Benchmarks can be used to make green accounts and serve as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  4. Measuring Ice Sheet Height with ICESat-2

    Science.gov (United States)

    Walsh, K.; Smith, B.; Neumann, T.; Hancock, D.

    2015-12-01

    ICESat-2 is NASA's next-generation laser altimeter, designed to measure changes in ice sheet height and sea ice freeboard. Over the ice sheets, it will use a continuous repeat-track pointing strategy to ensure that it accurately measures elevation changes along a set of reference tracks. Over most of the area of Earth's ice sheets, ICESat-2 will provide coverage with a track-to-track spacing better than ~3 km. The onboard ATLAS instrument will use a photon-counting approach to provide a global geolocated photon point cloud, which is then converted into surface-specific elevation data sets. In this presentation, we will outline our strategy for taking the low-level photon point cloud and turning it into measurements posted at 20 m along-track for a set of pre-defined reference points by (1) selecting groups of photon events (PEs) around each along-track point, (2) refining the initial PE selection by fitting the selected PEs with an along-track segment model and eliminating outliers to the model, (3) applying histogram-based corrections to the surface height based on the residuals to the along-track segment model, (4) calculating error estimates based on the estimated relative contributions of signal and noise PEs to the observed PE count, and (5) determining the final location and surface height of the along-track segment. These measurements are then corrected for short-scale (100-200 m) across-track surface topography around the reference points to develop a time series of land ice heights. The resulting data products will allow us to measure ice sheet elevation change with a point-for-point accuracy of a few centimeters over Earth's ice sheets.
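
    Steps (1) and (2) amount to a windowed selection followed by an iteratively edited least-squares line fit. The sketch below shows that core idea on synthetic data; it is a simplification of my own, not the operational ATLAS processing code.

```python
# Select photon events near a reference point (20 m window), fit a linear
# along-track segment, and iteratively discard outliers to the fit.
import numpy as np

def fit_segment(x, h, x_ref, half_len=10.0, n_sigma=3.0, max_iter=5):
    sel = np.abs(x - x_ref) <= half_len          # step (1): selection window
    xs, hs = x[sel], h[sel]
    for _ in range(max_iter):                    # step (2): outlier editing
        A = np.c_[np.ones_like(xs), xs - x_ref]
        coef, *_ = np.linalg.lstsq(A, hs, rcond=None)
        resid = hs - A @ coef
        keep = np.abs(resid) <= n_sigma * resid.std()
        if keep.all():
            break
        xs, hs = xs[keep], hs[keep]
    return coef  # [height at x_ref, along-track slope]

# Synthetic photon cloud: a gently sloping surface plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 40, 500)
h = 100 + 0.02 * x + rng.normal(0, 0.1, 500)
print(fit_segment(x, h, x_ref=20.0))
```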

  5. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution of benchmarking and the possibilities for its application in the telecommunications sphere. It examines the essence of benchmarking by generalising the approaches of different scholars to the definition of this notion. To improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in the modern market economy, the mechanism of benchmarking, and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Generalising the existing experience of benchmarking application, the article classifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  6. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose.

  7. The GLAS Standard Data Products Specification-Level 1, Version 9

    Science.gov (United States)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document defines the Level-1 GLAS standard data products. This document addresses the data flow, interfaces, record and data formats associated with the GLAS Level 1 standard data products. GLAS Level 1 standard data products are composed of Level 1A and Level 1B data products. The term standard data products refers to those EOS instrument data that are routinely generated for public distribution. The National Snow and Ice Data Center (NSIDC) distributes these products. Each data product has a unique Product Identification code assigned by the Senior Project Scientist. GLAS Level 1A and Level 1B Data Products are composed from those Level 0 data that have been reformatted or transformed to corrected and calibrated data in physical units at the full instrument rate and resolution.

  8. The GLAS Standard Data Products Specification-Data Dictionary, Version 1.0. Volume 15

    Science.gov (United States)

    Lee, Jeffrey E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document contains the data dictionary for the GLAS standard data products. It details the parameters present on GLAS standard data products. Each parameter is defined with a short name, a long name, units on product, type of variable, a long description and the products that contain it. The term standard data products refers to those EOS instrument data that are routinely generated for public distribution. These products are distributed by the National Snow and Ice Data Center (NSIDC).

  9. Pre Managed Earnings Benchmarks and Earnings Management of Australian Firms

    Directory of Open Access Journals (Sweden)

    Subhrendu Rath

    2012-03-01

    This study investigates benchmark-beating behaviour and the circumstances under which managers inflate earnings to beat earnings benchmarks. We show that two benchmarks, positive earnings and positive earnings change, are associated with earnings manipulation. Using a sample of Australian firms from 2000 to 2006, we find that when the underlying earnings are negative or below the prior year's earnings, firms are more likely to use discretionary accruals to inflate earnings to beat benchmarks.

  10. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different methods …

  11. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the …

  12. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting, along with models for testing the performance of Modelica compilers on large systems of equations, are given. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.
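
    In the spirit of the Python scripting described above, a benchmark runner might simply time one compiler invocation per model and log the results. Everything in the sketch below (the model file names, the omc command-line call) is a placeholder of mine, not the suite's actual harness.

```python
# Toy benchmark runner: invoke a Modelica compiler once per model,
# record wall-clock time and exit status in a CSV file.
import subprocess, time, csv

models = ["ManyEquations_1k.mo", "ManyEquations_10k.mo"]  # hypothetical files

with open("results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["model", "seconds", "returncode"])
    for model in models:
        t0 = time.perf_counter()
        proc = subprocess.run(["omc", model], capture_output=True)  # assumed OpenModelica CLI
        writer.writerow([model, time.perf_counter() - t0, proc.returncode])
```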

  13. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone …

  14. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.323 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  15. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.343 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  16. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.213 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  17. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.373 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  18. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.163 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  19. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.203 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  20. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.293 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  1. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.223 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  2. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.233 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  3. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.113 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  4. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were …

  5. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    § 1952.353 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a "fully effective" enforcement program were required to …

  6. Benchmarking the True Random Number Generator of TPM Chips

    CERN Document Server

    Suciu, Alin

    2010-01-01

    A TPM (trusted platform module) is a chip present mostly on newer motherboards, and its primary function is to create, store and work with cryptographic keys. This dedicated chip can serve to authenticate other devices or to protect encryption keys used by various software applications. Among other features, it comes with a True Random Number Generator (TRNG) that can be used for cryptographic purposes. This random number generator consists of a state machine that mixes unpredictable data with the output of a one-way hash function. According to the specification it can be a good source of unpredictable random numbers even without a genuine source of hardware entropy. However, the specification recommends collecting entropy from any internal sources available, such as clock jitter or thermal noise in the chip itself, a feature that was implemented by most manufacturers. This paper will benchmark the random number generator of several TPM chips from two perspectives: the quality of the random bit sequences …
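
    Benchmarking the quality of such random bit sequences usually means statistical test batteries such as NIST SP 800-22. A minimal sketch of its monobit (frequency) test follows, with os.urandom standing in for TPM output purely for illustration (not the paper's actual setup):

```python
# NIST SP 800-22 monobit test: the proportion of ones in a random bit
# stream should be close to 1/2; a p-value below 0.01 signals failure.
import math, os

def monobit_pvalue(data: bytes) -> float:
    n = len(data) * 8
    ones = sum(bin(byte).count("1") for byte in data)
    s_obs = abs(2 * ones - n) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

sample = os.urandom(125000)  # 1,000,000 bits; stand-in for TRNG output
p = monobit_pvalue(sample)
print(f"p-value = {p:.4f} -> {'pass' if p >= 0.01 else 'fail'}")
```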

  7. Material Safety Data Sheets (MSDS)

    CERN Document Server

    Lalley, J

    About 250,000 Material Safety Data Sheets from the U.S. Government Department of Defense MSDS database, a mirror of data from siri.uvm.edu, MSDS sheets maintained by Cornell University Environmental Health and Safety, and other Cornell departments.

  8. Paired cost comparison, a benchmarking technique for identifying areas of cost improvement in environmental restoration projects and waste management activities

    International Nuclear Information System (INIS)

    This paper provides an overview of benchmarking and how the Department of Energy's Office of Environmental Restoration and Waste Management used benchmarking techniques, specifically the Paired Cost Comparison, to identify cost disparities and their causes. The paper includes a discussion of the project categories selected for comparison and the criteria used to select the projects. Results are presented and factors that contribute to cost differences are discussed. Also, conclusions and the application of the Paired Cost Comparison are presented

  9. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study developed e-learning indicators to be used as an e-learning benchmarking model for higher education institutions. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore the appropriateness of the e-learning indicators. Review of related literature included…

  10. Adaptive Cruise Control for a SMART Car: A Comparison Benchmark for MPC-PWA Control Methods

    NARCIS (Netherlands)

    Corona, D.; De Schutter, B.

    2008-01-01

    The design of an adaptive cruise controller for a SMART car, a type of small car, is proposed as a benchmark setup for several model predictive control methods for nonlinear and piecewise affine systems. Each of these methods has already been applied to specific case studies, different from method t…

  11. Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets

    CERN Document Server

    David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N

    Benchmark calculations on residue production using MCNPX 2.5.0. Calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to specific activities of 28 radionuclides in different places along the thick target measured in Dubna.

  12. Modelling the Antarctic Ice Sheet

    DEFF Research Database (Denmark)

    Pedersen, Jens Olaf Pepke; Holm, A.

    2015-01-01

    The Antarctic ice sheet is a major player in the Earth's climate system and is by far the largest depository of fresh water on the planet. Ice stored in the Antarctic ice sheet (AIS) contains enough water to raise sea level by about 58 m, and ice loss from Antarctica contributed significantly … The DCESS (Danish Center for Earth System Science) Antarctic Ice Sheet (DAIS) model (Shaffer 2014) is forced by reconstructed time series of Antarctic temperature, global sea level and ocean subsurface temperature over the last two glacial cycles. In this talk, a modelling study of the Antarctic ice sheet over most of the Cenozoic era using the DAIS model will be presented. G. Shaffer (2014) Formulation, calibration and validation of the DAIS model (version 1), a simple Antarctic ice sheet model sensitive to variations of sea level and ocean subsurface temperature, Geosci. Model Dev., 7, 1803-1818

  13. Review of the GMD Benchmark Event in TPL-007-1

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-21

    Los Alamos National Laboratory (LANL) examined the approaches suggested in NERC Standard TPL-007-1 for defining the geo-electric field for the Benchmark Geomagnetic Disturbance (GMD) Event, specifically: (1) estimating the 100-year exceedance geo-electric field magnitude; (2) the scaling of the GMD Benchmark Event to geomagnetic latitudes below 60 degrees north; and (3) the effect of uncertainties in earth conductivity data on the conversion from geomagnetic field to geo-electric field. This document summarizes the review and presents recommendations for consideration.

  14. Photovoltaics Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-02-01

    This fact sheet is an overview of the Photovoltaics (PV) subprogram at the U.S. Department of Energy SunShot Initiative. The U.S. Department of Energy (DOE)’s Solar Energy Technologies Office works with industry, academia, national laboratories, and other government agencies to advance solar PV, which is the direct conversion of sunlight into electricity by a semiconductor, in support of the goals of the SunShot Initiative. SunShot supports research and development to aggressively advance PV technology by improving efficiency and reliability and lowering manufacturing costs. SunShot’s PV portfolio spans work from early-stage solar cell research through technology commercialization, including work on materials, processes, and device structure and characterization techniques.

  15. Soft Costs Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-05-01

    This fact sheet is an overview of the systems integration subprogram at the U.S. Department of Energy SunShot Initiative. Soft costs can vary significantly as a result of a fragmented energy marketplace. In the U.S., there are 18,000 jurisdictions and 3,000 utilities with different rules and regulations for how to go solar. The same solar equipment may vary widely in its final installation price due to process and market variations across jurisdictions, creating barriers to rapid industry growth. SunShot supports the development of innovative solutions that enable communities to build their local economies and establish clean energy initiatives that meet their needs, while at the same time creating sustainable solar market conditions.

  16. Systems Integration Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.

  17. Texture and sheet forming

    Energy Technology Data Exchange (ETDEWEB)

    Canova, G.R.; Kocks, U.F.; Fressengeas, C.; Dudzinski, D.; Lequeu, Ph.; Sornberger, G.

    1987-01-01

    The classical Marciniak-Kuczynski (Defect) theory, which consists in calculating the behavior of an initial defect in the sheet, in the form of a thin groove, is applied together with a full-constraints or relaxed-constraints theory of polycrystal viscoplasticity. The purpose is to investigate the effect of the induced texture on the Forming Limit Diagram (FLD), as well as the effect of grain shape. An alternative, fast way of deriving FLDs is also proposed, using a perturbation method. Comparisons are made between the results obtained by the Defect and Perturbation theories, both for ideal fcc rolling texture components and for polycrystals. 13 refs., 7 figs.

  18. Vitamin and Mineral Supplement Fact Sheets

    Science.gov (United States)

  19. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), a high-order weighted-residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, has a small dispersion error, and uses a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. Conv3D is based on the immersed boundary method and is validated on a wide set of experimental …

  20. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
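
    STREAM's kernels are simple vector operations timed against the bytes they move. As a rough, Python-level illustration of the measurement idea (not the actual C/OpenMP benchmark, and without the processor pinning a NUMA study requires), here is a numpy rendition of the 'triad' kernel:

```python
# Python-level analogue of the STREAM "triad" kernel: a[i] = b[i] + s*c[i].
# The two-step numpy evaluation moves somewhat more data than the fused C
# loop, so the reported figure understates true hardware bandwidth.
import numpy as np, time

n = 10_000_000
b = np.random.rand(n); c = np.random.rand(n); a = np.empty(n)
s = 3.0

t0 = time.perf_counter()
np.multiply(c, s, out=a)   # a = s * c  (no temporaries)
np.add(a, b, out=a)        # a = a + b
dt = time.perf_counter() - t0

bytes_moved = 3 * n * 8    # nominal STREAM count: read b, read c, write a
print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.2f} GB/s")
```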

  1. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure performance, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  2. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information …

  3. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are unsuitable for the nonlinear benchmark buildings, as their vibration equations form a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is practical because the built model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  4. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  5. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
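
    Dynamic time warping, the mechanism named above, aligns two series that need not share a time grid. A minimal sketch of the distance computation follows (my own illustration; the paper couples it with Gaussian process regression, which is omitted here):

```python
# Classic O(n*m) dynamic-time-warping distance between two 1-D series.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two simulators reporting the same metric on different time grids
t1 = np.linspace(0, 10, 50); t2 = np.linspace(0, 10, 80)
print(dtw_distance(np.sin(t1), np.sin(t2)))  # small despite the grid mismatch
```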

  6. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  7. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 owing to the wide range of system variables. (author)

  8. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  9. Geophysical constraints on the dynamics and retreat of the Barents Sea ice sheet as a paleobenchmark for models of marine ice sheet deglaciation

    Science.gov (United States)

    Patton, Henry; Andreassen, Karin; Bjarnadóttir, Lilja R.; Dowdeswell, Julian A.; Winsborrow, Monica C. M.; Noormets, Riko; Polyak, Leonid; Auriac, Amandine; Hubbard, Alun

    2015-12-01

    Our understanding of processes relating to the retreat of marine-based ice sheets, such as the West Antarctic Ice Sheet and tidewater-terminating glaciers in Greenland today, is still limited. In particular, the role of ice stream instabilities and oceanographic dynamics in driving their collapse are poorly constrained beyond observational timescales. Over numerous glaciations during the Quaternary, a marine-based ice sheet has waxed and waned over the Barents Sea continental shelf, characterized by a number of ice streams that extended to the shelf edge and subsequently collapsed during periods of climate and ocean warming. Increasing availability of offshore and onshore geophysical data over the last decade has significantly enhanced our knowledge of the pattern and timing of retreat of this Barents Sea ice sheet (BSIS), particularly so from its Late Weichselian maximum extent. We present a review of existing geophysical constraints that detail the dynamic evolution of the BSIS through the last glacial cycle, providing numerical modelers and geophysical workers with a benchmark data set with which to tune ice sheet reconstructions and explore ice sheet sensitivities and drivers of dynamic behavior. Although constraining data are generally spatially sporadic across the Barents and Kara Seas, behaviors such as ice sheet thinning, major ice divide migration, asynchronous and rapid flow switching, and ice stream collapses are all evident. Further investigation into the drivers and mechanisms of such dynamics within this unique paleo-analogue is seen as a key priority for advancing our understanding of marine-based ice sheet deglaciations, both in the deep past and in the short-term future.

  10. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, against full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  11. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files.

  12. Results of the isotopic concentrations of VVER calculational burnup credit benchmark no. 2 (CB2)

    International Nuclear Information System (INIS)

    The characterization of irradiated fuel materials is becoming more important with the increasing use of nuclear energy in the world. The purpose of this document is to present the results of the nuclide concentrations calculated using the VVER calculational burnup credit benchmark No. 2 (CB2). The calculations were performed in the Nuclear Technology Center of Cuba. The CB2 benchmark specification, as the second phase of the VVER burnup credit benchmark, is summarized in [1]. The CB2 benchmark focuses on the VVER burnup credit study proposed at the '97 AER Symposium [2]. It should provide a comparison of the ability of various code systems and data libraries to predict VVER-440 spent fuel isotopes (isotopic concentrations) using depletion analysis. This phase of the benchmark calculations is still in progress; CB2 should be finished by summer 1999 and the evaluated results could be presented at the next AER Symposium. The obtained results are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point depletion code ORIGEN2 [3] was used for the calculation of the spent fuel concentrations. The depletion analysis was performed for VVER-440 irradiated fuel assemblies with an in-core irradiation time of 3 years, a burnup of 30000 MWd/tU, and an after-discharge cooling time of 0 and 1 year. This work also comprises the results obtained by other codes [4].

  13. OCB: A Generic Benchmark to Evaluate the Performances of Object-Oriented Database Systems

    CERN Document Server

    Darmont, Jérôme; Schneider, Michel

    1998-01-01

    We present in this paper a generic object-oriented benchmark (the Object Clustering Benchmark) that has been designed to evaluate the performances of clustering policies in object-oriented databases. OCB is generic because its sample database may be customized to fit the databases introduced by the main existing benchmarks (e.g., OO1). OCB's current form is clustering-oriented because of its clustering-oriented workload, but it can be easily adapted to other purposes. Lastly, OCB's code is compact and easily portable. OCB has been implemented in a real system (Texas, running on a Sun workstation), in order to test a specific clustering policy called DSTC. A few results concerning this test are presented.

  14. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    ' and the consultancy house's data stays confidential, the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks find the most efficient customers among a large and challenging group of agricultural customers with too much...... state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds....
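
    To make the general idea concrete, here is a toy additive secret-sharing sketch: each bank's input is split across servers so that no single server ever sees a plaintext value, and only the aggregate is opened. This is not the actual protocol or benchmarking computation from the paper; the inputs, share count, and modulus are invented for illustration.

    ```python
    import random

    PRIME = 2**61 - 1   # all arithmetic is done modulo a public prime

    def share(secret, n_servers=2):
        """Split a value into n additive shares; any n-1 shares reveal nothing."""
        shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    # Each bank secret-shares its customer's efficiency input across two servers.
    bank_inputs = [73, 88, 64]
    server0, server1 = zip(*(share(v) for v in bank_inputs))

    # Each server sums *its own* shares locally; it never sees a plaintext input.
    partial0, partial1 = sum(server0) % PRIME, sum(server1) % PRIME

    # Only the recombined aggregate (here, a sum feeding a benchmark score) is opened.
    print("aggregate =", (partial0 + partial1) % PRIME)   # -> 225
    ```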

  15. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  16. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  17. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of this progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  18. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the park's competitiveness. The topic of this thesis is both to identify good marketing strategies used by the Zhangjiajie national park, via benchmarking, and to provide some suggestions to Repovesi national park. The method used in t...

  19. Benchmarking Polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global shifts in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  20. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  1. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  2. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and / or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D1 as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  3. Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks

    Science.gov (United States)

    Saini, Subhash

    1997-01-01

    Compilers supporting High Performance Fortran (HPF) features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI) combinations will be compared, based on the latest NAS Parallel Benchmark results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC AlphaServer 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  4. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  5. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  6. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels (STREAM, HPL, matrix multiply (DGEMM), parallel matrix transpose (PTRANS), FFT, RandomAccess, and bandwidth/latency tests (b_eff)) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of data sets being a function of the largest HPL matrix for the tested system.
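
    To make the "memory access characteristics" point concrete, here is a toy analogue of the suite's STREAM triad kernel. The official HPC Challenge kernels are compiled C/Fortran codes, so this Python/numpy sketch only illustrates the access pattern and how a bandwidth figure is derived from it; the array size and trial count are arbitrary choices.

    ```python
    import time
    import numpy as np

    def stream_triad(n=10_000_000, trials=5, s=3.0):
        """Toy analogue of the STREAM 'triad' kernel: a[i] = b[i] + s * c[i]."""
        b, c = np.random.rand(n), np.random.rand(n)
        best = float("inf")
        for _ in range(trials):
            t0 = time.perf_counter()
            a = b + s * c                     # reads b and c, writes a
            best = min(best, time.perf_counter() - t0)
        gbytes = 3 * n * 8 / 1e9              # three arrays of 8-byte doubles touched
        return gbytes / best                  # sustained bandwidth in GB/s

    print(f"triad bandwidth ~= {stream_triad():.1f} GB/s")
    ```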

  7. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.

  8. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Benchmarking microprocessors provides a way for consumers to evaluate the performance of the processors. This is done by using either synthetic or real-world applications. There are a number of benchmarks that exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. In this paper an investigation of the various benchmarks available for evaluating microprocessors for embedded automotive applications will be performed. We will provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench and MiBench. A comparison of existing benchmarks will be given based on relevant characteristics of automotive applications, which will support a proper recommendation when benchmarking processors for automotive applications.

  9. Number & operations task & drill sheets

    CERN Document Server

    Reed, Nat

    2011-01-01

    For grades 6-8, our State Standards-based combined resource meets the number & operations concepts addressed by the NCTM standards and encourages the students to review the concepts in unique ways. The task sheets introduce the mathematical concepts to the students around a central problem taken from real-life experiences, while the drill sheets provide warm-up and timed practice questions for the students to strengthen their procedural proficiency skills. Included are problems involving place value, fractions, addition, subtraction and using money. The combined task & drill sheets offer spac

  10. Seeing graphene-based sheets

    Directory of Open Access Journals (Sweden)

    Jaemyung Kim

    2010-03-01

    Graphene-based sheets such as graphene, graphene oxide and reduced graphene oxide have stimulated great interest due to their promising electronic, mechanical and thermal properties. Microscopy imaging is indispensable for characterizing these single atomic layers, and oftentimes is the first measure of sample quality. This review provides an overview of current imaging techniques for graphene-based sheets and highlights a recently developed fluorescence quenching microscopy technique that allows high-throughput, high-contrast imaging of graphene-based sheets on arbitrary substrate and even in solution.

  11. Perforation of metal sheets

    DEFF Research Database (Denmark)

    Steenstrup, Jens Erik

    The main purposes of this project are: (1) development of a dynamic model for the piercing and perforation process; (2) analyses of the main parameters; (3) establishing demands for process improvements; (4) expansion of the existing parameter limits. The literature survey describes the process influence of ...... and a tool designed for punches with minimum length. Further, a systematic problem-solving procedure is established. This procedure includes simulation as an integrated part, necessary for problem detection and to predict a favourable solution. ...... simulation is focused on the sheet deformation. However, the effect on the tool and press is included. The process model is based on the upper bound analysis in order to predict the force progress and hole characteristics etc. Parameter analyses are divided into two groups, simulation and experimental tests. ...... The tests complement each other in order to reveal the dominant parameters, which are decisive for the final product. Crucial demands are established to enable the desired improvements, focused on expanding the existing parameter limits. The demands include a justified need regarding a high-speed press...

  12. BDGS: A Scalable Big Data Generator Suite in Big Data Benchmarking

    OpenAIRE

    Ming, Zijian; Luo, Chunjie; Gao, Wanling; Han, Rui; Yang, Qiang; Wang, Lei; Zhan, Jianfeng

    2014-01-01

    Data generation is a key issue in big data benchmarking that aims to generate application-specific data sets to meet the 4V requirements of big data. Specifically, big data generators need to generate scalable data (Volume) of different types (Variety) under controllable generation rates (Velocity) while keeping the important characteristics of raw data (Veracity). This gives rise to various new challenges about how we design generators efficiently and successfully. To date, most existing tec...

  13. State Fact Sheets on COPD

    Science.gov (United States)

  14. Energy information sheets, September 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets series was developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications as referenced at the end of each sheet.

  15. Seeing graphene-based sheets

    OpenAIRE

    Jaemyung Kim; Franklin Kim; Jiaxing Huang

    2010-01-01

    Graphene-based sheets such as graphene, graphene oxide and reduced graphene oxide have stimulated great interest due to their promising electronic, mechanical and thermal properties. Microscopy imaging is indispensable for characterizing these single atomic layers, and oftentimes is the first measure of sample quality. This review provides an overview of current imaging techniques for graphene-based sheets and highlights a recently developed fluorescence quenching microscopy technique that al...

  16. Energy information sheets, July 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The National Energy Information Center (NEIC), as part of its mission, provides energy information and referral assistance to Federal, State, and local governments, the academic community, business and industrial organizations, and the public. The Energy Information Sheets series was developed to provide general information on various aspects of fuel production, prices, consumption, and capability. Additional information on related subject matter can be found in other Energy Information Administration (EIA) publications as referenced at the end of each sheet.

  17. Nuclear knowledge management experience of the international criticality safety benchmark evaluation project

    International Nuclear Information System (INIS)

    accuracy and completeness of the descriptive information given in the evaluation by comparison with original documentation (published and unpublished). 2. The benchmark specification can be derived from the descriptive information given in the evaluation. 3. The completeness of the benchmark specification. 4. The results and conclusions. 5. Adherence to format. In addition, each evaluation undergoes an independent peer review by another working group member at a different facility. Starting with the evaluator's submittal in the appropriate format, the independent peer reviewer verifies: 1. That the benchmark specification can be derived from the descriptive information given in the evaluation. 2. The completeness of the benchmark specification. 3. The results and conclusions. 4. Adherence to format. A third review by the Working Group verifies that the benchmark specification and the conclusions are adequately supported. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Over 250 scientists from around the world have combined their efforts to produce this handbook, which currently spans over 30,000 pages and contains benchmark specifications for over 3350 critical configurations. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculation techniques and is expected to be a valuable tool for decades to come. The handbook is currently in use in 58 different countries. (author)

  18. Glaciological investigations on ice-sheet response in South Greenland

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, C.; Boeggild, C.E.; Podlech, S.; Olesen, O.B.; Ahlstroem, A.P. [Geological Survey of Denmark and Greenland, Copenhagen (Denmark); Krabill, W. [NASA Goddard Space Flight Center, Lab. for Hydrospheric Processes, Wallops Island, VA (United States)

    2002-07-01

    The reaction of the world's large ice sheets to global climate change is still in the focus of scientific debate. Recent investigations have shown pronounced thinning in the southern part of the Greenland ice sheet (Inland Ice). In order to investigate the cause of the observed thinning and to judge the sensitivity of this part of the ice sheet, a combined fieldwork, remote sensing and modelling project was designed. A glaciological transect was established in May 2001 on one of the main outlet glaciers in South Greenland, and the first data are now available. In addition, the history of the glacier variations during the last 40 years has been reconstructed. The conclusion of the investigations is that glacier variations in southern Greenland do not seem to be related only to climatic influence on the surface mass balance. Dynamic changes in ice-sheet geometry and basal conditions connected to climatic variations on a much longer timescale appear also to be significant. With the mass-balance transect in place, and also surveyed with laser altimetry, future changes in the study area can be mapped accurately and compared with the actual climatic and surface mass-balance data, which will allow discrimination between mass-balance effects and dynamic reactions. Climatic conditions during periods of recession of the past 40 years need to be investigated using climate data from nearby weather stations, or from climate proxy data. This compiled glacier history can be used as a benchmark for testing of the ice-dynamic model. (BA)

  19. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
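
    For readers unfamiliar with the rate-law formulation named above, the following is a generic dual-Monod sketch of the kind of terminal-electron-accepting-process rate used in such simulations. The functional form is standard, but every constant here is an invented placeholder rather than a calibrated value from the benchmark.

    ```python
    def dual_monod_rate(acetate, uvi, vmax=1e-8, K_ac=1e-4, K_u=5e-6):
        """Generic dual-Monod rate law for microbially mediated U(VI) reduction.

        rate = vmax * [Ac]/(K_ac + [Ac]) * [U(VI)]/(K_u + [U(VI)])
        Units and constants are illustrative, not the benchmark's calibrated values.
        """
        return vmax * acetate / (K_ac + acetate) * uvi / (K_u + uvi)

    # The rate saturates once acetate is well above its half-saturation constant K_ac:
    for ac in [1e-5, 1e-4, 1e-3, 1e-2]:
        print(f"[Ac]={ac:.0e} M -> rate={dual_monod_rate(ac, 1e-6):.2e} mol/L/s")
    ```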

  20. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  1. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: addition of a subroutine making it possible to calculate neutron sources, and exchange of the fission yields and branching ratios in the data library for those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared with those obtained from the more sophisticated code CASMO. (author)

  2. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  3. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich;

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...

  4. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... of an undergraduate business school education. This paper presents case analysis of the research-oriented participatory education curriculum developed at Copenhagen Business School because it appears uniquely suited, by a curious mix of Danish education tradition and deliberate innovation, to offer an educational...

  5. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  6. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy-quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  7. BALANCE-SHEET vs. ARBITRAGE CDOs

    Directory of Open Access Journals (Sweden)

    SILVIU EDUARD DINCA

    2016-02-01

    During the past few years, in the recent post-crisis aftermath, global asset managers have been constantly searching for new ways to optimize their investment portfolios, while financial and banking institutions around the world are exploring new alternatives to better secure their financing and refinancing demands together with the enhancement of their risk management capabilities. We exhibit herewith a comparison between balance-sheet and arbitrage CDO securitizations as financial markets-based funding, investment and risk mitigation techniques, highlighting certain key structuring and implementation specifics of each.

  8. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  9. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  10. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of an underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte Carlo methods, the successive orders of scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters both for reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al., among others. However, the case of elongated phase functions, such as for a cloud, and with a high angular resolution is presented here for the first time. Also, in contrast with other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  11. Benchmark analysis of KRITZ-2 critical experiments

    International Nuclear Information System (INIS)

    In the KRITZ-2 critical experiments, criticality and pin power distributions were measured at room temperature and high temperature (about 245 degC) for three different cores (KRITZ-2:1, KRITZ-2:13, KRITZ-2:19) loading slightly enriched UO2 or MOX fuels. Recently, international benchmark problems were provided by ORNL and OECD/NEA based on the KRITZ-2 experimental data. The published experimental data for the system with slightly enriched fuels at high temperature are rare in the world and they are valuable for nuclear data testing. Thus, the benchmark analysis was carried out with a continuous-energy Monte Carlo code MVP and its four nuclear data libraries based on JENDL-3.2, JENDL-3.3, JEF-2.2 and ENDF/B-VI.8. As a result, fairly good agreements with the experimental data were obtained with any libraries for the pin power distributions. However, the JENDL-3.3 and ENDF/B-VI.8 give under-prediction of criticality and too negative isothermal temperature coefficients for slightly enriched UO2 cores, although the older nuclear data JENDL-3.2 and JEF-2.2 give rather good agreements with the experimental data. From the detailed study with an infinite unit cell model, it was found that the differences among the results with different libraries are mainly due to the different fission cross section of U-235 in the energy range below 1.0 eV. (author)

  12. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and, more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
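
    For readers looking for the form of the law itself: in the authors' related work on escalation dynamics, the benchmark is a power-law "progress curve" for the timing of successive attacks. As our reading of that related literature (an assumption, not a quotation from this abstract), it can be written as

    \[ \tau_n = \tau_1 \, n^{-b} \]

    where \( \tau_n \) is the time interval between the (n-1)-th and the n-th attack by a given perpetrator, \( \tau_1 \) sets the initial pace, and the exponent b captures how quickly the interaction escalates (b > 0) or de-escalates (b < 0).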

  13. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing ethos,' which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation, Bachelor of Business Year 3 learners in a business strategy class explored, through a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  14. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  15. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  16. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under the circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  17. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no database currently provides a comprehensive built-in algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
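
    For readers unfamiliar with the interval-overlap predicate that such SQL algorithms are built on, here is a minimal self-contained sketch using SQLite; the table layout and names are illustrative and are not taken from RegMap.

    ```python
    # Minimal sketch of a SQL interval-overlap query for genomic regions.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE peaks (chrom TEXT, start INT, stop INT);
    CREATE TABLE genes (chrom TEXT, start INT, stop INT, name TEXT);
    CREATE INDEX idx_peaks ON peaks (chrom, start, stop);
    CREATE INDEX idx_genes ON genes (chrom, start, stop);
    """)
    con.executemany("INSERT INTO peaks VALUES (?,?,?)",
                    [("chr1", 100, 200), ("chr1", 500, 600)])
    con.executemany("INSERT INTO genes VALUES (?,?,?,?)",
                    [("chr1", 150, 400, "GENE_A"), ("chr1", 700, 900, "GENE_B")])

    # Two half-open regions overlap iff a.start < b.stop AND b.start < a.stop
    rows = con.execute("""
    SELECT p.chrom, p.start, p.stop, g.name
    FROM   peaks p JOIN genes g
    ON     p.chrom = g.chrom AND p.start < g.stop AND g.start < p.stop
    """).fetchall()
    print(rows)   # [('chr1', 100, 200, 'GENE_A')]
    ```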

  18. Tuning the mechanical properties of vertical graphene sheets through atomic layer deposition.

    Science.gov (United States)

    Davami, Keivan; Jiang, Yijie; Cortes, John; Lin, Chen; Shaygan, Mehrdad; Turner, Kevin T; Bargatin, Igor

    2016-04-15

    We report the fabrication and characterization of graphene nanostructures with mechanical properties that are tuned by conformal deposition of alumina. Vertical graphene (VG) sheets, also called carbon nanowalls (CNWs), were grown on copper foil substrates using a radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD) technique and conformally coated with different thicknesses of alumina (Al2O3) using atomic layer deposition (ALD). Nanoindentation was used to characterize the mechanical properties of pristine and alumina-coated VG sheets. Results show a significant increase in the effective Young's modulus of the VG sheets with increasing thickness of deposited alumina. Deposition of only a 5 nm thick alumina layer on the VG sheets nearly triples the effective Young's modulus of the VG structures. Both energy absorption and strain recovery were lower in VG sheets coated with alumina than in pure VG sheets (for the same peak force). This may be attributed to the increase in bending stiffness of the VG sheets and the creation of connections between the sheets after ALD deposition. These results demonstrate that the mechanical properties of VG sheets can be tuned over a wide range through conformal atomic layer deposition, facilitating the use of VG sheets in applications where specific mechanical properties are needed.

  19. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    OpenAIRE

    Adriana-Mihaela IONESCU; Cristina Elena BIGIOI

    2016-01-01

    Launching a new business or project leads managers to make decisions and choose strategies that will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organiza...

  20. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
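
    As a rough illustration of the solution technique named above (and only that: this is not the authors' parallel simplex-SA code, and the toy problem is invented), a penalty-based simulated-annealing loop looks like this:

    ```python
    # Simulated annealing with a quadratic exterior penalty for constraints.
    import math
    import random

    def objective(x):                    # toy objective (illustrative only)
        return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

    def violation(x):                    # constraint g(x) = x0 + x1 - 1 <= 0
        return max(0.0, x[0] + x[1] - 1.0)

    def penalized(x, mu=1e3):            # quadratic exterior penalty
        return objective(x) + mu * violation(x) ** 2

    def anneal(x0, t0=1.0, t_min=1e-4, cooling=0.95, steps=100):
        cur = list(x0)
        t = t0
        while t > t_min:
            for _ in range(steps):
                cand = [xi + random.gauss(0.0, t) for xi in cur]
                delta = penalized(cand) - penalized(cur)
                # Metropolis rule: always accept improvements, sometimes worse
                if delta < 0 or random.random() < math.exp(-delta / t):
                    cur = cand
            t *= cooling                 # geometric cooling schedule
        return cur

    print(anneal([0.0, 0.0]))            # converges near the constrained optimum (0.5, 0.5)
    ```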

  1. Indian Management Education and Benchmarking Practices: A Conceptual Framework

    OpenAIRE

    Dr. Dharmendra MEHTA; Er. Sunayana SONI; Dr. Naveen K MEHTA; Dr. Rajesh K MEHTA

    2015-01-01

    Benchmarking can be defined as a process through which practices are analyzed to provide a standard measurement (‘benchmark’) of effective performance within an organization (such as a university/institute). Benchmarking is also used to compare performance with other organizations and other sectors. As management education is passing through challenging times, a modern management tool like benchmarking is required to improve the quality of management education and to overcome the challen...

  2. The importance of an accurate benchmark choice: the spanish case

    OpenAIRE

    Ruiz Campo, Sofía; Monjas Barroso, Manuel

    2012-01-01

    The performance of a fund cannot be judged unless it is first measured, and measurement is not possible without an objective frame of reference. A benchmark serves as a reliable and consistent gauge of the multiple dimensions of performance: return, risk and correlation. The benchmark must be a fair target for investment managers and be representative of the relevant opportunity set. The objective of this paper is to analyse whether the different benchmarks generally used to me...

  3. BENCHMARKING FOR THE ROMANIAN HEAVY COMMERCIAL VEHICLES MARKET

    Directory of Open Access Journals (Sweden)

    Pop Nicolae Alexandru

    2014-07-01

    Full Text Available Globalization has led to a better integration of international markets for goods, services and capital, a fact that leads to significantly increased investment in regions with low labor costs and access to commercial routes. The development of international trade has imposed a continuous growth in the volumes of transported goods and the development of a transport system able to withstand the new pressures of cost, time and space. The solution for efficient transport is intermodal transportation relying on state-of-the-art technological platforms that integrate the advantages specific to each mode of transportation: flexibility for road transportation, high capacity for railway, low costs for sea, and speed for air transportation. Romania's integration into the pan-European transport system, together with the EU's enlargement towards the east, will shift Romania into a central position. The integrated governmental program of improving the intermodal infrastructure will ensure fast railway, road and air connections. For the Danube harbors and for the sea ports, EU grants and allowances will be used, thus increasing Romania's importance as one of Europe's logistical hubs. The present paper uses benchmarking, a management and strategic marketing tool, to evaluate the Romanian heavy commercial vehicles market within its European context. Benchmarking encourages change in a complex and dynamic context where a permanent solution cannot be found. The different results stimulate the use of benchmarking as a solution to reduce gaps. MAN's case study shows the dynamics of the players on the Romanian market for heavy commercial vehicles, when considering the strong growth of Romanian exported goods but with a modest internal demand, a limited but developing road infrastructure, and an unfavorable international economic context together with

  4. A new approach to nanoporous graphene sheets via rapid microwave-induced plasma for energy applications

    International Nuclear Information System (INIS)

    We developed a novel approach to the fabrication of three-dimensional, nanoporous graphene sheets featuring a high specific surface area of 734.9 m2 g−1 and an ultrahigh pore volume of 4.1 cm3 g−1 through a rapid microwave-induced plasma treatment. The sheets were used as electrodes for supercapacitors and for the oxygen reduction reaction (ORR) for fuel cells. Argon-plasma-grown sheets exhibited a 44% improvement in supercapacitive performance (203 F g−1) over the plasma-grown sheets (141 F g−1). N-doped sheets with Co3O4 showed an outstanding ORR activity, evidenced by the much smaller Tafel slope (42 mV/decade) than that of Pt/C (82 mV/decade), which is caused by the high electrical conductivity of the graphene sheets, the planar N species content and the nanoporous morphology. (paper)

  5. Benchmarking in national health service procurement in Scotland.

    Science.gov (United States)

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland. PMID:17958971

  6. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  7. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
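
    For concreteness, a stripped-down analogue of IMB's PingPong measurement can be written in a few lines with mpi4py (assumed available); the message sizes and repetition count here are arbitrary.

    ```python
    # Ping-pong latency/bandwidth probe; run with: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    reps = 1000

    for nbytes in (8, 1024, 1024 * 1024):
        buf = bytearray(nbytes)
        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1, tag=0)
                comm.Recv(buf, source=1, tag=0)
            elif rank == 1:
                comm.Recv(buf, source=0, tag=0)
                comm.Send(buf, dest=0, tag=0)
        dt = (MPI.Wtime() - t0) / (2 * reps)      # one-way time per message
        if rank == 0:
            print(f"{nbytes:>8} B: {dt * 1e6:8.2f} us, {nbytes / dt / 1e6:8.1f} MB/s")
    ```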

  8. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Full Text Available Nowadays, the effect of global warming is increasing drastically, which has led to growing interest in energy efficiency and sustainable production methods. In response to these adverse conditions, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) carry out many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced and sustainable manufacturing method "Incremental Sheet Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die, using different toolpath strategies and die sets. At the end of the study, the results were investigated with respect to the methods and parameters used. Keywords: incremental sheet metal forming, metal forming

  9. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    Science.gov (United States)

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is comparatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been established for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more efforts, and our COX Face DB is a good benchmark database for evaluation.

  10. Hydrostatic grounding line parameterization in ice sheet models

    Directory of Open Access Journals (Sweden)

    H. Seroussi

    2014-06-01

    Full Text Available Modeling of grounding line migration is essential to simulate accurately the behavior of marine ice sheets and investigate their stability. Here, we assess the sensitivity of numerical models to the parameterization of the grounding line position. We run the MISMIP3D benchmark experiments using a two-dimensional shelfy-stream approximation (SSA model with different mesh resolutions and different sub-element parameterizations of grounding line position. Results show that different grounding line parameterizations lead to different steady state grounding line positions as well as different retreat/advance rates. Our simulations explain why some vertically depth-averaged model simulations exhibited behaviors similar to full-Stokes models in the MISMIP3D benchmark, while the vast majority of simulations based on SSA showed results deviating significantly from full-Stokes results. The results reveal that differences between simulations performed with and without sub-element parameterization are as large as those performed with different approximations of the stress balance equations and that the reversibility test can be passed at much lower resolutions than the steady-state grounding line position. We conclude that fixed grid models that do not employ such a parameterization should be avoided, as they do not provide accurate estimates of grounding line dynamics, even at high spatial resolution. For models that include sub-element grounding line parameterization, a mesh resolution lower than 2 km should be employed.
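
    The flotation logic behind such parameterizations can be sketched in a few lines; the following 1-D example (synthetic geometry, not the MISMIP3D setup) contrasts a grid-resolution grounding-line estimate with a simple linear sub-element interpolation of the flotation criterion.

    ```python
    # Locate the grounding line from the hydrostatic flotation criterion.
    import numpy as np

    rho_i, rho_w = 917.0, 1028.0           # ice / sea water densities (kg m^-3)

    def flotation_thickness(bed):          # thickness at which ice just floats
        return -bed * rho_w / rho_i        # bed is negative below sea level

    x = np.linspace(0.0, 100e3, 51)        # 2 km grid spacing
    bed = -200.0 - 0.002 * x               # deepening bed (synthetic)
    H = 400.0 - 0.003 * x                  # ice thickness (synthetic)

    phi = H - flotation_thickness(bed)     # >0 grounded, <0 floating
    i = np.where(phi <= 0.0)[0][0]         # first floating node

    # Grid-resolution estimate: grounding line at the last grounded node
    gl_grid = x[i - 1]

    # Sub-element parameterization: interpolate phi to its zero crossing
    frac = phi[i - 1] / (phi[i - 1] - phi[i])
    gl_sub = x[i - 1] + frac * (x[i] - x[i - 1])
    print(f"grid: {gl_grid / 1e3:.1f} km, sub-element: {gl_sub / 1e3:.1f} km")
    ```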

  11. Uranium mining sites - Thematic sheets

    International Nuclear Information System (INIS)

    The first sheet presents commentary, data and key figures on uranium extraction in France: a general overview of the uranium mining sites, the status of waste rock and tailings after exploitation, and site rehabilitation. The second sheet addresses the sources of exposure to ionizing radiation associated with former uranium mining sites: a discussion of how to identify the sources associated with these sites, whether due to the mining activities themselves, to tailings, or to the transfer of radioactive substances into water and the contamination of sediments, and a description of the practice and assessment of radiological monitoring of mining sites. A third sheet addresses the radiological exposure of the public to waste rock and the dose assessment under various exposure scenarios: the main exposure pathways to be considered and the scenarios studied (walking on backfilled paths and grounds, staying in buildings built on waste rock, keeping mineralogical samples at home). The fourth sheet addresses IRSN research programmes on uranium and radon: epidemiological studies of mine workers (French and European cohorts), French and European studies on the risk of lung cancer associated with radon in housing, and studies of the biological effects of chronic exposure. The last sheet addresses studies and expert assessments performed by the IRSN on former uranium mining sites in France: studies commissioned by the public authorities, radioactivity monitoring studies performed by the IRSN at mining sites, and the IRSN's participation in actions promoting openness to civil society.

  12. Guide for waste profile sheets

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-09-15

    This waste profile sheet was prepared to help petroleum industry operators properly classify and handle industrial waste. The profile sheets comply with British Columbia (BC) occupational health and safety regulations. Waste sheets were provided for compressed gases; flammable and combustible materials; oxidizing materials; poisonous materials; toxic materials; biohazardous and infectious materials; corrosive materials; and dangerously reactive materials. The waste information sheets were divided into 4 sections: (1) general information, (2) hazard information, (3) management methods, and (4) transportation. Sheets were provided for absorbents and rags; acids; batteries; carbon-amine; carbon-glycol; flammable and self-heating carbon; metal catalysts; caustic materials; contaminated debris; desiccant materials; drill sump materials; filters; raw gas fluids; frac fluids; hydrotest fluids; incinerator ashes; lubricating oils; PCBs; pigging wax; various sludges; solvent residues; process water; and well workover fluids. Detailed information on the handling, storage and disposal of the wastes was provided, as well as information related to reportable releases, required labels and placards, and documentation related to hazardous waste shipment. tabs., figs.

  13. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have a great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  14. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    Directory of Open Access Journals (Sweden)

    João Rodrigues

    2015-05-01

    Full Text Available Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be considered, and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred over more cumbersome and marginally precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations and populations with thousands of spiking units. The forward model for EEG is the simple three-shell head model, while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation for Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with
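
    A minimal example of the simplest generative model class mentioned above, a bivariate AR process with a known directed influence, together with a Granger-causality check on the generated data (statsmodels assumed installed; the coefficients are arbitrary but stable):

    ```python
    # Simulate x -> y with a one-sample delay, then test for Granger causality.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 2000
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(2, n):
        x[t] = 0.55 * x[t - 1] - 0.30 * x[t - 2] + rng.normal()
        y[t] = 0.40 * y[t - 1] + 0.35 * x[t - 1] + rng.normal()  # delayed x -> y

    # Column order is (effect, cause): tests whether the 2nd column
    # Granger-causes the first.
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
    pval = res[2][0]["ssr_ftest"][1]
    print(f"p-value for x -> y at lag 2: {pval:.2e}")   # expect ~0
    ```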

  16. Benchmarking Big Data Systems and the BigData Top100 List.

    Science.gov (United States)

    Baru, Chaitanya; Bhandarkar, Milind; Nambiar, Raghunath; Poess, Meikel; Rabl, Tilmann

    2013-03-01

    "Big data" has become a major force of innovation across enterprises of all sizes. New platforms with increasingly more features for managing big datasets are being announced almost on a weekly basis. Yet, there is currently a lack of any means of comparability among such platforms. While the performance of traditional database systems is well understood and measured by long-established institutions such as the Transaction Processing Performance Council (TCP), there is neither a clear definition of the performance of big data systems nor a generally agreed upon metric for comparing these systems. In this article, we describe a community-based effort for defining a big data benchmark. Over the past year, a Big Data Benchmarking Community has become established in order to fill this void. The effort focuses on defining an end-to-end application-layer benchmark for measuring the performance of big data applications, with the ability to easily adapt the benchmark specification to evolving challenges in the big data space. This article describes the efforts that have been undertaken thus far toward the definition of a BigData Top100 List. While highlighting the major technical as well as organizational challenges, through this article, we also solicit community input into this process.

  17. Benchmarking farmer performance as an incentive for sustainable farming: environmental impacts of pesticides.

    Science.gov (United States)

    Kragten, S; De Snoo, G R

    2003-01-01

    Pesticide use in The Netherlands is very high, and pesticides are found across all environmental compartments. Among individual farmers, though, there is wide variation in both pesticide use and the potential environmental impact of that use, providing policy leverage for environmental protection. This paper reports on a benchmarking tool with which farmers can compare their environmental and economic performance with that of other farmers, thereby serving as an incentive for them to adopt more sustainable methods of food production. The tool is also designed to provide farmers with a more detailed picture of the environmental impacts of their methods of pest management. It is interactive and available on the internet: www.agriwijzer.nl. The present version has been developed specifically for arable farmers, but it is to be extended to encompass other agricultural sectors, in particular horticulture (bulb flowers, stem fruits), as well as various other aspects of sustainability (nutrient inputs, 'on-farm' biodiversity, etc.). The benchmarking methodology was tested on a pilot group of 20 arable farmers, whose general response was positive. They proved to be more interested in comparative performance in terms of economic rather than environmental indicators. In their judgment the benchmarking tool can serve a useful purpose in steering them towards more sustainable forms of agricultural production. PMID:15151309

  18. Tearing resistance of some co-polyester sheets

    International Nuclear Information System (INIS)

    A three-zone model consisting of initial, evolutionary and stabilised plastic zones for tearing resistance was proposed for polymer sheets. An analysis with the model, based on the essential work of fracture (EWF) approach, was demonstrated to be capable of predicting the specific total work of fracture along the tear path across all the plastic zones, although the accuracy of the specific essential work of fracture is subject to improvement. Photo-elastic images were used to identify the sizes and profiles of the plastic deformation. Fracture mode change during loading was described in relation to the three zones. Tearing fracture behaviour of extruded mono- and bi-layer sheets of different types of amorphous co-polyesters and different thicknesses was investigated. Thick material exhibited a higher specific total work of tear fracture than thin mono-layer sheet in the case of amorphous polyethylene terephthalate (PET). This finding was explained in terms of the plastic zone size formed along the tear path, i.e., thick material underwent larger plastic deformation than thin material. When PET and polyethylene terephthalate glycol (PETG) were laminated with each other, the specific total work of fracture of the bi-layer sheets was not noticeably improved over that of the constituent materials.
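
    For reference, the EWF approach mentioned above conventionally partitions the measured specific total work of fracture into an essential (surface) term and a plastic (volume) term; this is the standard textbook relation rather than a formula quoted from the article:

    ```latex
    % Standard EWF partition of the specific total work of fracture
    w_f = w_e + \beta \, w_p \, \ell
    ```

    Here w_f is the specific total work of fracture per unit ligament area, w_e the specific essential work of fracture, w_p the volumetric plastic work density, beta a plastic-zone shape factor and ℓ the ligament length; a linear fit of w_f against ℓ extrapolated to zero ligament length yields w_e.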

  19. LHC benchmarks from flavored gauge mediation

    Science.gov (United States)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y.

    2016-07-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  20. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...
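
    As one example of the kind of analytic reference such test problems rely on, the Bohm-Gross dispersion relation for Langmuir waves can be evaluated directly and compared against the (k, omega) spectrum measured from a simulation; all parameter values below are arbitrary.

    ```python
    # Evaluate the Bohm-Gross dispersion relation for a few wavenumbers.
    import numpy as np

    eps0 = 8.854e-12; e = 1.602e-19; me = 9.109e-31; kB = 1.381e-23

    n_e = 1e18          # electron density (m^-3), arbitrary test value
    T_e = 1e5           # electron temperature (K), arbitrary test value

    omega_pe = np.sqrt(n_e * e**2 / (eps0 * me))        # plasma frequency
    v_th2 = kB * T_e / me                               # thermal speed squared

    k = np.linspace(1e4, 1e6, 5)                        # wavenumbers (1/m)
    omega = np.sqrt(omega_pe**2 + 3.0 * k**2 * v_th2)   # Bohm-Gross relation
    for ki, wi in zip(k, omega):
        print(f"k = {ki:9.3e} 1/m -> omega = {wi:9.3e} rad/s")
    ```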

  1. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  2. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  3. SARNET benchmark on QUENCH-11. Final report

    International Nuclear Information System (INIS)

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments within the framework of the EC-supported LACOMERA program. It was designed to simulate failing pumps during a small-break LOCA or a station blackout with a late depressurization of the primary system, starting with boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle with a low water injection rate. The test is characterized by an interplay of thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is to compare experimental data with analytical results in order to assess the reliability of the code predictions for the different phases of the accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/-SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. The present work is based on histories of temperatures, hydrogen production and other important variables. In addition, axial profiles (above all of temperatures) at quench initiation and at the final time of 7000 s are presented. For most variables a mainstream of

  4. SARNET benchmark on QUENCH-11. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, A. [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. for Nuclear Research and Nuclear Energy; Drath, T. [Ruhr-Univ. Bochum (Germany). Energy Systems and Energy Economics; Duspiva, J. [Nuclear Research Inst., Rez (CZ). Dept. of Reactor Technology] (and others)

    2008-03-15

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments within the framework of the EC-supported LACOMERA program. It was designed to simulate failing pumps during a small-break LOCA or a station blackout with a late depressurization of the primary system, starting with boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle with a low water injection rate. The test is characterized by an interplay of thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is to compare experimental data with analytical results in order to assess the reliability of the code predictions for the different phases of the accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/-SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. The present work is based on histories of temperatures, hydrogen production and other important variables. In addition, axial profiles (above all of temperatures) at quench initiation and at the final time of 7000 s are presented. For most variables a mainstream of

  5. Development of an MPI benchmark program library

    International Nuclear Information System (INIS)

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale and high performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point communications and collective communications, and the performance of major communication patterns which are often found in application programs. In this report, the description of MBL and the performance analysis of MPI/SX measured on the SX-4 are presented. (author)
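
    In the same spirit as the collective measurements MBL covers (this is an illustrative sketch with mpi4py, not MBL itself), timing a collective such as Allreduce looks like:

    ```python
    # Time MPI_Allreduce for a few message sizes; run with: mpiexec -n 4 python allreduce.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    reps = 200

    for count in (1, 1024, 1024 * 256):            # doubles per call
        send = np.ones(count)
        recv = np.empty(count)
        comm.Barrier()                             # synchronize before timing
        t0 = MPI.Wtime()
        for _ in range(reps):
            comm.Allreduce(send, recv, op=MPI.SUM)
        dt = (MPI.Wtime() - t0) / reps
        if comm.Get_rank() == 0:
            print(f"Allreduce {count:>8} doubles: {dt * 1e6:10.2f} us/call")
    ```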

  6. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  7. Sheet Bending using Soft Tools

    Science.gov (United States)

    Sinke, J.

    2011-05-01

    Sheet bending is usually performed by air bending and V-die bending processes. Both processes apply rigid tools. These solid tools facilitate the generation of software for the numerical control of those processes. When the lower rigid die is replaced with a soft or rubber tool, the numerical control becomes much more difficult, since the soft tool deforms too. Compared to other bending processes, the rubber-backed bending process has some distinct advantages, like large radius-to-thickness ratios, applicability to materials with topcoats, well-defined radii, and the feasibility of forming details (ridges, beads). These advantages may give the process exclusive benefits over conventional bending processes, not only for industries related to mechanical engineering and sheet metal forming, but also for other disciplines like Architecture and Industrial Design. The largest disadvantage is that the soft (rubber) tool also deforms. Although the tool deformation is elastic and recovers after each process cycle, the applied force during bending is related to the deformation of the metal sheet and the deformation of the rubber. The deformation of the rubber interacts with the process but also with the sheet parameters. This makes the numerical control of the process much more complicated. This paper presents a model for the bending of sheet materials using a rubber lower die. This model can be implemented in software in order to control the bending process numerically. The model itself is based on numerical and experimental research. In this research a number of variables related to the tooling and the material have been evaluated. The numerical part of the research was used to investigate the influence of the features of the soft lower tool, like the hardness and dimensions, and the influence of the sheet thickness, which also interacts with the soft tool deformation. The experimental research was focused on the relation between the machine control parameters and the most
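
    For orientation, the conventional rigid-die air-bending force estimate that such models start from, before accounting for the rubber pad, is the textbook relation F = k * UTS * b * t^2 / W; the sketch below is hedged accordingly (the constant k and the example numbers are typical values, not the paper's).

    ```python
    # Conventional air-bending force estimate (rigid V-die); textbook relation,
    # not the rubber-die model developed in the paper.
    def air_bending_force(uts, thickness, width, die_opening, k=1.33):
        """Approximate press force in N.

        uts         -- ultimate tensile strength of the sheet (Pa)
        thickness   -- sheet thickness (m)
        width       -- bend length (m)
        die_opening -- V-die opening width (m)
        k           -- empirical die factor (~1.33 for W around 8*t)
        """
        return k * uts * width * thickness**2 / die_opening

    # 1.5 mm steel sheet, 400 MPa UTS, 1 m bend length, 12 mm die opening
    print(f"{air_bending_force(400e6, 1.5e-3, 1.0, 12e-3) / 1e3:.1f} kN")
    ```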

  8. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative. The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii the error in linear trend estimates and (iii traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
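
    The break-insertion step described above is easy to emulate; the following toy sketch (illustrative parameter values only, not the HOME benchmark generator) inserts random step-type breaks with normally distributed sizes into a clean simulated series.

    ```python
    # Insert random break-type inhomogeneities into a clean monthly series.
    import numpy as np

    rng = np.random.default_rng(42)
    n_months = 100 * 12                        # a century of monthly data
    clean = rng.normal(0.0, 1.0, n_months)     # stand-in for a homogeneous series

    n_breaks = 5                               # illustrative break frequency
    break_times = np.sort(rng.choice(n_months, n_breaks, replace=False))
    break_sizes = rng.normal(0.0, 0.8, n_breaks)   # normally distributed jumps

    inhomogeneous = clean.copy()
    for t, s in zip(break_times, break_sizes):
        inhomogeneous[t:] += s                 # step change from the break onward

    # A homogenization algorithm is then scored on how well it recovers `clean`
    rmse = np.sqrt(np.mean((inhomogeneous - clean) ** 2))
    print(f"RMSE introduced by breaks: {rmse:.2f}")
    ```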

  9. Ice sheet hydrology - a review

    Energy Technology Data Exchange (ETDEWEB)

    Jansson, Peter; Naeslund, Jens-Ove [Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden); Rodhe, Lars [Geological Survey of Sweden, Uppsala (Sweden)

    2007-03-15

    This report summarizes the theoretical knowledge on water flow in and beneath glaciers and ice sheets and how these theories are applied in models to simulate the hydrology of ice sheets. The purpose is to present the state of knowledge and, perhaps more importantly, identify the gaps in our understanding of ice sheet hydrology. Many general concepts in hydrology and hydraulics are applicable to water flow in glaciers. However, the unique situation of having the liquid phase flowing in conduits of the solid phase of the same material, water, is not a commonly occurring phenomenon. This situation means that the heat exchange between the phases and the resulting phase changes also have to be accounted for in the analysis. The fact that the solidus in the pressure-temperature dependent phase diagram of water has a negative slope provides further complications. Ice can thus melt or freeze from temperature variations, pressure variations, or variations in both. In order to provide details of the current understanding of water flow in conjunction with deforming ice and to provide understanding for the development of ideas and models, emphasis has been put on the mathematical treatments, which are reproduced in detail. Qualitative results corroborating theory or, perhaps more often, questioning the simplifications made in theory, are also given. The overarching problem with our knowledge of glacier hydrology is the gap between the local theories of processes and the general flow of water in glaciers and ice sheets. Water is often channelized in non-stationary conduits through the ice, features which, due to their minute size relative to the size of glaciers and ice sheets, are difficult to incorporate in spatially larger models. Since the dynamic response of ice sheets to global warming is becoming a key issue in, e.g., sea-level change studies, the problems of coupling between the hydrology of an ice sheet and its dynamics are steadily gaining interest. New work is emerging

  10. Ice sheet hydrology - a review

    International Nuclear Information System (INIS)

    This report summarizes the theoretical knowledge on water flow in and beneath glaciers and ice sheets and how these theories are applied in models to simulate the hydrology of ice sheets. The purpose is to present the state of knowledge and, perhaps more importantly, identify the gaps in our understanding of ice sheet hydrology. Many general concepts in hydrology and hydraulics are applicable to water flow in glaciers. However, the unique situation of having the liquid phase flowing in conduits of the solid phase of the same material, water, is not a commonly occurring phenomenon. This situation means that the heat exchange between the phases and the resulting phase changes also have to be accounted for in the analysis. The fact that the solidus in the pressure-temperature dependent phase diagram of water has a negative slope provides further complications. Ice can thus melt or freeze from temperature variations, pressure variations, or variations in both. In order to provide details of the current understanding of water flow in conjunction with deforming ice and to provide understanding for the development of ideas and models, emphasis has been put on the mathematical treatments, which are reproduced in detail. Qualitative results corroborating theory or, perhaps more often, questioning the simplifications made in theory, are also given. The overarching problem with our knowledge of glacier hydrology is the gap between the local theories of processes and the general flow of water in glaciers and ice sheets. Water is often channelized in non-stationary conduits through the ice, features which, due to their minute size relative to the size of glaciers and ice sheets, are difficult to incorporate in spatially larger models. Since the dynamic response of ice sheets to global warming is becoming a key issue in, e.g., sea-level change studies, the problems of coupling between the hydrology of an ice sheet and its dynamics are steadily gaining interest. New work is emerging

  11. Photovoltage versus microprobe sheet resistance measurements on ultrashallow structures

    DEFF Research Database (Denmark)

    Clarysse, T.; Moussa, A.; Parmentier, B.;

    2010-01-01

    Earlier work [T. Clarysse, Mater. Sci. Eng., B 114-115, 166 (2004); T. Clarysse, Mater. Res. Soc. Symp. Proc. 912, 197 (2006)] has shown that only a few contemporary tools are able to measure reliably (within the international technology roadmap for semiconductors specifications) sheet resistances ... on ultrashallow (sub-50-nm) chemical-vapor-deposited layers [T. Clarysse, Mater. Res. Soc. Symp. Proc. 912, 197 (2006)], especially in the presence of medium/highly doped underlying layers (representative for well/halo implants). Here the authors examine more closely the sheet resistance anomalies

  12. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and the boundary conditions that describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
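
    As a toy illustration of the information measure underlying this style of benchmarking (a plug-in histogram estimator; the study itself uses more careful estimators and large multi-site samples), mutual information between model predictions and observations can be estimated as follows.

    ```python
    # Plug-in (histogram) estimate of mutual information, in nats.
    import numpy as np

    def mutual_information(x, y, bins=20):
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()                      # joint distribution
        px = pxy.sum(axis=1, keepdims=True)        # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                               # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(1)
    obs = rng.normal(size=10_000)
    good_model = obs + 0.3 * rng.normal(size=obs.size)   # informative predictor
    poor_model = rng.normal(size=obs.size)               # uninformative predictor

    print(mutual_information(good_model, obs))   # high
    print(mutual_information(poor_model, obs))   # near zero
    ```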

  13. International benchmarking of telecommunications prices and price changes

    OpenAIRE

    Productivity Commission

    2002-01-01

    The report, a series of international benchmarking studies conducted by the Productivity Commission, compares Australian telecommunications prices, price changes and regulatory arrangements with those in nine other OECD countries, updating a similar study, International Benchmarking of Australian Telecommunications Services, released in March 1999.

  14. A timesharing benchmark on an IBM/370-168

    International Nuclear Information System (INIS)

    In preparation for a planned exchange of an IBM/370-158 CPU for a 168 CPU, a benchmark was run. Five different configurations were investigated using two types of benchmark scripts and the two timesharing operating systems, OS/VS2 Rel. 3 and TSS/370 Rel. 2. The measurements and their results are described. (orig.)

  15. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab.
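
    A back-of-envelope companion to this geometry: the uncollided component of the flux decays as exp(-Sigma_t * x), which makes clear why a 3-m slab is a deep-penetration problem dominated by scattered flux; Sigma_t below is a placeholder argument, not an evaluated iron cross section.

    ```python
    # Uncollided-flux attenuation through a slab (illustrative only).
    import math

    def uncollided_fraction(sigma_t_per_cm, depth_cm):
        """Fraction of source neutrons reaching a given depth uncollided."""
        return math.exp(-sigma_t_per_cm * depth_cm)

    # Example with an assumed total cross section of 0.2 cm^-1 over the 3 m slab;
    # the tiny result shows why scattered flux dominates deep-penetration doses.
    print(f"{uncollided_fraction(0.2, 300):.3e}")
    ```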

  16. A Competitive Benchmarking Study of Noncredit Program Administration.

    Science.gov (United States)

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  17. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which are lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  18. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  19. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    Science.gov (United States)

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  20. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected...

  1. 42 CFR 457.420 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 457.420 Section 457.420 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... State Plan Requirements: Coverage and Benefits § 457.420 Benchmark health benefits coverage....

  2. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how the benchmark strategy for comparing fractions was taught to fifth-graders in Taiwan. 26 fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in the use of the benchmark strategy when comparing fractions…

  3. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David;

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  4. Selecting indicators for international benchmarking of radiotherapy centres

    NARCIS (Netherlands)

    Lent, van W.A.M.; Beer, de R. D.; Triest, van B.; Harten, van W.H.

    2013-01-01

    Introduction: Benchmarking can be used to improve hospital performance. It is, however, not easy to develop a concise and meaningful set of indicators on aspects related to operations management. We developed an indicator set for managers and evaluated its use in an international benchmark of radiotherapy centres.

  5. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and ‘soft’ aspects.

  6. A Benchmark Evaluation of Fault Tolerant Wind Turbine Control Concepts

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2015-01-01

    A benchmark model for wind turbine fault detection and isolation, and fault-tolerant control (FTC), has previously been proposed. Based on this benchmark, an international competition on wind turbine FTC was announced. In this brief, the top three solutions from that competition are presented and evaluated. The analysis shows that all...

  7. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  8. Developing Benchmarks to Measure Teacher Candidates' Performance

    Science.gov (United States)

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  9. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  10. What Are the ACT College Readiness Benchmarks? Information Brief

    Science.gov (United States)

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  11. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  12. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  13. Benchmarking Mentoring Practices: A Case Study in Turkey

    Science.gov (United States)

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2010-01-01

    Throughout the world, standards have been developed for teaching in particular key learning areas. These standards also present benchmarks that can assist in measuring and comparing results from one year to the next. There appear to be no benchmarks for mentoring. An instrument devised to measure mentees' perceptions of their mentoring in primary…

  14. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  15. The Use of Educational Standards and Benchmarks in Indicator Publications

    Science.gov (United States)

    Thomas, Sally; Peng, Wen-Jung

    2004-01-01

    This paper examines the use of educational standards and benchmarks in international indicator and other relevant policy publications, particularly those originating in the UK. The authors first examine what is meant by educational standards and benchmarks and how these concepts are defined. Then, they address the use of standards and benchmarks…

  16. Benchmarking 232Th Evaluations with KBR and THOR Experiments

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The n+232Th evaluations from CENDL-3.1, ENDF/B-VII.0, JENDL-3.3 and JENDL-4.0 were tested with the KBR series and the THOR benchmark from the ICSBEP Handbook. THOR is a Plutonium-Metal-Fast (PMF) criticality benchmark reflected with metallic thorium.

  17. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different ty

  18. A Protein Classification Benchmark collection for machine learning

    NARCIS (Netherlands)

    Sonego, P.; Pacurar, M.; Dhir, S.; Kertész-Farkas, A.; Kocsor, A.; Gáspári, Z.; Leunissen, J.A.M.; Pongor, S.

    2007-01-01

    Protein classification by machine learning algorithms is now widely used in structural and functional annotation of proteins. The Protein Classification Benchmark collection (http://hydra.icgeb.trieste.it/benchmark) was created in order to provide standard datasets on which the performance of machine learning algorithms can be compared.

  19. Logistics Cost Modeling in Strategic Benchmarking Project : cases: CEL Consulting & Cement Group A

    OpenAIRE

    Nguyen Cong Minh, Vu

    2010-01-01

    This thesis deals with logistics cost modeling for a benchmarking project, delivered as a consulting service by CEL Consulting for Cement Group A. The project aims at providing a comparison of the flows and costs of bagged cement from all cement players to the relevant markets in Indonesia. The results of the project yielded strategic elements for Cement Group A in planning their penetration strategy with new investments. Due to the specific needs, Cement Group A requested a flexible costing approach taking into ...

  20. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, an evaluation of the practices carried out in the field of waste management at the Spanish nuclear power plants has been performed following the Benchmarking methodology. This process has allowed the identification of aspects for improving waste treatment processes, reducing the volume of waste, reducing management costs, and establishing management routes for those waste streams that do not yet have one. (Author)

  1. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
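
    As a flavor of what such point-to-point measurements look like, the sketch below times a simple ping-pong exchange, in the spirit of (but far simpler than) the SKaMPI suite. It is an illustration using mpi4py, not SKaMPI code; the message size, repetition count, and script name are arbitrary assumptions.

```python
# pingpong.py -- minimal MPI latency sketch (run: mpiexec -n 2 python pingpong.py)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

REPS = 1000
buf = np.zeros(1024, dtype=np.uint8)   # 1 KiB message

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(REPS):
    if rank == 0:
        comm.Send(buf, dest=1)   # ping
        comm.Recv(buf, source=1) # pong
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    # One-way latency estimate: half the round-trip time per repetition.
    print(f"~{(t1 - t0) / (2 * REPS) * 1e6:.2f} us per message")
```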

  2. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time, as can be found under real-world islands. An error analysis yielded appropriate spatial and temporal discretizations of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  3. Quantum benchmarks for pure single-mode Gaussian states.

    Science.gov (United States)

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875

  4. Intra and inter-organizational learning from benchmarking IS services

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Kræmmergaard, Pernille; Hansen, Bettina

    2016-01-01

    This paper reports a case study of benchmarking IS services in Danish municipalities. Drawing on Holmqvist’s (2004) organizational learning model of exploration and exploitation, the paper explores intra and inter-organizational learning dynamics among Danish municipalities that have been involved in benchmarking their IS services and functions since 2006. Particularly, this research tackled existing IS benchmarking approaches and methods by turning to a learning-oriented perspective and by empirically exploring the dynamic process of intra and inter-organizational learning from benchmarking IS/IT services. The paper also makes a contribution by emphasizing the importance of informal cross-municipality consortiums to facilitate learning and experience sharing across municipalities. The findings of the case study demonstrated that the IS benchmarking scheme is relatively successful in sharing good practices...

  5. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    Science.gov (United States)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code--the reactive transport codes play a supporting role in this regard—but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  6. Off-Balance Sheet Financing.

    Science.gov (United States)

    Adams, Matthew C.

    1998-01-01

    Examines off-balance sheet financing, facilities' use of outsourcing for selected needs, as a means of saving operational costs and using facility assets efficiently. Examples of using outside sources for energy supply and food services, as well as partnering with business for facility expansion, are provided. Concluding comments address tax…

  7. Learning from Balance Sheet Visualization

    Science.gov (United States)

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  8. Microchemistry in aluminium sheet production

    NARCIS (Netherlands)

    Lok, Z.J.

    2005-01-01

    The production of aluminium sheet alloys from as-cast ingots is a complex process, involving several rolling operations in combination with various thermal heat treatments. Through their influence on the alloy microchemistry and microstructure, these thermomechanical treatments are all aimed at cont

  9. Fact Sheets on Institutional Racism.

    Science.gov (United States)

    Foundation for Change, Inc., New York, NY.

    This fact sheet on institutional racism contains statistics on white control of the economy, health, housing, education, the media, and government. It also shows the oppression of minorities in these areas. The areas of wealth, the stock exchange, business, banks, unions, poverty, and unemployment, are discussed in terms of economy. Health matters…

  10. Magnetic Resonance Facility (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-03-01

    This fact sheet provides information about Magnetic Resonance Facility capabilities and applications at NREL's National Bioenergy Center. Liquid and solid-state analysis capability for a variety of biomass, photovoltaic, and materials characterization applications across NREL. NREL scientists analyze solid and liquid samples on three nuclear magnetic resonance (NMR) spectrometers as well as an electron paramagnetic resonance (EPR) spectrometer.

  11. Orientational ordering in crumpled elastic sheets

    OpenAIRE

    Cambou, Anne Dominique; Menon, Narayanan

    2014-01-01

    We report an experimental study of the development of orientational order in a crumpled sheet, with a particular focus on the role played by the geometry of confinement. Our experiments are performed on elastomeric sheets immersed in a fluid, so that the effects of plasticity and friction are suppressed. When the sheet is crumpled either axially or radially within a cylinder, we find that the sheet aligns with the flat or the curved wall, depending on the aspect ratio of the cylinder. Nematic...

  12. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute-bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
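
    The byte/flop argument above is a back-of-the-envelope roofline comparison, which can be made concrete in a few lines. In the sketch below, only the 0.2 machine ratio and the ~0.01 FMM figure come from the abstract; the other numbers are illustrative guesses, not measured values.

```python
# Roofline-style check: an algorithm stays compute-bound while its
# bytes-per-flop requirement is below the machine's byte/flop ratio.
machine_byte_per_flop = 0.2   # Xeon Phi / Kepler / BlueGene/Q class (abstract)
algorithms = {
    "FMM":  0.01,   # from the abstract
    "GEMM": 0.05,   # illustrative guess
    "FFT":  0.5,    # illustrative guess
}
for name, need in algorithms.items():
    verdict = "compute-bound" if need <= machine_byte_per_flop else "memory-bound"
    print(f"{name:4s}: needs {need:5.2f} B/flop -> {verdict}")
```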

  13. 21 CFR 878.4025 - Silicone sheeting.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Silicone sheeting. 878.4025 Section 878.4025 Food... DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4025 Silicone sheeting. (a) Identification. Silicone sheeting is intended for use in the management of closed...

  14. Global dynamics of the Antarctic ice sheet

    NARCIS (Netherlands)

    Oerlemans, J.

    2002-01-01

    The total mass budget of the Antarctic ice sheet is studied with a simple axi-symmetrical model. The ice-sheet has a parabolic profile resting on a bed that slopes linearly downwards from the centre of the ice sheet into the ocean. The mean ice velocity at the grounding line is assumed to be proport

  15. Quantitative consistency testing of thermal benchmark lattice experiments

    International Nuclear Information System (INIS)

    The paper sets forth a general method to demonstrate the quantitative consistency (or inconsistency) of results of thermal reactor lattice experiments. The method is of particular importance in selecting standard "benchmark" experiments for comparison testing of lattice analysis codes and neutron cross sections. "Benchmark" thermal lattice experiments are currently selected by consensus, which usually means the experiment is geometrically simple, well-documented, reasonably complete, and qualitatively consistent. A literature search has not revealed any general quantitative test that has been applied to experimental results to demonstrate consistency, although some experiments must have been subjected to some form or other of quantitative test. The consistency method is based on a two-group neutron balance condition that is capable of revealing the quantitative consistency (or inconsistency) of reported thermal benchmark lattice integral parameters. This equation is used in conjunction with a second equation in the following discussion to assess the consistency (or inconsistency) of: (1) several Cross Section Evaluation Working Group (CSEWG) defined thermal benchmark lattices, (2) SRL experiments on the Mark 5R and Mark 15 lattices, and (3) several D2O lattices encountered as proposed thermal benchmark lattices. Nineteen thermal benchmark lattice experiments were subjected to a quantitative test of consistency between the reported experimental integral parameters. Results of this testing showed only two lattice experiments to be generally useful as "benchmarks," three lattice experiments to be of limited usefulness, three lattice experiments to be potentially useful, and 11 lattice experiments to be not useful. These results are tabulated with the lattices identified

  16. Manufactured solutions and the numerical verification of isothermal, nonlinear, three-dimensional Stokes ice-sheet models

    Directory of Open Access Journals (Sweden)

    W. Leng

    2012-07-01

    The technique of manufactured solutions is used for verification of computational models in many fields. In this paper we construct manufactured solutions for models of three-dimensional, isothermal, nonlinear Stokes flow in glaciers and ice sheets. The solution construction procedure starts with kinematic boundary conditions and is mainly based on the solution of a first-order partial differential equation for the ice velocity that satisfies the incompressibility condition. The manufactured solutions depend on the geometry of the ice sheet and other model parameters. Initial conditions are taken from the periodic geometry of a standard problem of the ISMIP-HOM benchmark tests and altered through the manufactured solution procedure to generate an analytic solution for the time-dependent flow problem. We then use this manufactured solution to verify a parallel, high-order accurate, finite element Stokes ice-sheet model. Results from the computational model show excellent agreement with the manufactured analytic solutions.
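
    The manufactured-solutions workflow itself is simple to demonstrate on a model problem far humbler than Stokes flow: choose an exact solution, derive the forcing analytically, feed it to the solver, and confirm the expected convergence order. The sketch below applies the idea to a 1-D Poisson problem; it is an illustration of the technique only, not the paper's ice-sheet code, and all names are hypothetical.

```python
# Method of manufactured solutions on -u'' = f, u(0)=u(1)=0 (illustrative).
import numpy as np

def max_error(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    u_exact = np.sin(np.pi * x)        # manufactured solution
    f = np.pi**2 * np.sin(np.pi * x)   # forcing derived analytically: f = -u''
    # Second-order finite-difference matrix (dense here for brevity).
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, f)
    return np.abs(u - u_exact).max()

# Halving h should divide the max error by ~4 for a second-order scheme.
for n in (32, 64, 128):
    print(n, max_error(n))
```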

  17. Benchmarking risk management within the international water utility sector. Part I: Design of a capability maturity methodology.

    OpenAIRE

    MacGillivray, Brian H.; Sharp, J. V.; Strutt, J.E.; Hamilton, Paul D.; Pollard, Simon J. T.

    2007-01-01

    Risk management in the water utility sector is becoming increasingly explicit. However, due to the novelty and complexity of the discipline, utilities are encountering difficulties in defining and institutionalising their risk management processes. In response, the authors have developed a sector-specific capability maturity methodology for benchmarking and improving risk management. The research, conducted in consultation with water utility practitioners, has distilled risk...

  18. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    Science.gov (United States)

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…

  19. LHC Benchmarks from Flavored Gauge Mediation

    CERN Document Server

    Ierushalmi, N; Lee, G; Nepomnyashy, V; Shadmi, Y

    2016-01-01

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark - stop, scharm, or sup - is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. We find that, even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  20. MC benchmarks for GERDA LAr veto designs

    International Nuclear Information System (INIS)

    The Germanium Detector Array (GERDA) experiment is designed to search for neutrinoless double-beta decay of 76Ge and is able to directly test the present claim by part of the Heidelberg-Moscow Collaboration. The experiment recently started its first physics phase with eight enriched detectors, after a 17-month-long commissioning period. GERDA operates an array of HPGe detectors in liquid argon (LAr), which acts both as a shield against external backgrounds and as a cryogenic coolant. Furthermore, the LAr can be instrumented and thereby used as an active veto against background events through the detection of the scintillation light it produces. In this talk, Monte Carlo studies for benchmarking and optimizing different LAr veto designs are presented. LAr scintillates at 128 nm which, combined with the cryogenic temperature at which the detector is operated and the optical properties of the medium, poses many challenges in the design of an efficient veto that would help the experiment reduce the total background level by one order of magnitude, as is the goal for the second physics phase of the experiment.

  1. KENO-IV code benchmark calculation, (4)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multi-group constants library MGCL. The present paper describes the results of a test using criticality experiments on a slab-cylinder system of uranium nitrate solution. In all, 128 experimental cases have been calculated for the slab-cylinder configuration with and without a plexiglass reflector, covering various critical parameters such as the number of cylinders and the height of the uranium nitrate solution. Among several important results, it is shown that the code and library give a fairly good multiplication factor, i.e., k-eff ≈ 1.0, for heavily reflected cases, whereas k-eff ≈ 0.91 for the unreflected ones. This suggests the necessity of a more advanced treatment of the criticality calculation for systems from which neutrons can easily leak out during the slowing-down process. (author)

  2. One dimensional benchmark calculations using diffusion theory

    International Nuclear Information System (INIS)

    This is a comparative study using different one-dimensional diffusion codes available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries using the 'forward elimination - backward substitution' technique. DIFFUSE calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes, and is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added, and some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version, and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good for results such as k-eff and the flux distributions in the two cases studied. (author)
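
    The 'forward elimination - backward substitution' technique mentioned above is the standard tridiagonal (Thomas) solve that arises when the 1-D diffusion equation is discretized. The sketch below is a generic illustration of that solver, not code from DIFFUSE or BBD; in a one-group slab problem the diagonals a, b, c would hold the coupling and removal terms of the discretized diffusion operator and d the source.

```python
# Thomas algorithm for a tridiagonal system A u = d (illustrative).
import numpy as np

def thomas(a, b, c, d):
    """a: sub-diagonal, b: main diagonal, c: super-diagonal (a[0], c[-1] unused)."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    u = np.zeros(n)
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```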

  3. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  4. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High-quality precipitation data are vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, the science of measuring and recording rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  5. Effect of noise correlations on randomized benchmarking

    Science.gov (United States)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
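
    For Markovian noise, the familiar RB signature is a survival probability that decays as A·p^m + B with sequence length m; the correlated-noise analysis above predicts departures from this simple picture (Γ-distributed outcomes, excess variance, skew toward high fidelity). The sketch below is a toy illustration of the Markovian baseline fit only, with invented numbers, not the paper's calculation.

```python
# Fit the standard RB decay A * p**m + B to noisy synthetic data (illustrative).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
p_true = 0.98                              # depolarizing parameter per gate
m = np.array([2, 4, 8, 16, 32, 64, 128])   # sequence lengths
surv = 0.5 * p_true**m + 0.5               # ideal Markovian average
surv += rng.normal(0, 0.005, m.size)       # finite-sampling scatter

model = lambda m, p, A, B: A * p**m + B
(p_fit, A_fit, B_fit), _ = curve_fit(model, m, surv, p0=(0.9, 0.5, 0.5))

# Single-qubit average error per gate: r = (1 - p) / 2.
print(f"p = {p_fit:.4f}, error per gate ~ {(1 - p_fit) / 2:.2e}")
```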

  6. OECD/NEA International Benchmark exercises: Validation of CFD codes applied nuclear industry; OECD/NEA internatiion Benchmark exercices: La validacion de los codigos CFD aplicados a la industria nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, owing among other factors to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. The International Benchmark Exercises (IBE) sponsored by the OECD/NEA have therefore been fundamental for analyzing the use of CFD codes in the nuclear industry because, although these codes are mature in many fields, doubts still exist about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all the benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of that participation in the various IBEs is presented, describing each benchmark, the CFD model created for it, and the main conclusions. (Author)

  7. Coupled models and parallel simulations for three-dimensional full-Stokes ice sheet modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Huai [Graduate University of Chinese Academy of Sciences; Ju, Lili [University of South Carolina; Gunzburger, Max [Florida State University; Ringler, Todd [Los Alamos National Laboratory; Price, Stephen [Los Alamos National Laboratory

    2011-01-01

    A three-dimensional full-Stokes computational model is considered for determining the dynamics, temperature, and thickness of ice sheets. The governing thermomechanical equations consist of the three-dimensional full-Stokes system with nonlinear rheology for the momentum, an advection-diffusion energy equation for temperature evolution, and a mass conservation equation for ice-thickness changes. Here, we discuss the variable-resolution meshes, the finite element discretizations, and the parallel algorithms employed by the model components. The solvers are integrated through a well-designed coupler for the exchange of parametric data between components. The discretization utilizes high-quality, variable-resolution centroidal Voronoi Delaunay triangulation meshing and existing parallel solvers. We demonstrate the gridding technology, discretization schemes, and the efficiency and scalability of the parallel solvers through computational experiments using both simplified geometries arising from benchmark test problems and a realistic Greenland ice sheet geometry.

  8. SPRINGBACK SIMULATION AND ANALYSIS OF STRONG ANISOTROPIC SHEET METALS IN U-CHANNEL BENDING PROCESS

    Institute of Scientific and Technical Information of China (English)

    柳玉起; 胡平; 王锦程

    2002-01-01

    The springback phenomenon of strongly anisotropic sheet metals in U-channel bending as well as deep drawing is numerically studied in detail using an updated Lagrangian FEM based on the virtual work-rate principle, Kirchhoff shell element models, and the Barlat-Lian planar anisotropic yield function. Simulation results are compared with a benchmark test, and very good agreement is obtained between numerical and test results. The focus of the present study is the numerical simulation of the springback characteristics of strongly anisotropic sheet metals after unloading. The effects of the planar anisotropy coefficients and the yield-function exponent in the B-L yield function on the springback characteristics are discussed in detail. Some conclusions are given.

  9. Geometry of thin liquid sheet flows

    Science.gov (United States)

    Chubb, Donald L.; Calfo, Frederick D.; Mcconley, Marc W.; Mcmaster, Matthew S.; Afjeh, Abdollah A.

    1994-01-01

    Incompressible, thin sheet flows have been of research interest for many years. Those studies were mainly concerned with the stability of the flow in a surrounding gas. Squire was the first to carry out a linear, inviscid stability analysis of sheet flow in air and compare the results with experiment. Dombrowski and Fraser did an experimental study of the disintegration of sheet flows using several viscous liquids; they also detected the formation of holes in their sheet flows. Hagerty and Shea carried out an inviscid stability analysis and calculated growth rates, which they compared with experimental values. Taylor studied extensively the stability of thin liquid sheets both theoretically and experimentally; he showed that thin sheets in a vacuum are stable. Brown experimentally investigated thin liquid sheet flows as a method of application of thin films. Clark and Dombrowski carried out a second-order stability analysis for inviscid sheet flows. Lin introduced viscosity into the linear stability analysis of thin sheet flows in a vacuum. Mansour and Chigier conducted an experimental study of the breakup of a sheet flow surrounded by high-speed air. Lin et al. did a linear stability analysis that included viscosity and a surrounding gas. Rangel and Sirignano carried out both linear and nonlinear inviscid stability analyses that apply for any density ratio between the sheet liquid and the surrounding gas. Now there is renewed interest in sheet flows because of their possible application as low-mass radiating surfaces. The objective of this study is to investigate the fluid dynamics of sheet flows that are of interest for a space radiator system. Analytical expressions that govern the sheet geometry are compared with experimental results. Since a space radiator will operate in a vacuum, the analysis does not include any drag force on the sheet flow.

  10. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III

  11. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment

    1996-06-01

    This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no single guidance for screening benchmarks, a set of alternative benchmarks is presented herein for chemicals that have been detected on the Oak Ridge Reservation. The report also presents the data used to calculate the benchmarks and the sources of those data, compares the benchmarks, and discusses their relative conservatism and utility. In this revision, benchmark values are updated where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and more complete documentation of the sources and derivation of all values is provided.

  12. Accelerating progress in Artificial General Intelligence: Choosing a benchmark for natural world interaction

    Science.gov (United States)

    Rohrer, Brandon

    2010-12-01

    Measuring progress in the field of Artificial General Intelligence (AGI) can be difficult without commonly accepted methods of evaluation. An AGI benchmark would allow evaluation and comparison of the many computational intelligence algorithms that have been developed. In this paper I propose that a benchmark for natural world interaction would possess seven key characteristics: fitness, breadth, specificity, low cost, simplicity, range, and task focus. I also outline two benchmark examples that meet most of these criteria. In the first, the direction task, a human coach directs a machine to perform a novel task in an unfamiliar environment. The direction task is extremely broad, but may be idealistic. In the second, the AGI battery, AGI candidates are evaluated based on their performance on a collection of more specific tasks. The AGI battery is designed to be appropriate to the capabilities of currently existing systems. Both the direction task and the AGI battery would require further definition before implementing. The paper concludes with a description of a task that might be included in the AGI battery: the search and retrieve task.

  13. Benchmark and gap analysis of current mask carriers vs future requirements: example of the carrier contamination

    Science.gov (United States)

    Fontaine, H.; Davenet, M.; Cheung, D.; Hoellein, I.; Richsteiger, P.; Dejaune, P.; Torsy, A.

    2007-02-01

    In the frame of the European Medea+ 2T302 MUSCLE project, an extensive mask-carrier benchmark was carried out in order to evaluate whether existing containers answer the needs of the 65nm technology. Ten different containers, currently used or expected in the future all along the mask supply chain (blank, maskhouse and fab carriers), were selected at different steps of their life cycle (new, aged, aged & cleaned). The most critical parameters identified for analysis versus future technologies were: automation, particle contamination, chemical contamination (organic outgassing, ionic contamination), cleanability, ESD, airtightness and purgeability. Experimental protocols corresponding to suitable methods were then developed and implemented to test each criterion. The benchmark results are presented, giving a "state of the art" of currently available mask carriers and allowing a gap analysis of the tested parameters relative to future needs. This approach is detailed through the particular case of carrier contamination measurements. Finally, this benchmark / gap analysis leads to proposed mask-carrier specifications (and the associated test protocols) for various key parameters, which can also be taken as guidelines for standardization for the 65nm technology. It also indicates that none of the tested carriers fulfills all the proposed specifications.

  14. Benchmarking: retos y riesgos para el ingeniero industrial

    Directory of Open Access Journals (Sweden)

    Sergio Humberto Romo Picazo

    2000-01-01

    The analysis and problem-solving methods traditionally used by the industrial engineer can take on a different focus by using the concept of benchmarking. Benchmarking represents a very important tool for industrial engineers since, when well applied, it leads to the improvement of processes. However, every industrial engineer should know the limitations and risks implied by the decision to carry out a benchmarking project; some of them are pointed out here.

  15. Simulating diffusion processes in discontinuous media: Benchmark tests

    Science.gov (United States)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them on four different algorithms. The results of the numerical tests give a valuable insight into the fine behavior of these schemes, as well as rules to choose between them.
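
    The kind of bias these benchmark tests are built to expose is easy to reproduce: a naive Euler walk that simply evaluates a discontinuous diffusivity at the current position misallocates mass across the interface. The sketch below is an illustration with invented parameters, not one of the paper's four algorithms; the reference repartition assumes the divergence-form interface condition (continuity of concentration and flux), for which the long-time mass on each side scales with the square root of the local diffusivity.

```python
# Naive Euler walk across a diffusivity discontinuity at x = 0 (illustrative).
import numpy as np

rng = np.random.default_rng(1)
D_left, D_right = 0.1, 1.0
dt, steps, walkers = 1e-4, 2000, 50_000

x = np.zeros(walkers)                      # start all walkers at the interface
for _ in range(steps):
    D = np.where(x < 0, D_left, D_right)   # coefficient at current position
    x += np.sqrt(2 * D * dt) * rng.standard_normal(walkers)

print("mass on left (naive scheme):", (x < 0).mean())
print("reference (divergence form):",
      np.sqrt(D_left) / (np.sqrt(D_left) + np.sqrt(D_right)))
```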

  16. The OpenupEd quality label: benchmarks for MOOCs

    OpenAIRE

    Rosewell, Jonathan; Jansen, Darco

    2014-01-01

    In this paper we report on the development of the OpenupEd Quality Label, a self-assessment and review quality assurance process for the new European OpenupEd portal (www.openuped.eu) for MOOCs (massive open online courses). This process is focused on benchmark statements that seek to capture good practice, both at the level of the institution and at the level of individual courses. The benchmark statements for MOOCs are derived from benchmarks produced by the E-xcellence e-learning quality p...

  17. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  18. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
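
    The direction of the bias is easy to reproduce by simulation: classical measurement error attenuates the estimated slope, and for a linear dose-response the benchmark dose is inversely proportional to the slope, so the estimated BMD moves upward, i.e. toward less protective levels. The sketch below is a toy simulation with invented parameters, not the study's data or adjustment method.

```python
# Attenuation of a linear dose-response slope, and its effect on the BMD
# (illustrative simulation).
import numpy as np

rng = np.random.default_rng(2)
n, beta = 10_000, -1.0                      # true slope of the dose-response
x = rng.uniform(0, 1, n)                    # true exposure
y = beta * x + rng.normal(0, 0.5, n)        # response
x_obs = x + rng.normal(0, 0.5, n)           # exposure measured with error

slope = lambda a: np.polyfit(a, y, 1)[0]    # OLS slope of y on a
b_true, b_obs = slope(x), slope(x_obs)

delta = 0.1                                 # benchmark response level
print("slope:", b_true, "->", b_obs)        # attenuated toward zero
print("BMD  :", delta / abs(b_true), "->", delta / abs(b_obs))  # biased upward
```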

  19. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state-estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are the high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine...... of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.

  20. Implementation of benchmark management in quality assurance audit activities

    International Nuclear Information System (INIS)

    The concept of Benchmark Management is that the practices of the best competitor are taken as the benchmark; the distance between that competitor and one's own institute is analyzed and studied, and effective actions are taken to catch up with and even surpass the competitor. This paper analyzes and rebuilds the whole quality assurance audit process using the concept of Benchmark Management, based on many years of practice in quality assurance audits, in order to improve the level and effectiveness of quality assurance audit activities. (author)

  1. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade loads and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  2. Imaging with parallel ray-rotation sheets

    CERN Document Server

    Hamilton, Alasdair C

    2008-01-01

    A ray-rotation sheet consists of miniaturized optical components that function - ray optically - as a homogeneous medium that rotates the local direction of transmitted light rays around the sheet normal by an arbitrary angle [A. C. Hamilton et al., arXiv:0809.2646 (2008)]. Here we show that two or more parallel ray-rotation sheets perform imaging between two planes. The image is unscaled and un-rotated. No other planes are imaged. When seen through parallel ray-rotation sheets, planes that are not imaged appear rotated, whereby the rotation angle changes with the ratio between the observer's and the object plane's distance from the sheets.

  3. Ohm's law for a current sheet

    Science.gov (United States)

    Lyons, L. R.; Speiser, T. W.

    1985-01-01

    The paper derives an Ohm's law for single-particle motion in a current sheet, where the magnetic field reverses in direction across the sheet. The result is considerably different from the resistive Ohm's law often used in MHD studies of the geomagnetic tail. Single-particle analysis is extended to obtain a self-consistency relation for a current sheet which agrees with previous results. The results are applicable to the concept of reconnection in that the electric field parallel to the current is obtained for a one-dimensional current sheet with constant normal magnetic field. Dissipated energy goes directly into accelerating particles within the current sheet.
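
    A hedged sketch of the single-particle picture underlying such an analysis: dimensionless motion in a reversing field with a small normal component and a dawn-dusk electric field. The field model and all parameter values are illustrative, not those of the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Dimensionless single-particle motion in a field reversal:
        # B = (tanh(z/L), 0, Bn), E = (0, Ey, 0), with charge = mass = 1.
        L, Bn, Ey = 1.0, 0.1, 0.1

        def rhs(t, s):
            x, y, z, vx, vy, vz = s
            B = np.array([np.tanh(z / L), 0.0, Bn])
            E = np.array([0.0, Ey, 0.0])
            a = E + np.cross([vx, vy, vz], B)   # Lorentz force
            return [vx, vy, vz, *a]

        s0 = [0.0, 0.0, 2.0, 0.0, 0.0, -0.5]    # launched toward the sheet
        sol = solve_ivp(rhs, (0.0, 200.0), s0, max_step=0.1)

        v0 = np.linalg.norm(sol.y[3:, 0])
        v1 = np.linalg.norm(sol.y[3:, -1])
        print(f"speed: {v0:.2f} -> {v1:.2f}")   # energy gained from Ey in the sheet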

  4. Project ATTACK and Project VISTA: Benchmark studies on the road to NATO's early TNF policy

    International Nuclear Information System (INIS)

    This paper is concerned with those studies and analyses that affected early NATO nuclear policy and force structure. The discussion focuses specifically on two "benchmark" activities: Project VISTA and Project ATTACK. These two studies were chosen less because one can document their direct impact on NATO nuclear policy and more because they capture the state of thinking about tactical nuclear weapons at a particular point in time. Project VISTA offers an especially important benchmark in this respect. Project ATTACK is a rather different kind of benchmark. It is not a pathbreaking study. It is much narrower and more technical than VISTA. It appears to have received no public attention. Project ATTACK is interesting because it seems to capture a "nuts-and-bolts" feel for how U.S. (and thereby NATO) theater nuclear policy was evolving prior to MC 48. The background and context for Project VISTA and Project ATTACK are presented and discussed

  5. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HB2006 Workshop on computer codes benchmarking.

  6. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Gas-cooled Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.
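
    Since the benchmark concerns propagating input uncertainties, a generic forward Monte Carlo sketch may help fix ideas; the "code" here is a toy function standing in for a real reactor simulation, and all values are invented:

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_keff(sigma_f, sigma_c):
            """Toy stand-in for a reactor physics code: k depends on a
            fission and a capture cross-section (purely illustrative)."""
            return 2.4 * sigma_f / (sigma_f + sigma_c)

        # Nominal values and a relative 1-sigma uncertainty (hypothetical)
        nom_f, nom_c, rel_u = 0.05, 0.06, 0.03

        samples = [
            simulate_keff(nom_f * (1 + rng.normal(0, rel_u)),
                          nom_c * (1 + rng.normal(0, rel_u)))
            for _ in range(5000)
        ]
        print(f"k mean = {np.mean(samples):.4f}, k std = {np.std(samples):.4f}")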

  7. OECD/NEA main steam line break PWR benchmark simulation by TRACE/S3K coupled code

    International Nuclear Information System (INIS)

    A coupling between the TRACE system thermal-hydraulics code and the SIMULATE-3K (S3K) three-dimensional reactor kinetics code has been developed in a collaboration between the Paul Scherrer Institut (PSI) and Studsvik. In order to verify the coupling scheme and the coupled code capabilities with regard to plant transients, the OECD/NEA Main Steam Line Break PWR benchmark was simulated with the coupled TRACE/S3K code. The core/plant system data were taken from the benchmark specifications, while the nuclear data were generated with Studsvik's lattice code CASMO-4 and the core analysis code SIMULATE-3. The TRACE/S3K results were compared with the published results obtained by the 17 participants of the benchmark. The comparison shows that the TRACE/S3K code satisfactorily reproduces the main transient parameters, namely, the power and reactivity history, the steam generator inventory, and the pressure response. (author)

  8. Benchmarking the Calculation of Stochastic Heating and Emissivity of Dust Grains in the Context of Radiative Transfer Simulations

    CERN Document Server

    Camps, Peter; Bianchi, Simone; Lunttila, Tuomas; Pinte, Christophe; Natale, Giovanni; Juvela, Mika; Fischera, Joerg; Fitzgerald, Michael P; Gordon, Karl; Baes, Maarten; Steinacker, Juergen

    2015-01-01

    We define an appropriate problem for benchmarking dust emissivity calculations in the context of radiative transfer (RT) simulations, specifically including the emission from stochastically heated dust grains. Our aim is to provide a self-contained guide for implementors of such functionality, and to offer insight into the effects of the various approximations and heuristics implemented by the participating codes to accelerate the calculations. The benchmark problem definition includes the optical and calorimetric material properties, and the grain size distributions, for a typical astronomical dust mixture with silicate, graphite and PAH components; a series of analytically defined radiation fields to which the dust population is to be exposed; and instructions for the desired output. We process this problem using six RT codes participating in this benchmark effort, and compare the results to a reference solution computed with the publicly available dust emission code DustEM. The participating codes implement...
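
    For orientation, a sketch of the equilibrium part of such a calculation (stochastic heating instead requires the full temperature probability distribution): absorbed power is balanced against a modified-blackbody emission law with opacity index beta. The power-law form and constants are simplifications of our own, not the benchmark's prescription:

        from scipy.optimize import brentq

        beta = 2.0   # opacity index (assumed)

        def emitted(T, c0=1.0):
            # Integrated modified-blackbody emission scales as T**(4 + beta).
            return c0 * T**(4.0 + beta)

        def t_equilibrium(absorbed_power):
            # Root of emitted(T) = absorbed power, found by bracketing.
            return brentq(lambda T: emitted(T) - absorbed_power, 0.1, 2000.0)

        # Scaling the radiation field by U raises T_eq by U**(1/(4 + beta)).
        for U in (1.0, 10.0, 100.0):
            print(U, t_equilibrium(U) / t_equilibrium(1.0))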

  9. Electromagnetic instability in a magnetic neutral sheet

    International Nuclear Information System (INIS)

    The structure of electromagnetic perturbations in a magnetic neutral sheet was analyzed within the framework of a Vlasov-Maxwell picture. In the geomagnetic tail, there exists a stable magnetic neutral sheet about 10 R-E (Earth radii) thick. However, it has been reported that the thickness of this sheet is reduced to about 1 R-E just before the onset of substorms, which indicates the critical thickness for sheet destruction. In the laboratory experiment, a stable neutral sheet is produced by induction, and its thickness also becomes as thin as about the ion Larmor radius, defined far outside the sheet region, before the sudden destruction of the sheet current. Tearing instability cannot explain this abrupt destruction, so it can be considered that the tearing instability is stabilized by some mechanism. Consequently, it becomes important to look for another perturbation that is unstable in a thin sheet while stable in a thick sheet. Here, a magnetically compressional mode propagating in the direction of the unperturbed current that produces the reversed field is considered. The equilibrium configuration and particle orbits, the perturbations, and the resulting eigenfunctions and dispersion relations are described to show that this mode becomes unstable in a thin sheet, and that the critical thickness is about the ion Larmor radius just outside the sheet region. When the typical field intensity in the geomagnetic tail is taken, the time scale of this instability becomes about 800 sec. This value coincides with the observed flare time scale. (Wakatsuki, Y.)

  10. Ice sheet hydrology from observations

    Energy Technology Data Exchange (ETDEWEB)

    Jansson, Peter (Dept. of Physical Geography and Quaternary Geology, Stockholm Univ., Stockholm (Sweden))

    2010-11-15

    The hydrological systems of ice sheets are complex. Our view of the system is split, largely due to the difficulty of observing it. Our basic knowledge of the processes has been obtained from smaller glaciers and, although applicable in general to the larger scales of the ice sheets, ice sheets contain features not observable on smaller glaciers because of their size. The generation of water on the ice sheet surface is well understood and can be satisfactorily modeled. The routing of water from the surface down through the ice is not complicated in terms of process. What has been problematic is the way in which the couplings between surface and bed are accomplished through a kilometer of cold ice, but with the studies on crack propagation and lake drainage on Greenland we are beginning to understand this process as well, and we know water can be routed through thick cold ice. Water generation at the bed is also well understood, but the main problem preventing realistic estimates of water generation is the lack of detailed information about geothermal heat fluxes and their geographical distribution beneath the ice. Although some average value for the geothermal heat flux may suffice, for many purposes it is important that such values are not applied to sub-regions of significantly higher fluxes. Water generated by geothermal heat constitutes a constant supply and will likely maintain a steady system beneath the ice sheet. Such a system may include subglacial lakes as steady features, and reconfiguration of the system is tied to the time scales on which the ice sheet geometry changes so as to change pressure gradients in the basal system itself. Large-scale re-organization of subglacial drainage systems has been observed beneath ice streams. The stability of an entirely subglacially fed drainage system may hence be perturbed by rapid ice flow. In the case of the Antarctic ice streams where such behavior has been observed, the ice streams are underlain by deformable sediments. It is
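
    The geothermal part of the water budget reduces to simple arithmetic: if the full geothermal flux G goes into melting temperate basal ice, the melt rate is G / (rho_ice * L_fusion). A sketch with typical (assumed) values:

        # Basal melt rate supplied by geothermal heat alone (illustrative).
        RHO_ICE = 917.0          # kg m^-3
        L_FUSION = 3.34e5        # J kg^-1, latent heat of fusion
        SECONDS_PER_YEAR = 3.156e7

        def basal_melt_rate(G):
            """Melt rate in m of ice per year for geothermal flux G in W m^-2."""
            return G / (RHO_ICE * L_FUSION) * SECONDS_PER_YEAR

        for G_mW in (40, 60, 120):   # low, typical, elevated flux in mW m^-2
            rate_mm = basal_melt_rate(G_mW / 1000.0) * 1000.0
            print(f"G = {G_mW:4d} mW/m^2  ->  {rate_mm:.1f} mm/yr")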

  11. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    Science.gov (United States)

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as a system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single score indicator were used for the environmental assessment, while LCC was used for value assessment. Results demonstrated that substantial differences can be observed between different WWTPs depending on a wide range of factors such as plant configuration, plant size or even legal discharge limits. Based on these results the benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for the WWTPs rating is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the eco-efficiency new framework. PMID:27235897
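
    The eco-efficiency logic can be sketched in a few lines: a system-value indicator divided by an aggregated environmental impact, then ranked. The plant data below are invented and the indicator choice is ours, not the study's:

        # Toy eco-efficiency ranking in the spirit of ISO 14045: system value
        # (here, volume treated per unit life-cycle cost) divided by an
        # aggregated environmental impact score.
        plants = {
            "WWTP-A": {"impact": 42.0, "lcc_eur_per_m3": 0.30},
            "WWTP-B": {"impact": 55.0, "lcc_eur_per_m3": 0.22},
            "WWTP-C": {"impact": 31.0, "lcc_eur_per_m3": 0.41},
        }

        def eco_efficiency(p):
            value = 1.0 / p["lcc_eur_per_m3"]   # m^3 treated per euro
            return value / p["impact"]

        for name, p in sorted(plants.items(), key=lambda kv: -eco_efficiency(kv[1])):
            print(f"{name}: eco-efficiency = {eco_efficiency(p):.3f}")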

  12. An experimental phylogeny to benchmark ancestral sequence reconstruction.

    Science.gov (United States)

    Randall, Ryan N; Radford, Caelan E; Roof, Kelsey A; Natarajan, Divya K; Gaucher, Eric A

    2016-01-01

    Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as 'modern' sequences that we subject to ASR analyses using various algorithms and to benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had minor effect on the inference of ancestral sequences. PMID:27628687
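
    The genotypic part of such a benchmark reduces to comparing inferred and known ancestral sequences site by site; a toy sketch with invented sequences (as the abstract notes, high genotypic accuracy can still hide large phenotypic differences):

        # Per-site accuracy of an inferred ancestral sequence against the
        # known ancestor; both sequences are invented for illustration.
        known    = "MASKGEELFTGVVPILVELDGDVNGHKFSVSG"
        inferred = "MASKGEELFTCVVPILVELDGDVNGHKFSVSG"

        matches = sum(a == b for a, b in zip(known, inferred))
        print(f"genotypic accuracy: {matches}/{len(known)} "
              f"= {matches / len(known):.3f}")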

  13. Community-based benchmarking of the CMIP DECK experiments

    Science.gov (United States)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select those codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.

  14. Structural Benchmark Testing for Stirling Convertor Heater Heads

    Science.gov (United States)

    Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.

    2007-01-01

    The National Aeronautics and Space Administration (NASA) has identified high-efficiency Stirling technology for potential use on long-duration space science missions such as Mars rovers, deep space missions, and lunar applications. For the long lifetimes required, a structurally significant design limit for the Stirling convertor heater head is creep deformation, induced even at relatively low stress levels by high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their lifetimes.
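
    A sketch of the kind of creep model such tests calibrate, using a Norton power law; the constants below are placeholders, not measured IN-718 or MarM-247 values:

        import numpy as np

        # Norton power-law creep: strain_rate = A * sigma**n * exp(-Q / (R*T)).
        A, n, Q, R = 1.0e-5, 5.0, 3.0e5, 8.314   # placeholder constants

        def creep_rate(sigma_mpa, T_kelvin):
            return A * sigma_mpa**n * np.exp(-Q / (R * T_kelvin))

        def hours_to_strain(eps, sigma_mpa, T_kelvin):
            return eps / creep_rate(sigma_mpa, T_kelvin) / 3600.0

        # Testing at 7x the design stress accelerates creep by 7**n, which is
        # the rationale behind accelerated benchmark tests.
        print(creep_rate(210.0, 1000.0) / creep_rate(30.0, 1000.0))   # 7**5
        print(f"{hours_to_strain(0.01, 30.0, 1000.0):.2e} h to 1% strain at design stress")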

  15. A comparison and benchmark of two electron cloud packages

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, Paul L.G.; Amundson, James F; Spentzouris, Panagiotis G; Veitzer, Seth A

    2012-01-01

    We present results from precision simulations of the electron cloud (EC) problem in the Fermilab Main Injector using two distinct codes. These two codes are (i) POSINST, an F90 2D+ code, and (ii) VORPAL, a 2D/3D electrostatic and electromagnetic code used for self-consistent simulations of plasma and particle beam problems. A specific benchmark has been designed to demonstrate the strengths of both codes that are relevant to the EC problem in the Main Injector. As the differences between results obtained from these two codes were bigger than the anticipated model uncertainties, a set of changes to the POSINST code was implemented. These changes are documented in this note. This new version of POSINST now gives EC densities that agree with those predicted by VORPAL, within approximately 20%, in the beam region. The remaining differences are most likely due to differences in the electrostatic Poisson solvers. From a software engineering perspective, these two codes are very different. We comment on the pros and cons of both approaches. The design(s) for a new EC package are briefly discussed.
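
    As an illustration of where such solvers can differ, a minimal 2-D periodic electrostatic Poisson solve via FFTs; this is one of several possible discretizations, and the grid size and charge are arbitrary:

        import numpy as np

        # Solve laplacian(phi) = -rho on a periodic n x n grid of size L.
        n, L = 64, 1.0
        rho = np.zeros((n, n))
        rho[n // 2, n // 2] = 1.0 / (L / n)**2   # point-like charge

        kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        k2 = kx[:, None]**2 + kx[None, :]**2
        k2[0, 0] = 1.0                            # avoid division by zero

        phi_hat = np.fft.fft2(rho) / k2           # phi_hat = rho_hat / k^2
        phi_hat[0, 0] = 0.0                       # fix the mean potential
        phi = np.real(np.fft.ifft2(phi_hat))
        print(phi.max())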

  16. Technology to Market Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-02-01

    This fact sheet is an overview of the Technology to Market subprogram at the U.S. Department of Energy SunShot Initiative. The SunShot Initiative’s Technology to Market subprogram builds on SunShot’s record of moving groundbreaking and early-stage technologies and business models through developmental phases to commercialization. Technology to Market targets two known funding gaps: those that occur at the prototype commercialization stage and those at the commercial scale-up stage.

  17. Ice Sheets and the Anthropocene

    OpenAIRE

    Wolff, Eric W.

    2014-01-01

    Ice could play a role in identifying and defining the Anthropocene. The recurrence of northern hemisphere glaciation and the stability of the Greenland Ice Sheet are both potentially vulnerable to human impact on the environment. However, only a very long hiatus in either would be unusual in the context of the Quaternary Period, requiring the definition of a geological boundary. Human influence can clearly be discerned in several ice-core measurements. These include a sharp boundary in radioa...

  18. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy

    Directory of Open Access Journals (Sweden)

    Quentin Wheeler

    2012-07-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an "r" strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a "K" strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the "K" strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the

  19. Nomenclatural benchmarking: the roles of digital typification and telemicroscopy.

    Science.gov (United States)

    Wheeler, Quentin; Bourgoin, Thierry; Coddington, Jonathan; Gostony, Timothy; Hamilton, Andrew; Larimer, Roy; Polaszek, Andrew; Schauff, Michael; Solis, M Alma

    2012-01-01

    Nomenclatural benchmarking is the periodic realignment of species names with species theories and is necessary for the accurate and uniform use of Linnaean binominals in the face of changing species limits. Gaining access to types, often for little more than a cursory examination by an expert, is a major bottleneck in the advance and availability of biodiversity informatics. For the nearly two million described species it has been estimated that five to six million name-bearing type specimens exist, including those for synonymized binominals. Recognizing that examination of types in person will remain necessary in special cases, we propose a four-part strategy for opening access to types that relies heavily on digitization and that would eliminate much of the bottleneck: (1) modify codes of nomenclature to create registries of nomenclatural acts, such as the proposed ZooBank, that include a requirement for digital representations (e-types) for all newly described species to avoid adding to backlog; (2) an "r" strategy that would engineer and deploy a network of automated instruments capable of rapidly creating 3-D images of type specimens not requiring participation of taxon experts; (3) a "K" strategy using remotely operable microscopes to engage taxon experts in targeting and annotating informative characters of types to supplement and extend information content of rapidly acquired e-types, a process that can be done on an as-needed basis as in the normal course of revisionary taxonomy; and (4) creation of a global e-type archive associated with the commissions on nomenclature and species registries providing one-stop-shopping for e-types. We describe a first generation implementation of the "K" strategy that adapts current technology to create a network of Remotely Operable Benchmarkers Of Types (ROBOT) specifically engineered to handle the largest backlog of types, pinned insect specimens. The three initial instruments will be in the Smithsonian Institution