WorldWideScience

Sample records for preliminary benchmarking comparisons

  1. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also benchmarked against the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
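
    The acceptance logic behind such a comparison ("within acceptable limits") can be sketched in a few lines. This is a hedged illustration only: the dose values and the 25% tolerance below are invented placeholders, not RISKIND, RADTRAN 4, or NUREG-0170 output or criteria.

      # Hedged sketch: tolerance check of one code's predictions vs. a reference.
      def within_acceptable_limits(predicted, reference, rel_tol=0.25):
          """True if every prediction matches its reference value to rel_tol."""
          return all(abs(p - r) <= rel_tol * abs(r)
                     for p, r in zip(predicted, reference))

      riskind_doses = [1.2e-2, 3.4e-3, 5.6e-4]   # hypothetical dose estimates (Sv)
      radtran_doses = [1.1e-2, 3.0e-3, 6.0e-4]   # hypothetical reference values (Sv)
      print(within_acceptable_limits(riskind_doses, radtran_doses))   # True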

  2. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  3. Actinides transmutation - a comparison of results for PWR benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Luiz H. [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)], e-mail: luizhenu@ieav.cta.br

    2009-07-01

    The physical aspects involved in the Partitioning and Transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest to the nuclear industry. In addition, the reduction in the storage of radioactive waste is tied to the public acceptability of nuclear electric power. Among the several partitioning and transmutation concepts suggested in the literature, one involves PWR reactors burning fuel that contains plutonium and minor actinides reprocessed from the UO{sub 2} used in previous stages. This work presents the results of a P and T benchmark calculation carried out with the WIMSD5B program, using its new cross-section library generated from ENDF/B-VII, and compares them with results published in the literature from other calculations. The comparison used the transmutation benchmark concept based on a typical PWR cell, and the quantities analyzed were k{infinity} and the atomic densities of the isotopes Np-239, Pu-241, Pu-242 and Am-242m as functions of burnup, considering a discharge burnup of 50 GWd/tHM. (author)

  4. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.

  6. A benchmark for comparison of dental radiography analysis algorithms.

    Science.gov (United States)

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).

  7. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Podgorney, Robert; Kelkar, Sharad M.; McClure, Mark W.; Danko, George; Ghassemi, Ahmad; Fu, Pengcheng; Bahrami, Davood; Barbier, Charlotte; Cheng, Qinglu; Chiu, Kit-Kwan; Detournay, Christine; Elsworth, Derek; Fang, Yi; Furtney, Jason K.; Gan, Quan; Gao, Qian; Guo, Bin; Hao, Yue; Horne, Roland N.; Huang, Kai; Im, Kyungjae; Norbeck, Jack; Rutqvist, Jonny; Safari, M. R.; Sesetty, Varahanaresh; Sonnenthal, Eric; Tao, Qingfeng; White, Signe K.; Wong, Yang; Xia, Yidong

    2016-12-02

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas others were designed for different applications but can simulate processes similar to those in EGS; solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995.

  8. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be done either via cloud-based file systems or via cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
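
    The data reorganization described above can be illustrated with a minimal, hedged sketch (not NASA's actual pipeline): data stored one time step per file is stacked and transposed so that each grid cell's full time series becomes contiguous in memory, which is the layout that long-time-series analyses want.

      # Hedged illustration only; array sizes and the trend analysis are
      # invented placeholders, not an actual Earth Observation dataset.
      import numpy as np

      n_time, n_lat, n_lon = 365, 180, 360
      # Simulate 365 daily granules, each holding one (lat, lon) grid.
      daily_files = [np.random.rand(n_lat, n_lon) for _ in range(n_time)]

      # Stack along time, then move time to the innermost (contiguous) axis:
      # shape (lat, lon, time) lets one grid cell's history be a single slice.
      cube = np.transpose(np.stack(daily_files, axis=0), (1, 2, 0)).copy()

      pixel_series = cube[90, 180, :]   # full history of one grid cell
      slope = np.polyfit(np.arange(n_time), pixel_series, deg=1)[0]
      print(f"illustrative per-pixel trend: {slope:.2e} per day")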

  10. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results, suggesting that numerical...

  11. Aagesta-BR3 Decommissioning Cost. Comparison and Benchmarking Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Varley, Geoff [NAC International, Henley on Thames (United Kingdom)]

    2002-11-01

    The BR3 work packages described in this report add up to some 83,000 labour hours plus about MSEK 13 of investment and consumables costs. At Swedish average team labour rates, 83,000 hours would equate to about MSEK 52; adding the investment cost of MSEK 13 gives a total of about MSEK 65. This is of course quite close to the Aagesta figure, but it would be wrong to draw immediate, firm conclusions from these data. Such a comparison should take into account, inter alia: the number and relative sizes of the equipment decontaminated and dismantled at Aagesta and BR3, and the assumed productivity in the Aagesta estimate compared to the actual BR3 figures. The physical scale of the Aagesta reactor is somewhat larger than the BR3 reactor, so all other things being equal, one might expect the Aagesta decommissioning cost estimate to be higher than for BR3. Aagesta has better access overall, which should help to constrain costs. The productivity ratio for workers at BR3 was generally high, 80 per cent or more on average, so it is unlikely to be exceeded at Aagesta and might not be equalled, which would tend to push the Aagesta cost up relative to the BR3 situation. There is an additional question of possible extra work performed at BR3 due to the R and D nature of the project. The BR3 data analysed has tried to strip away any such 'extra' work, but nevertheless there may be some residual effect on the final numbers. Analysis and comparison of individual work packages has led to several conclusions: the constructed cost for Aagesta using BR3 benchmark data is encouragingly close to the Aagesta estimate value, but it is not clear that the way of deriving the Aagesta estimate for decontamination was entirely rigorous, so the reliability of the Aagesta estimate might reasonably be questioned on these grounds. A significant discrepancy between the BR3 and Aagesta cases appears to exist in respect of the volumes of waste
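
    As a quick, hedged check of the arithmetic quoted above (the average labour rate is implied by the report's own figures, not stated here independently):

      \[
      \text{implied rate} \approx \frac{52 \times 10^{6}\ \text{SEK}}{83\,000\ \text{h}}
      \approx 630\ \text{SEK/h},
      \qquad
      52\ \text{MSEK} + 13\ \text{MSEK} = 65\ \text{MSEK}.
      \]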

  12. Clinical Engineering Benchmarking Comparison Between Zhejiang Province and American Hospitals

    Institute of Scientific and Technical Information of China (English)

    Binseng Wang; Kun Zheng; Jing-yi Feng

    2016-01-01

    Clinical engineering (CE) has evolved rapidly over the last 25 years in China. Among the 34 provincial-level administrative units within China, Zhejiang Province is one of the most advanced in terms of healthcare technology maintenance and management. In order to determine Zhejiang's current stage of development and opportunities for further improvement, the performance of its CE departments was compared against hospitals in the USA. Data were collected from 21 Zhejiang hospitals and compared to those from 270 acute-care hospitals in the USA collected by Truven Health Analytics. The benchmarking comparison was made in three categories: operational, financial, and productivity. Within the operational category, the following metrics were compared: equipment inventory size/operating beds, annual repairs/inventory size, and annual scheduled maintenance/inventory size. Within the financial category: total CE expense/operating beds and total CE expense/total hospital expense. Within the productivity category: total CE full-time equivalent (FTE)/inventory size and total CE FTE/total hospital expense. These comparisons showed that: (1) while the equipment inventory in Zhejiang tends to be much smaller than in the USA for hospitals with a comparable number of operating beds, the numbers of repairs and scheduled maintenance per inventory size are similar; (2) the total CE expense/total hospital expense ratio is around 1% in both Zhejiang and the USA; however, the total CE expense/operating beds and total CE expense/cost of equipment inventory are significantly lower in Zhejiang than in the USA; (3) the FTE count in Zhejiang is significantly higher than in the USA relative to both inventory size and total hospital operating expense, but significantly lower relative to the number of operating beds. The fact that repairs and scheduled maintenance are similar in Zhejiang and the USA shows that CE leaders are managing equipment in
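
    The ratio metrics listed above are simple to compute directly; the sketch below is a hedged illustration with invented placeholder numbers, not the Zhejiang or Truven survey data.

      # Hedged sketch of the CE benchmarking ratios described above.
      def ce_benchmark_metrics(inventory, beds, repairs, pm_events,
                               ce_expense, hospital_expense, fte):
          return {
              "inventory_per_bed":        inventory / beds,
              "repairs_per_item":         repairs / inventory,
              "pm_per_item":              pm_events / inventory,
              "ce_expense_per_bed":       ce_expense / beds,
              "ce_share_of_hosp_expense": ce_expense / hospital_expense,
              "fte_per_1000_items":       1000 * fte / inventory,
          }

      # Placeholder inputs for one hypothetical hospital:
      print(ce_benchmark_metrics(inventory=4000, beds=800, repairs=3200,
                                 pm_events=5000, ce_expense=2.0e6,
                                 hospital_expense=2.0e8, fte=12))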

  13. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period of time (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
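
    For reference, the unconstrained minimum-variance portfolio of Modern Portfolio Theory has the closed form w = (Σ⁻¹1)/(1ᵀΣ⁻¹1). The sketch below applies it to an invented 3-asset covariance matrix, not the BM&FBOVESPA data used in the study.

      # Hedged sketch: closed-form minimum-variance weights.
      import numpy as np

      sigma = np.array([[0.040, 0.006, 0.004],
                        [0.006, 0.090, 0.010],
                        [0.004, 0.010, 0.060]])   # placeholder covariances
      ones = np.ones(len(sigma))
      inv_sigma_1 = np.linalg.solve(sigma, ones)  # Sigma^{-1} 1, no explicit inverse
      weights = inv_sigma_1 / (ones @ inv_sigma_1)
      print(weights, weights.sum())               # weights sum to 1.0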

  14. Preliminary uncertainty analysis of OECD/UAM benchmark for the TMI-1 reactor

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Fabiano S.; Faria, Rochkhudson B.; Silva, Lucas M.C.; Pereira, Claubia; Fortini, Angela, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    Nowadays, the demand from nuclear research centers for safety and regulatory analyses, and for best-estimate predictions provided with confidence bounds, has been increasing. Studies have pointed out that present uncertainties in nuclear data should be significantly reduced to get the full benefit from advanced modeling and simulation initiatives. A major outcome of the NEA/OECD (UAM) workshop held in Italy in 2006 was the preparation of a benchmark work program with steps (exercises) needed to define the uncertainty and modeling tasks. In that direction, this work was performed within the framework of UAM Exercise 1 (I-1), 'Cell Physics', to validate the study and to estimate the accuracy of the model. The objectives of this study were to make a preliminary analysis of the criticality values of the TMI-1 PWR and of the bias between the multiplication factors obtained from two different nuclear codes. The range of the bias was obtained using two deterministic codes: NEWT (New ESC-based Weighting Transport code), the two-dimensional transport module that uses AMPX-formatted cross-sections processed by other SCALE modules; and the WIMSD5 (Winfrith Improved Multi-Group Scheme) code. The WIMSD5 system consists of a simplified geometric representation of heterogeneous space zones that are coupled with each other and with the boundaries, while the properties of each spacing element are obtained from the Carlson DSN method or the Collision Probability method. (author)

  15. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients, all negative reactivity insertions, typically returning to the critical state after some time.

  16. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the company's performance is in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  17. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
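
    The perturbation-based uncertainty evaluation described above can be sketched generically: shift each uncertain input by its one-sigma uncertainty, re-evaluate the eigenvalue, and combine the shifts in quadrature. The surrogate model and numbers below are invented placeholders, not an actual HTTR neutronics calculation.

      # Hedged sketch of a one-at-a-time eigenvalue perturbation study.
      import math

      def k_eff(params):
          # Placeholder surrogate: k falls with graphite boron impurity (ppm).
          return 1.02 - 0.003 * params["boron_ppm"] + 0.001 * params["enrichment"]

      nominal = {"boron_ppm": 1.0, "enrichment": 6.0}   # hypothetical inputs
      sigmas  = {"boron_ppm": 0.5, "enrichment": 0.1}   # hypothetical 1-sigma values

      shifts = []
      for name, sigma in sigmas.items():
          bumped = dict(nominal, **{name: nominal[name] + sigma})
          shifts.append(k_eff(bumped) - k_eff(nominal))

      # Quadrature sum of the individual eigenvalue shifts:
      print("1-sigma uncertainty on k:", math.sqrt(sum(d * d for d in shifts)))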

  18. Protein sequence comparison and fold recognition: progress and good-practice benchmarking.

    Science.gov (United States)

    Söding, Johannes; Remmert, Michael

    2011-06-01

    Protein sequence comparison methods have grown increasingly sensitive during the last decade and can often identify distantly related proteins sharing a common ancestor some 3 billion years ago. Although cellular function is not conserved so long, molecular functions and structures of protein domains often are. In combination with a domain-centered approach to function and structure prediction, modern remote homology detection methods have a great and largely underexploited potential for elucidating protein functions and evolution. Advances during the last few years include nonlinear scoring functions combining various sequence features, the use of sequence context information, and powerful new software packages. Since progress depends on realistically assessing new and existing methods and published benchmarks are often hard to compare, we propose 10 rules of good-practice benchmarking.

  19. Benchmark Comparison for a Multi-Processing Ion Mobility Calculator in the Free Molecular Regime

    Science.gov (United States)

    Shrivastav, Vaibhav; Nahin, Minal; Hogan, Christopher J.; Larriba-Andaluz, Carlos

    2017-08-01

    A benchmark comparison between two ion mobility and collision cross-section (CCS) calculators, MOBCAL and IMoS, is presented here as a standard to test the efficiency and performance of both programs. For 47 organic ions, results are in excellent agreement between IMoS and MOBCAL in He and N2 when both programs use identical input parameters. Due to a more efficiently written algorithm and to its parallelization, IMoS is able to calculate the same CCS (within 1%) around two orders of magnitude faster than its MOBCAL counterpart when seven cores are used. Due to the high computational cost of MOBCAL in N2, reaching tens of thousands of seconds even for small ions, the comparison between IMoS and MOBCAL is stopped at 70 atoms. Large biomolecules (>10000 atoms) remain computationally expensive when IMoS is used in N2 (even when employing 16 cores). Approximations such as diffuse trajectory methods (DHSS, TDHSS), with and without partial charges, and projected area approximation corrections can be used to reduce the total computational time severalfold without hurting the accuracy of the solution. These latter methods can in principle be used with coarse-grained model structures and should yield acceptable CCS results.
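
    The speedup bookkeeping behind the "two orders of magnitude" figure above is straightforward; the timings in this hedged sketch are invented placeholders, not measured MOBCAL or IMoS wall times.

      # Hedged sketch: wall-time speedup of one code over another.
      t_mobcal = 3.6e4      # hypothetical MOBCAL wall time (s) for one ion in N2
      t_imos   = 3.0e2      # hypothetical IMoS wall time (s) on seven cores

      speedup = t_mobcal / t_imos
      print(f"speedup ~{speedup:.0f}x (about two orders of magnitude)")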

  20. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    ... and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark...

  1. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    High performance is a critical requirement for all microprocessor manufacturers. The present paper describes a comparison of performance between two main Intel Xeon series of processors (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is based on the family of Intel processors that began with the Pentium 4. These processors can provide a performance boost for many key application areas in the modern generation. The scaling of performance across the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern-generation processors.
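
    SPEC-style summary scores are geometric means of per-benchmark ratios, which keeps any single test from dominating the aggregate. A hedged sketch with invented placeholder ratios (not measured SPECint2006 data):

      # Hedged sketch: aggregate SPEC ratios with a geometric mean.
      import math

      def geometric_mean(ratios):
          return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

      type_a_ratios = [21.3, 18.7, 25.1, 19.8, 22.4]   # hypothetical per-test ratios
      type_b_ratios = [14.2, 13.8, 16.0, 12.9, 15.1]
      print("relative scaling:",
            geometric_mean(type_a_ratios) / geometric_mean(type_b_ratios))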

  2. Benchmarking by cross-institutional comparison of student achievement in a progress test

    NARCIS (Netherlands)

    Muijtjens, Arno M. M.; Schuwirth, Lambert W. T.; Cohen-Schotanus, Janke; Thoben, Arnold J. N. M.; van der Vleuten, Cees P. M.

    2008-01-01

    OBJECTIVE To determine the effectiveness of single-point benchmarking and longitudinal benchmarking for inter-school educational evaluation. METHODS We carried out a mixed, longitudinal, cross-sectional study using data from 24 annual measurement moments (4 tests x 6 year groups) over 4 years for 4

  3. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  4. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  5. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  6. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    Science.gov (United States)

    Chen, Hsing-Ta; Cohen, Guy; Reichman, David R.

    2017-02-01

    In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  8. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    Science.gov (United States)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA; it is a field dataset recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  9. Preliminary comparison of different reduction methods of graphene oxide

    Indian Academy of Sciences (India)

    Yu Shang; Dong Zhang; Yanyun Liu; Chao Guo

    2015-02-01

    The reduction of graphene oxide (GO) is a promising route to bulk production of graphene-based sheets. Different reduction processes result in reduced graphene oxide (RGO) with different properties. In this paper, three reduction methods (chemical, thermal and electrochemical reduction) were compared on three aspects: morphology and structure, degree of reduction, and electrical conductivity, by means of scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, X-ray photoelectron spectroscopy (XPS) and four-point probe conductivity measurement. Understanding the different characteristics of different RGOs through this preliminary comparison is helpful in tailoring the characteristics of graphene materials for diverse applications and in developing a simple, green and efficient method for the mass production of graphene.

  10. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack of study can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for the purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on agreed test cases and/or to use controlled experiments for validation. Such inter-code comparisons can propel discussions on how to improve code performance. A benchmark exercise was initialized in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones.
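
    The purely thermal 1D validation mentioned above rests on classical phase-change solutions of the Neumann/Lunardini type. As a hedged reminder, in the one-phase Stefan problem (a simpler cousin of the benchmark's configuration, shown here for orientation only) the thaw front position X(t) obeys

      \[
      X(t) = 2\lambda\sqrt{\alpha t},
      \qquad
      \lambda\, e^{\lambda^{2}} \operatorname{erf}(\lambda)
      = \frac{c_p\,(T_s - T_f)}{L\sqrt{\pi}},
      \]

    where \alpha is the thermal diffusivity and c_p the specific heat of the thawed soil, L the latent heat of fusion, T_s the imposed surface temperature and T_f the freezing temperature.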

  11. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison phase results

    Science.gov (United States)

    Grenier, Christophe; Rühaak, Wolfram

    2016-04-01

    Climate change impacts in permafrost regions have received considerable attention recently due to the pronounced warming trends experienced in recent decades and which have been projected into the future. Large portions of these permafrost regions are characterized by surface water bodies (lakes, rivers) that interact with the surrounding permafrost, often generating taliks (unfrozen zones) within the permafrost that allow for hydrologic interactions between the surface water bodies and underlying aquifers, and thus influence the hydrologic response of a landscape to climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of such units (Kurylyk et al. 2014). However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty in verifying multi-dimensional results produced by numerical models. A benchmark exercise was initialized at the end of 2014. Participants convened from the USA, Canada and Europe, representing 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones (Kurylyk et al. 2014; Grenier et al. in prep.; Rühaak et al. 2015). They range from simpler, purely thermal 1D cases to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case databases at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented, focusing mainly on the inter-comparison of participant results for the coupled cases TH2 & TH3.

  12. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was a complete process, not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism.

  13. Progress in developing the ASPECT Mantle Convection Code - New Features, Benchmark Comparisons and Applications

    Science.gov (United States)

    Dannberg, Juliane; Bangerth, Wolfgang; Sobolev, Stephan

    2014-05-01

    Since there is no direct access to the deep Earth, numerical simulations are an indispensable tool for exploring processes in the Earth's mantle. Results of these models can be compared to surface observations and, combined with constraints from seismology and geochemistry, have provided insight into a broad range of geoscientific problems. In this contribution we present results obtained from a next-generation finite-element code called ASPECT (Advanced Solver for Problems in Earth's ConvecTion), which is especially suited for modeling thermo-chemical convection due to its use of many modern numerical techniques: fully adaptive meshes, accurate discretizations, a nonlinear artificial diffusion method to stabilize the advection equation, an efficient solution strategy based on a block triangular preconditioner utilizing an algebraic multigrid, parallelization of all of the steps above and finally its modular and easily extensible implementation. In particular, the latter features make it a very versatile tool, applicable also to lithosphere models. The equations are implemented in the form of the Anelastic Liquid Approximation with temperature, pressure, composition and strain rate dependent material properties, including associated non-linear solvers. We will compare computations with ASPECT to common benchmarks in the geodynamics community such as the Rayleigh-Taylor instability (van Keken et al., 1997) and demonstrate recently implemented features such as a melting model with temperature, pressure and composition dependent melt fraction and latent heat. Moreover, we elaborate on a number of features currently under development by the community such as free surfaces, porous flow and elasticity. In addition, we show examples of how ASPECT is applied to develop sophisticated simulations of typical geodynamic problems. These include 3D models of thermo-chemical plumes incorporating phase transitions (including melting) with the accompanying density changes.

  14. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), or, in the Indonesian term, holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: the process of measuring and comparing an organization's business processes to obtain information that can help the organization improve its performance.

  15. Comparison of different numerical models using a two-dimensional density-driven benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Schneider, A.; Yang, J.; Gaj, M.; Graf, T.

    2013-12-01

    The physical experiment of Stoeckl and Houben (2012)* was taken as a benchmark to compare the results of calculations by several finite-volume and finite-element programs. In the experiment, an acrylic glass box was used to simulate a cross section of an infinite strip island. Degassed salt water (density 1021 kg m-3) was injected, saturating the sand from bottom to top. Fluorescent tracer dyes (uranine, eosine and indigotine) were used to mark infiltrating fresh water (density 997 kg m-3) from the top. While freshwater constantly infiltrated, saltwater was displaced and a freshwater lens started to develop until reaching equilibrium. The experiment was recorded and analyzed in fast-motion mode. The numerical groundwater flow models used for comparison are Feflow, Spring, OpenGeoSys, d3f and HydroGeoSphere. All programs are capable of solving the partial differential equations of coupled flow and transport. To ensure the highest level of comparability, the setups are defined as similarly as possible: identical temporal and spatial resolutions are applied to all models (triangular grid with 14,432 elements and constant time steps of 8.64 s); furthermore, the same boundary conditions and parameters are used; finally, the output of each model is converted into the same format and post-processed in the open-source program ParaView. Transient as well as steady-state flow fields and concentration distributions are compared. The capabilities of the different models are described, showing differences, limitations and advantages. The results show that all models are capable of representing the benchmark to a high degree. Still, differences are observed even when keeping the models as similar as possible. Some deviations may be explained by omitted processes, which cannot be represented in certain models, whereas other deviations may be explained by program-specific differences in solving the partial differential equations. * Stoeckl, L., Houben, G. (2012): Flow dynamics and age stratification

  16. Preliminary Experiments with XKaapi on Intel Xeon Phi Coprocessor

    OpenAIRE

    Ferreira Lima, Joao Vicente; Broquedis, Francois; Gautier, Thierry; Raffin, Bruno

    2013-01-01

    This paper presents preliminary performance comparisons of parallel applications developed natively for the Intel Xeon Phi accelerator using three different parallel programming environments and their associated runtime systems. We compare Intel OpenMP, Intel CilkPlus and XKaapi on the same benchmark suite, and we provide comparisons between an Intel Xeon Phi coprocessor and a Sandy Bridge Xeon-based machine. Our benchmark suite is composed of three computing k...

  17. Benchmarking East Tennessee's economic capacity

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  18. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  19. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar or identical results irrespective of the modeling method that was used. To assess the role of the modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of the results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by the formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order.

  20. Comparison of Standard Light Water Reactor Cross-Section Libraries using the United States Nuclear Regulatory Commission Boiling Water Reactor Benchmark Problem

    Directory of Open Access Journals (Sweden)

    Kulesza Joel A.

    2016-01-01

    This paper describes a comparison of contemporary and historical light water reactor shielding and pressure vessel dosimetry cross-section libraries for a boiling water reactor calculational benchmark problem. The calculational benchmark problem was developed at Brookhaven National Laboratory at the request of the U.S. Nuclear Regulatory Commission. The benchmark problem was originally evaluated by Brookhaven National Laboratory using the Oak Ridge National Laboratory discrete ordinates code DORT and the BUGLE-93 cross-section library. In this paper, the Westinghouse RAPTOR-M3G three-dimensional discrete ordinates code was used. A variety of cross-section libraries were used with RAPTOR-M3G, including the BUGLE-93, BUGLE-96, and BUGLE-B7 cross-section libraries developed at Oak Ridge National Laboratory and ALPAN-VII.0 developed at Westinghouse. In comparing the calculated fast reaction rates in the pressure vessel capsule using the four aforementioned cross-section libraries, for six dosimetry reaction rates, a maximum relative difference of 8% was observed. As such, it is concluded that the results calculated by RAPTOR-M3G are consistent with the benchmark, and further that the different vintage BUGLE cross-section libraries investigated are largely self-consistent.
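
    The "maximum relative difference" figure of merit quoted above is simple to compute; the sketch below evaluates it on invented placeholder reaction rates (one entry per cross-section library, one column per dosimetry reaction), not the RAPTOR-M3G results.

      # Hedged sketch: max relative difference across libraries vs. a reference.
      rates = {
          "BUGLE-93": [1.00, 2.10, 0.52, 3.40],   # placeholder reaction rates
          "BUGLE-96": [1.02, 2.07, 0.54, 3.38],
          "BUGLE-B7": [1.05, 2.15, 0.53, 3.52],
      }
      reference = rates["BUGLE-93"]
      max_rel_diff = max(
          abs(value - ref) / ref
          for library in rates.values()
          for value, ref in zip(library, reference)
      )
      print(f"maximum relative difference: {max_rel_diff:.1%}")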

  2. Optimisation methodologies and algorithms for research on catalysis employing high-throughput methods: comparison using the Selox benchmark.

    Science.gov (United States)

    Pereira, Sílvia Raquel Morais; Clerc, Frédéric; Farrusseng, David; van der Waala, Jan Cornelis; Maschmeyer, Thomas

    2007-02-01

    Selox is a catalytic benchmark for the selective CO oxidation reaction in the presence of H2, in the form of mathematical equations obtained via modelling of experimental results. The optimisation efficiencies of several global optimisation algorithms were studied using the Selox benchmark. Genetic Algorithms, Evolutionary Strategies, Simulated Annealing, Taboo Search and Genetic Algorithms hybridised with Knowledge Discovery procedures were the methods compared. A Design of Experiments (DoE) search strategy was also exemplified using this benchmark. The main differences regarding the applicability of DoE and global optimisation techniques are highlighted. Evolutionary Strategies, Genetic Algorithms using the sharing procedure, and the hybrid Genetic Algorithms proved to be the most successful in the benchmark optimisation.
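
    As a hedged illustration of the class of evolutionary methods compared above, the following Python sketch runs a minimal evolutionary search on a stand-in objective. Here selox_like is a hypothetical surrogate, not the published Selox equations, and all parameter values are illustrative:

        # Minimal evolutionary search on a stand-in objective (not the Selox benchmark).
        import random

        def selox_like(x):  # hypothetical smooth surrogate with one optimum
            return -sum((xi - 0.3) ** 2 for xi in x)

        def evolve(dim=4, pop_size=20, generations=50, sigma=0.1):
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=selox_like, reverse=True)   # rank by fitness
                parents = pop[: pop_size // 2]           # truncation selection
                children = [[min(1.0, max(0.0, g + random.gauss(0, sigma)))
                             for g in random.choice(parents)]
                            for _ in range(pop_size - len(parents))]
                pop = parents + children                 # elitist replacement
            return max(pop, key=selox_like)

        print(evolve())   # genes should converge toward 0.3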

  3. Cost benchmarking of railway projects in Europe – can it help to reduce costs?

    DEFF Research Database (Denmark)

    Trabo, Inara; Landex, Alex; Nielsen, Otto Anker

    This paper highlights the methodology of construction cost benchmarking of railway projects in the EU and its preliminary results. Benchmarking helps project managers learn from others, improve particular project areas, and reduce project costs. For railway projects, benchmarking is essential for the comparison of unit costs for major cost drivers (e.g. tunnels, bridges, etc.). This methodology was applied to the case study described in this paper, the first high-speed railway project in Denmark, "The New Line Copenhagen-Ringsted". The aim was to avoid cost overruns and even reduce final budget outcomes by looking for the best practices in the construction and implementation of other high-speed lines in Europe and learning from their experience. The paper presents benchmarking from nine railway projects that are comparable with the Copenhagen-Ringsted project. The results of this comparison provide...

  4. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  5. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional distance functions. The frontier is given by an explicit quantile, e.g. "the best 90 %". Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  6. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This exercise highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  7. Inter-Comparison of Suomi NPP CrIS Radiances with AIRS and IASI toward Infrared Hyperspectral Benchmark Radiance Measurements

    Science.gov (United States)

    Wang, L.; Han, Y.; Chen, Y.; Jin, X.; Tremblay, D. A.

    2013-12-01

    The Cross-track Infrared Sounder (CrIS) on the newly-launched Suomi National Polar-orbiting Partnership (SNPP) and future Joint Polar Satellite System (JPSS) is a Fourier transform spectrometer that provides soundings of the atmosphere with 1305 spectral channels over 3 wavelength ranges: LWIR (9.14 - 15.38 μm), MWIR (5.71 - 8.26 μm), and SWIR (3.92 - 4.64 μm). The SNPP CrIS, combined with the existing Atmospheric Infrared Sounder (AIRS) on NASA Aqua and the Infrared Atmospheric Sounding Interferometer (IASI) on Metop-A and -B, will accumulate decades of hyperspectral infrared measurements with high accuracy, which have potential for climate monitoring and model assessments. In this presentation, we will 1) evaluate radiance consistency among AIRS, IASI, and CrIS, and 2) thus demonstrate that the CrIS SDR data from SNPP and JPSS can serve as a long-term reference benchmark for inter-calibration and climate-related studies, just like AIRS and IASI. In the first part of the presentation, we will summarize the major post-launch calibration and validation activities for SNPP CrIS performed by the NOAA STAR CrIS sensor data record (SDR) team, including calibration parameter updates, instrument stability monitoring, and data processing quality assurance. Comprehensive assessments of the radiometric, spectral, and geometric calibration of the CrIS SDR will be presented. In addition, the preparation of CrIS SDR re-processing toward consistent Climate Data Records (CDRs) will be discussed. The purpose of this part is to provide a comprehensive overview of CrIS SDR data quality to the user community. In the second part, we will compare CrIS hyperspectral radiance measurements with those of AIRS and of IASI on Metop-A and -B to examine the spectral and radiometric consistency and differences among the three hyperspectral IR sounders. The SNPP CrIS, combined with AIRS and IASI, provides the first-ever inter-calibration opportunity because three hyperspectral IR sounders can observe the Earth and

  8. Preliminary comparison of monolithic and aperture optics for XRMF

    Energy Technology Data Exchange (ETDEWEB)

    Havrilla, G.J.; Worley, C.G.

    1997-08-01

    Comparisons between standard aperture optics and a custom-designed monolithic capillary x-ray optic for the Kevex Omicron are presented. The results demonstrate the feasibility of retrofitting an Omicron with a monolithic capillary. Increased flux is observed, especially at lower energies, which results in an increase in sensitivity and potentially an increase in spatial resolution. Alignment is a critical factor in achieving optimal performance of the monolithic capillary. Further improvements in flux output, spot size and overall sensitivity are expected with better alignment.

  9. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves. Included is a discussion of the workload involved in processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the cubed-sphere system, and results for these tests are also shown.

  10. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    Science.gov (United States)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message-passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin 2000.

  11. Developing Benchmarks for Solar Radio Bursts

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20000 MHz) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work to determine where it is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  12. Preliminary Comparison of the Response of LHC Tertiary Collimators to Proton and Ion Beam Impacts

    CERN Document Server

    Cauchi, M; Bertarelli, A; Carra, F; Cerutti, F; Lari, L; Mollicone, P; Sammut, N

    2013-01-01

    The CERN Large Hadron Collider is designed to bring into collision protons as well as heavy ions. Accidents involving impacts on collimators can happen for both species. The interaction of lead ions with matter differs from that of protons, making this an interesting new case to study, as it can result in different damage to the collimator. This paper presents a preliminary comparison of the response of collimators to proton and ion beam impacts.

  13. Preliminary comparisons between measurements and model calculations for the TMI venting of 85Kr

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, M.H.

    1980-08-01

    ARAC was on-line calculating hourly concentration values during the TMI-2 venting of 85Kr gas from June 28 to July 11, 1980. During this time, hourly isopleths of normalized instantaneous concentration were calculated and transmitted to the EPA in Middletown, PA. These isopleths were used to help locate the EPA and Penn State mobile air samplers, and for comparison with the EPA fixed 24-hr sampler measurements and the DOE helicopter measurements. This report summarizes preliminary comparisons for the EPA fixed samplers and the DOE helicopters.

  14. Analysis of the HTTR's benchmark problems and comparison between the HTTR and the FZJ code systems

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, Nozomu; Yamashita, Kiyonobu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment; Ohlig, Ursula; Brockmann, Hans

    1999-01-01

    The first Research Coordination Meeting for the Coordinated Research Program on the HTTR benchmark problems was held in August 1998. The results and calculation models of JAERI and Forschungszentrum Juelich GmbH (FZJ), obtained by diffusion calculation, were compared. Both results showed good agreement for the fully-loaded core, but the JAERI results were about 1% Δk higher during the fuel-loading state. To investigate the cause of the difference, the effects of the number of energy groups, neutron streaming from control rod insertion holes, and the cell model of the burnable poison (BP) were studied. We found that the differences caused by the number of energy groups and by neutron streaming were small. The effect of the BP cell model was evaluated by a sensitivity analysis of the dimensions of the BP cell. Improvements for each calculation model are proposed. (author)

  15. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
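
    For readers unfamiliar with the basic operation, the sketch below shows the simplest form of benchmarking adjustment: pro-rata scaling of monthly values so that each year agrees with its annual benchmark. This is only a minimal illustration with hypothetical data; the staged first-differencing approach recommended in the report is more involved and is not reproduced here:

        # Pro-rata benchmarking: scale each year's monthly data to the annual benchmark.
        def prorate(monthly, annual_benchmarks):
            out = []
            for year, bench in enumerate(annual_benchmarks):
                block = monthly[12 * year: 12 * (year + 1)]
                factor = bench / sum(block)          # ratio of benchmark to raw total
                out.extend(v * factor for v in block)
            return out

        monthly = [10.0] * 24                        # two years of hypothetical data
        print(prorate(monthly, [126.0, 114.0]))      # months rescaled to 10.5 and 9.5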

  16. Benchmark quantum-chemical calculations on a complete set of rotameric families of the DNA sugar-phosphate backbone and their comparison with modern density functional theory.

    Science.gov (United States)

    Mládek, Arnošt; Krepl, Miroslav; Svozil, Daniel; Cech, Petr; Otyepka, Michal; Banáš, Pavel; Zgarbová, Marie; Jurečka, Petr; Sponer, Jiří

    2013-05-21

    The DNA sugar-phosphate backbone has a substantial influence on the DNA structural dynamics. Structural biology and bioinformatics studies revealed that the DNA backbone in experimental structures samples a wide range of distinct conformational substates, known as rotameric DNA backbone conformational families. Their correct description is essential for methods used to model nucleic acids and is known to be the Achilles heel of force field computations. In this study we report the benchmark database of MP2 calculations extrapolated to the complete basis set of atomic orbitals with aug-cc-pVTZ and aug-cc-pVQZ basis sets, MP2(T,Q), augmented by ΔCCSD(T)/aug-cc-pVDZ corrections. The calculations are performed in the gas phase as well as using a COSMO solvent model. This study includes a complete set of 18 established and biochemically most important families of DNA backbone conformations and several other salient conformations that we identified in experimental structures. We utilize an electronically sufficiently complete DNA sugar-phosphate-sugar (SPS) backbone model system truncated to prevent undesired intramolecular interactions. The calculations are then compared with other QM methods. The BLYP and TPSS functionals supplemented with Grimme's D3(BJ) dispersion term provide the best tradeoff between computational demands and accuracy and can be recommended for preliminary conformational searches as well as calculations on large model systems. Among the tested methods, the best agreement with the benchmark database has been obtained for the double-hybrid DSD-BLYP functional in combination with a quadruple-ζ basis set, which is, however, computationally very demanding. The new hybrid density functionals PW6B95-D3 and MPW1B95-D3 yield outstanding results and even slightly outperform the computationally more demanding PWPB95 double-hybrid functional. B3LYP-D3 is somewhat less accurate compared to the other hybrids. Extrapolated MP2(D,T) calculations are not as

  17. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  18. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2015-04-01

    This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.

  19. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using EnergyPlus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
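
    As an illustration of the kind of metaheuristic being compared (and not the paper's hybrid CMA-ES/HDE or PSO/HJ implementations), a plain particle swarm optimiser applied to the standard Rosenbrock benchmark function might look like the sketch below; all parameter values are conventional defaults, not the paper's settings:

        # Plain particle swarm optimisation on the Rosenbrock benchmark function.
        import random

        def rosenbrock(x):
            return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
                       for i in range(len(x) - 1))

        def pso(dim=2, n=30, iters=200, w=0.7, c1=1.4, c2=1.4):
            pos = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
            vel = [[0.0] * dim for _ in range(n)]
            pbest = [p[:] for p in pos]                  # personal bests
            gbest = min(pbest, key=rosenbrock)           # global best
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if rosenbrock(pos[i]) < rosenbrock(pbest[i]):
                        pbest[i] = pos[i][:]
                gbest = min(pbest, key=rosenbrock)
            return gbest, rosenbrock(gbest)

        print(pso())   # should approach the optimum at (1, 1)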

  20. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
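
    A minimal sketch of the consensus-scoring idea follows, assuming hypothetical engine outputs as dictionaries mapping spectrum identifiers to best peptide matches; the actual tools exchange richer, scored results:

        # Accept a peptide-spectrum match only when >= min_agree engines agree.
        def consensus(*engine_results, min_agree=2):
            accepted = {}
            spectra = set().union(*engine_results)       # all spectrum ids seen
            for s in spectra:
                calls = [r[s] for r in engine_results if s in r]
                for pep in set(calls):
                    if calls.count(pep) >= min_agree:
                        accepted[s] = pep
            return accepted

        mascot = {"s1": "PEPTIDE", "s2": "SEQVENCE"}     # hypothetical outputs
        xtandem = {"s1": "PEPTIDE", "s2": "OTHERPEP"}
        print(consensus(mascot, xtandem))                # only s1 survives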

  1. Comparison of Two Approaches for Nuclear Data Uncertainty Propagation in MCNPX for Selected Fast Spectrum Critical Benchmarks

    Science.gov (United States)

    Zhu, T.; Rochman, D.; Vasiliev, A.; Ferroukhi, H.; Wieselquist, W.; Pautz, A.

    2014-04-01

    Nuclear data uncertainty propagation based on stochastic sampling (SS) is becoming more attractive while leveraging modern computer power. Two variants of the SS approach are compared in this paper. The Total Monte Carlo (TMC) method by the Nuclear Research and Consultancy Group (NRG) generates perturbed ENDF-6-formatted nuclear data by varying nuclear reaction model parameters. At Paul Scherrer Institute (PSI) the Nuclear data Uncertainty Stochastic Sampling (NUSS) system generates perturbed ACE-formatted nuclear data files by applying multigroup nuclear data covariances onto pointwise ACE-formatted nuclear data. Uncertainties of 239Pu and 235U from ENDF/B-VII.1, ZZ-SCALE6/COVA-44G and TENDL covariance libraries are considered in NUSS and propagated in MCNPX calculations for well-studied Jezebel and Godiva fast spectrum critical benchmarks. The corresponding uncertainty results obtained by TMC are compared with NUSS results and the deterministic Sensitivity/Uncertainty method of TSUNAMI-3D from SCALE6 package is also applied to serve as a separate verification. The discrepancies in the propagated 239Pu and 235U uncertainties due to method and covariance differences are discussed.
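
    The essence of the stochastic sampling approach can be sketched with a toy model: draw perturbed nuclear-data multipliers from an assumed covariance, evaluate the response for each sample, and take the spread of the ensemble as the propagated uncertainty. The response surface and uncertainty values below are entirely hypothetical; the real workflow perturbs full ENDF/ACE libraries and reruns MCNPX for each sample:

        # Toy stochastic-sampling propagation: the spread of sampled responses
        # approximates the propagated nuclear-data uncertainty.
        import random, statistics

        def keff_model(sigma_f, sigma_c):        # hypothetical response surface
            return 1.0 + 0.8 * (sigma_f - 1.0) - 0.3 * (sigma_c - 1.0)

        samples = []
        for _ in range(500):
            sf = random.gauss(1.0, 0.01)         # assumed 1% fission-data uncertainty
            sc = random.gauss(1.0, 0.02)         # assumed 2% capture-data uncertainty
            samples.append(keff_model(sf, sc))

        print(statistics.mean(samples), statistics.stdev(samples))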

  2. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  3. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  4. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services online. Instead, the more relevant question is how well the electronic services offered by a particular organization perform in comparison with

  5. Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

    2011-03-02

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU

  6. Connecting the Different Signatures of Interstellar Dust at Low Redshift: A Benchmark for Comparison to the Distant Universe

    Science.gov (United States)

    Kulkarni, Varsha

    Interstellar dust has a profound effect on a number of physical processes in galaxies, and also affects their appearance. Naturally, the properties of dust grains in galaxies are important parameters in constraining galaxy evolution models. Absorption lines arising in galaxies along sightlines toward luminous background quasars provide an invaluable tool to study the gas and dust in and around galaxies. Recent studies of quasar absorption systems (QASs) and other high-z galaxies suggest that the dust in these distant galaxies may be quite different from that in the Milky Way (MW), e.g., richer in silicates and poorer in carbonaceous materials. In order to understand whether the high-z dust really differs from the low-z dust, it is essential to establish a benchmark database of the dust properties of the MW and nearby galaxies. Moreover, in order to separate the effects of evolution and environmental variations, it is essential to study the dust in different parts of the galaxies. To do this, it is of utmost importance to measure the key dust absorption properties (e.g., the strengths of silicate and carbonaceous absorption features, extinction curve shapes, and element depletions) along the same set of sightlines. Ironically, such a reference set of local measurements does not currently exist. In the MW, measurements cannot be obtained for the same set of background stars, since stars bright enough in the UV for the 2175 Å bump measurements are not bright enough in the IR for measurements of the silicate 9.7 and 18 micron features. Suitable multi-wavelength dust absorption studies do not exist for nearby galaxies either. We propose to construct a homogeneously analyzed sample of dust absorption properties in the MW and local galaxies along sightlines to background active galactic nuclei (AGN). These AGN are bright enough in both UV and IR to allow reliable absorption measurements, and probe diverse regions in the galaxies. Our specific goals are: (1) We will

  7. Controlling for race/ethnicity: a comparison of California commercial health plans CAHPS scores to NCBD benchmarks

    Directory of Open Access Journals (Sweden)

    Lopez Rebeca A

    2010-01-01

    Full Text Available Abstract. Background: Because California has higher managed care penetration and the race/ethnicity of Californians differs from the rest of the United States, we tested the hypothesis that California's lower health plan Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey results are attributable to the state's racial/ethnic composition. Methods: California CAHPS survey responses for commercial health plans were compared to national responses for five selected measures: three global ratings of doctor, health plan and health care, and two composite scores regarding doctor communication and staff courtesy, respect, and helpfulness. We used the 2005 National CAHPS 3.0 Benchmarking Database to assess patient experiences of care. Multiple stepwise logistic regression was used to see if patient experience ratings based on CAHPS responses in California commercial health plans differed from all other states combined. Results: CAHPS patient experience responses in California were not significantly different from the rest of the nation after adjusting for age, general health rating, individual health plan, education, time in health plan, race/ethnicity, and gender. Both California and national patient experience scores varied by race/ethnicity. In both California and the rest of the nation, Blacks tended to be more satisfied, while Asians were less satisfied. Conclusions: California commercial health plan enrollees rate their experiences of care similarly to enrollees in the rest of the nation when seven different variables including race/ethnicity are considered. These findings support accounting for more than just age, gender and general health rating before comparing health plans from one state to another. Reporting on race/ethnicity disparities in member experiences of care could raise awareness and increase accountability for reducing these racial and ethnic disparities.

  8. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
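
    By way of a hedged illustration (not the authors' code or curated data sets), the predictive comparison can be reproduced in miniature with scikit-learn on a public benchmark dataset:

        # Cross-validated comparison of Random Forest vs. a linear SVM.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        svm = make_pipeline(StandardScaler(), LinearSVC(dual=False))
        for name, model in [("Random Forest", rf), ("Linear SVM", svm)]:
            scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
            print(name, round(scores.mean(), 3))   # mean AUC over 5 folds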

  9. Benchmark Calculations of Energetic Properties of Groups 4 and 6 Transition Metal Oxide Nanoclusters Including Comparison to Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Zongtang; Both, Johan; Li, Shenggang; Yue, Shuwen; Aprà, Edoardo; Keçeli, Murat; Wagner, Albert F.; Dixon, David A.

    2016-08-09

    The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1-4) and (MO3)n (M = Cr, Mo, W; n = 1-3) clusters have been benchmarked with 55 exchange-correlation DFT functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range-corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.
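
    For orientation, the normalized clustering energy of an (MO3)n cluster is conventionally defined per monomer unit as below; treat this as a standard definition, since the paper's exact sign and reference conventions are not reproduced here:

        \[ \mathrm{NCE}_n = \frac{n\,E(\mathrm{MO_3}) - E\left((\mathrm{MO_3})_n\right)}{n} \]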

  10. Comparison of Homogeneous and Heterogeneous CFD Fuel Models for Phase I of the IAEA CRP on HTR Uncertainties Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Su-Jong Yoon

    2014-04-01

    Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study was focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before these uncertainties are propagated in the subsequent coupled neutronics/thermal-fluids phases of the benchmark. In many of the previous studies for high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact were not modeled directly and an effective thermal conductivity was employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare the thermal characteristics. The nominal values of the input parameters are used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
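
    As a hedged illustration of the effective-thermal-conductivity idea behind the homogeneous model, the simplest volume-weighted (parallel) mixing rule is shown below; the correlations actually used for TRISO compacts are more sophisticated, and the packing fraction φ here is a generic symbol rather than a benchmark value:

        \[ k_{\mathrm{eff}} = \phi\, k_{\mathrm{TRISO}} + (1-\phi)\, k_{\mathrm{matrix}} \]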

  11. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  12. Benchmarking of RESRAD-OFFSITE: transition from RESRAD (onsite) to RESRAD-OFFSITE and comparison of the RESRAD-OFFSITE predictions with peer codes.

    Energy Technology Data Exchange (ETDEWEB)

    Yu, C.; Gnanapragasam, E.; Cheng, J.-J.; Biwer, B.

    2006-05-22

    The main purpose of this report is to document the benchmarking results and verification of the RESRAD-OFFSITE code as part of the quality assurance requirements of the RESRAD development program. This documentation will enable the U.S. Department of Energy (DOE) and its contractors, and the U.S. Nuclear Regulatory Commission (NRC) and its licensees and other stakeholders to use the quality-assured version of the code to perform dose analysis in a risk-informed and technically defensible manner to demonstrate compliance with the NRC's License Termination Rule, Title 10, Part 20, Subpart E, of the Code of Federal Regulations (10 CFR Part 20, Subpart E); DOE's 10 CFR Part 834, Order 5400.5, "Radiation Protection of the Public and the Environment"; and other Federal and State regulatory requirements as appropriate. The other purpose of this report is to document the differences and similarities between the RESRAD (onsite) and RESRAD-OFFSITE codes so that users (dose analysts and risk assessors) can make a smooth transition from use of the RESRAD (onsite) code to use of the RESRAD-OFFSITE code for performing both onsite and offsite dose analyses. The evolution of the RESRAD-OFFSITE code from the RESRAD (onsite) code is described in Chapter 1 to help the dose analyst and risk assessor make a smooth conceptual transition from the use of one code to that of the other. Chapter 2 provides a comparison of the predictions of RESRAD (onsite) and RESRAD-OFFSITE for an onsite exposure scenario. Chapter 3 documents the results of benchmarking RESRAD-OFFSITE's atmospheric transport and dispersion submodel against the U.S. Environmental Protection Agency's (EPA's) CAP88-PC (Clean Air Act Assessment Package-1988) and ISCLT3 (Industrial Source Complex-Long Term) models. Chapter 4 documents the comparison results of the predictions of the RESRAD-OFFSITE code and its submodels with the predictions of peer models. This report was prepared

  13. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme considers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  14. Benchmarking in ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of IT benchmarking in a wider context and the scope of benchmarking in general. I specify benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is treated rather as a first step toward benchmarking's contribution to the company. IT benchmark...

  15. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    Energy Technology Data Exchange (ETDEWEB)

    Gissi, Andrea [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Lombardo, Anna; Roncaglioni, Alessandra [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Gadaleta, Domenico [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Mangiatordi, Giuseppe Felice; Nicolotti, Orazio [Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Benfenati, Emilio, E-mail: emilio.benfenati@marionegri.it [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy)

    2015-02-15

    regression (R² = 0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemical safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals outside the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.

  16. DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. The benchmarking covers cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and to provide suggestions for further improvement of the instruction set architecture. The scope of the thesis is limited to benchmarking the processor based on assembly coding only. The...

  17. SciDB versus Spark: A Preliminary Comparison Based on an Earth Science Use Case

    Science.gov (United States)

    Clune, T.; Kuo, K. S.; Doan, K.; Oloso, A.

    2015-12-01

    We compare two Big Data technologies, SciDB and Spark, for performance, usability, and extensibility, when applied to a representative Earth science use case. SciDB is a new-generation parallel distributed database management system (DBMS) based on the array data model that is capable of handling multidimensional arrays efficiently but requires lengthy data ingest prior to analysis, whereas Spark is a fast and general engine for large-scale data processing that can immediately process raw data files and thereby avoid the ingest process. Once data have been ingested, SciDB is very efficient in database operations such as subsetting. Spark, on the other hand, provides greater flexibility by supporting a wide variety of high-level tools, including DBMSs. For the performance aspect of this preliminary comparison, we configure Spark to operate directly on text or binary data files and thereby limit the need for additional tools. Arguably, a more appropriate comparison would involve exploring other configurations of Spark which exploit supported high-level tools, but that is beyond our current resources. To make the comparison as "fair" as possible, we export the arrays produced by SciDB into text files (or convert them to binary files) for intake by Spark, and thereby avoid any additional file-processing penalties. The Earth science use case selected for this comparison is the identification and tracking of snowstorms in the NASA Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis data. The identification portion of the use case is to flag all grid cells of the MERRA high-resolution hourly data that satisfy our criteria for a snowstorm, whereas the tracking portion connects flagged cells adjacent in time and space to form a snowstorm episode. We will report the results of our comparisons at this presentation.
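
    The tracking step described above amounts to connected-component labelling over a (time, lat, lon) grid. A hedged sketch with NumPy/SciPy follows, using a hypothetical boolean flags array standing in for the thresholded MERRA fields:

        # Connect flagged cells adjacent in time and space into episodes.
        import numpy as np
        from scipy import ndimage

        flags = np.zeros((4, 5, 5), dtype=bool)      # (time, lat, lon)
        flags[0:2, 1:3, 1:3] = True                  # one storm persisting two hours
        flags[3, 4, 4] = True                        # a separate, later event

        structure = np.ones((3, 3, 3), dtype=bool)   # adjacency in time and space
        episodes, count = ndimage.label(flags, structure=structure)
        print(count)                                 # -> 2 distinct episodes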

  18. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks are an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  19. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master's thesis describes the benchmarking of a DSP processor. Benchmarking means measuring performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  20. Employment impacts of selected solar and conventional energy systems: a framework for comparisons and preliminary findings

    Energy Technology Data Exchange (ETDEWEB)

    Smeltzer, K.K.

    1980-01-01

    Preliminary comprehensive analyses of the quantitative and qualitative employment effects of selected solar and conventional energy systems are presented. The report proposes a framework for analyzing the direct, indirect, induced, displacement, disposable-income, and qualitative employment effects of alternative energy systems. The analyses examine current research findings on these effects for a variety of solar and conventional energy sources and compare expected employment impacts. In general, solar energy systems have higher direct and indirect employment requirements than do conventional energy systems. In addition, employment displaced from conventional sources and employment effects due to changes in consumers' disposable income are highly significant variables in net employment comparisons. Analyses of the size and location of projected energy developments suggest that dispersed solar energy systems have a more beneficial impact on host communities than do large conventional facilities, regardless of the relative magnitude of employment per unit of energy output.

  1. A Preliminary Comparison Between Landsat-8 OLI and Sentinel-2 MSI for Geological Applications

    Science.gov (United States)

    Nikolakopoulos, Konstantinos G.; Papoulis, Dimitrios

    2016-08-01

    A preliminary comparison of multispectral data from Landsat 8 OLI with the respective data from Sentinel-2 for geological applications is performed and the results are presented in this study. Classical Landsat Thematic Mapper band ratios sensitive to minerals (TM5/7, TM5/4, TM3/1) or to hydrothermal anomalies (TM5/7, TM3/1, TM4/3) were used in synergy with digital processing techniques such as Principal Component Analysis. Data fusion techniques were also applied in order to improve the spatial resolution of the data. To assess the performance of these band-ratio images, different quantitative criteria are used, such as the standard deviation of the image and the coefficient of variation of each pixel.
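
    A minimal band-ratio computation of the kind described can be sketched with NumPy as follows; the band arrays and their values are hypothetical stand-ins for calibrated reflectance bands loaded elsewhere (e.g. with rasterio):

        # Band ratio and per-image coefficient of variation on synthetic bands.
        import numpy as np

        rng = np.random.default_rng(0)
        b3, b5, b7 = (rng.uniform(0.05, 0.6, (100, 100)) for _ in range(3))

        ratio_57 = b5 / b7                        # clay/hydroxyl-sensitive TM5/7 ratio
        cv = ratio_57.std() / ratio_57.mean()     # coefficient of variation
        print(ratio_57.mean(), cv)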

  2. Comparison of stroke infarction between CT perfusion and diffusion weighted imaging: preliminary results

    Science.gov (United States)

    Abd. Rahni, Ashrani Aizzuddin; Arka, Israna Hossain; Chellappan, Kalaivani; Mukari, Shahizon Azura; Law, Zhe Kang; Sahathevan, Ramesh

    2016-03-01

    In this paper we present preliminary results of a comparison of automatic segmentations of the infarct core obtained from CT perfusion (based on the time-to-peak parameter) and from diffusion-weighted imaging (DWI). For each patient, the two imaging volumes were automatically co-registered to a common frame of reference based on an acquired CT angiography image. The accuracy of image registration is measured by the overlap of the segmented brain from both images (CT perfusion and DWI), measured within their common field of view. Due to the limitations of the study, DWI was acquired as a follow-up scan up to a week after the initial CT-based imaging. Nevertheless, we found significant overlap of the segmented brain (Jaccard indices of approximately 0.8), and the percentages of infarcted brain tissue from the two modalities were still fairly highly correlated (correlation coefficient of approximately 0.9). The results are promising, with more data needed in future work for clinical inference.
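
    The overlap measure quoted above is straightforward to compute; a minimal sketch with toy binary masks standing in for the CTP- and DWI-derived segmentations:

        # Jaccard index (intersection over union) of two binary masks.
        import numpy as np

        def jaccard(a, b):
            a, b = a.astype(bool), b.astype(bool)
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union if union else 1.0

        ctp = np.zeros((10, 10)); ctp[2:7, 2:7] = 1   # toy CTP infarct mask
        dwi = np.zeros((10, 10)); dwi[3:8, 3:8] = 1   # toy DWI infarct mask
        print(jaccard(ctp, dwi))                      # ~0.47 for this toy example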

  3. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  4. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined.

  5. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches, and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support, and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  6. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize … NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license…

  7. A Comparison of Evidence-Based Estimates and Empirical Benchmarks of the Appropriate Rate of Use of Radiation Therapy in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Division of Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Kong, Weidong; Brundage, Michael; Hanna, Timothy P.; Zhang-Salomons, Jina; McLaughlin, Pierre-Yves [Division of Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Tyldesley, Scott [Vancouver Centre, British Columbia Cancer Agency, Vancouver, British Columbia (Canada)

    2015-04-01

    Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT_1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT_LIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT_1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT_LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT_LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT_LIFETIME for 5 selected sites differed widely. RT_LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario.

  8. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking.

    Science.gov (United States)

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    Acute Physiology and Chronic Health Evaluation IVa had better accuracy within patient subgroups and for specific admission diagnoses. Acute Physiology and Chronic Health Evaluation IVa offered the best discrimination and calibration on a large common dataset and excluded fewer patients than Mortality Probability Admission Model III or ICU Outcomes Model/National Quality Forum. The choice of ICU performance benchmarks should be based on a comparison of model accuracy using data for identical patients.
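
    For readers unfamiliar with the discrimination/calibration vocabulary used here, the sketch below computes the standard measures on fabricated predictions: area under the ROC curve for discrimination (via scikit-learn) and a crude observed-to-predicted death ratio for calibration. The data and numbers are invented assumptions, not the study's models.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      # Hypothetical mortality outcomes and two models' predicted probabilities.
      y = rng.integers(0, 2, 1000)
      p_model_a = np.clip(0.3 * y + rng.uniform(0.0, 0.7, 1000), 0.0, 1.0)  # informative
      p_model_b = rng.uniform(0.0, 1.0, 1000)                               # uninformative

      # Discrimination: area under the ROC curve (higher is better).
      print(f"AUC A = {roc_auc_score(y, p_model_a):.2f}, AUC B = {roc_auc_score(y, p_model_b):.2f}")
      # Calibration (crude): observed deaths / predicted deaths, ideally ~1.
      print(f"O/E ratio, model A = {y.sum() / p_model_a.sum():.2f}")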

  9. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  10. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  11. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, Internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore…

  12. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  13. Handleiding benchmark VO

    NARCIS (Netherlands)

Blank, J.L.T.

    2008-01-01

    Research report, IPSE Studies, 25 November 2008, by J.L.T. Blank. A guide to reading the …

  14. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult…

  15. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up and to improve the municipalities' case processing. This working paper discusses methods for benchmarking…

  16. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views … are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested … interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained…

  17. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  18. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Benchmarking microprocessors provides a way for consumers to evaluate processor performance, using either synthetic or real-world applications. A number of benchmarks exist today to assist consumers in evaluating the vast number of microprocessors available on the market. In this paper we investigate the various benchmarks available for evaluating microprocessors for embedded automotive applications. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench, and MiBench. A comparison of the existing benchmarks is given, based on relevant characteristics of automotive applications, leading to a recommendation for benchmarking processors for automotive applications.
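
    All of the synthetic benchmarks surveyed (Whetstone, Dhrystone, and the like) reduce to timing a fixed workload and reporting a rate; the toy Python harness below illustrates that pattern. The kernel is an arbitrary integer loop of our own, not any of the named benchmarks.

      import time

      def synthetic_kernel(iterations):
          """Toy integer workload standing in for a Dhrystone-style loop."""
          acc = 0
          for i in range(iterations):
              acc = (acc + i * 31) % 65_521
          return acc

      def benchmark(iterations=1_000_000, repeats=5):
          """Best-of-N wall-clock timing, reported as iterations per second."""
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              synthetic_kernel(iterations)
              best = min(best, time.perf_counter() - t0)
          return iterations / best

      print(f"{benchmark():,.0f} iterations/s")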

  19. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  20. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging: the chosen benchmark should have similar data collection and presentation methods, and differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC Center for Infection Control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
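
    Of the risk-adjustment methods listed, indirect standardization is the simplest to illustrate: benchmark stratum-specific rates are applied to the local case mix to obtain expected infections, and the observed-to-expected ratio is the standardized infection ratio (SIR). All rates and counts below are invented for illustration.

      # Indirect standardization sketch; every number here is hypothetical.
      benchmark_rates = {"ICU": 0.050, "surgical": 0.020, "medical": 0.010}  # infections/patient
      local_patients = {"ICU": 200, "surgical": 500, "medical": 800}
      observed_infections = 34

      expected = sum(benchmark_rates[s] * n for s, n in local_patients.items())
      sir = observed_infections / expected  # standardized infection ratio
      print(f"expected = {expected:.1f}, SIR = {sir:.2f}")  # SIR > 1: worse than benchmark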

  1. Preliminary Comparison Between Nuclear-Electric and Solar-Electric Propulsion Systems for Future Mars Missions

    Science.gov (United States)

    Koppel, Christophe R.; Valentian, Dominique; Latham, Paul; Fearn, David; Bruno, Claudio; Nicolini, David; Roux, Jean-Pierre; Paganucci, F.; Saverdi, Massimo

    2004-02-01

    … of the perceived high Isp of ion engines or future MPD thrusters. The comparison, in fact, will show whether the two systems could have the same type of thruster or not, for automatic or for manned missions. The main drawback of SEP is due to photovoltaics and the total solar cell area required, driving spacecraft mass and orbiting costs up. In addition, the question of using superconducting coils holds also for SEP, while no space radiator is, in principle, needed. These and other factors will be considered in this comparison. The goal is to provide preliminary guidelines in evaluating SEP and NEP that may be useful to suggest closer scrutiny of promising concepts, or even potential solutions.

  2. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red…
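
    The first-tier screening rule described above is a simple threshold comparison; the sketch below applies it to invented concentrations and benchmark values (the chemical names and numbers are placeholders, not values from the report).

      # Tier-1 screening sketch: retain a chemical as a contaminant of potential
      # concern (COPC) when its measured concentration exceeds its benchmark.
      benchmarks_mg_per_L = {"cadmium": 0.005, "zinc": 0.12, "PCB-1254": 0.0002}
      measured_mg_per_L = {"cadmium": 0.009, "zinc": 0.04, "PCB-1254": 0.0007}

      copcs = [chem for chem, conc in measured_mg_per_L.items()
               if conc > benchmarks_mg_per_L[chem]]
      print("Contaminants of potential concern:", copcs)  # ['cadmium', 'PCB-1254']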

  3. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  4. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings…

  5. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  6. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  8. Benchmarking of the Gyrokinetic Microstability Codes GYRO, GS2, and GEM

    Science.gov (United States)

    Bravenec, Ronald; Chen, Yang; Wan, Weigang; Parker, Scott; Candy, Jeff; Barnes, Michael; Howard, Nathan; Holland, Christopher; Wang, Eric

    2012-10-01

    The physics capabilities of modern gyrokinetic microstability codes are now so extensive that they cannot be verified fully for realistic tokamak plasmas using purely analytic approaches. Instead, verification (demonstrating that the codes correctly solve the gyrokinetic-Maxwell equations) must rely on benchmarking (comparing code results for identical plasmas and physics). Benchmarking exercises for a low-power DIII-D discharge at the mid-radius have been presented recently for the Eulerian codes GYRO and GS2 [R.V. Bravenec, J. Candy, M. Barnes, C. Holland, Phys. Plasmas 18, 122505 (2011)]. This work omitted ExB flow shear, but we include it here. We also present GYRO/GS2 comparisons for a high-power Alcator C-Mod discharge. To add further confidence to the verification exercises, we have recently added the particle-in-cell (PIC) code GEM to the efforts. We find good agreement of linear frequencies between GEM and GYRO/GS2 for the DIII-D plasma. We also present preliminary nonlinear comparisons. This benchmarking includes electromagnetic effects, plasma shaping, kinetic electrons and one impurity. In addition, we compare linear results among the three codes for the steep-gradient edge region of a DIII-D plasma between edge-localized modes.

  9. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
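
    The infrastructure computes its metrics with SPARQL queries over RDF-encoded annotations; the plain-Python sketch below reproduces the same precision/recall/F1 arithmetic on hypothetical (document, mutation) annotation pairs, leaving the RDF layer out.

      # Gold-standard vs. predicted mutation mentions; identifiers are hypothetical.
      gold = {("doc1", "p.V600E"), ("doc1", "c.35G>A"), ("doc2", "p.R273H")}
      predicted = {("doc1", "p.V600E"), ("doc2", "p.R273H"), ("doc2", "p.G12D")}

      tp = len(gold & predicted)               # true positives
      precision = tp / len(predicted)
      recall = tp / len(gold)
      f1 = 2 * precision * recall / (precision + recall)
      print(f"P = {precision:.2f}, R = {recall:.2f}, F1 = {f1:.2f}")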

  10. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected…

  11. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  12. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  13. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered … include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes…

  14. Hierarchical, multi-sensor based classification of daily life activities: comparison with state-of-the-art algorithms using a benchmark dataset.

    Science.gov (United States)

    Leutheuser, Heike; Schuldhaus, Dominik; Eskofier, Bjoern M

    2013-01-01

    Insufficient physical activity is the 4th leading risk factor for mortality. Methods for assessing the individual daily life activity (DLA) are of major interest in order to monitor the current health status and to provide feedback about the individual quality of life. The conventional assessment of DLAs with self-reports induces problems like reliability, validity, and sensitivity. The assessment of DLAs with small and light-weight wearable sensors (e.g. inertial measurement units) provides a reliable and objective method. State-of-the-art human physical activity classification systems differ in e.g. the number and kind of sensors, the performed activities, and the sampling rate. Hence, it is difficult to compare newly proposed classification algorithms to existing approaches in literature and no commonly used dataset exists. We generated a publicly available benchmark dataset for the classification of DLAs. Inertial data were recorded with four sensor nodes, each consisting of a triaxial accelerometer and a triaxial gyroscope, placed on wrist, hip, chest, and ankle. Further, we developed a novel, hierarchical, multi-sensor based classification system for the distinction of a large set of DLAs. Our hierarchical classification system reached an overall mean classification rate of 89.6% and was diligently compared to existing state-of-the-art algorithms using our benchmark dataset. For future research, the dataset can be used in the evaluation process of new classification algorithms and could speed up the process of getting the best performing and most appropriate DLA classification system.
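
    A hierarchical classifier of the kind described can be sketched with scikit-learn: a first-level model assigns a window of inertial features to a coarse activity group, and a per-group second-level model picks the specific activity. The random features, the two-level label scheme, and the choice of random forests are our illustrative assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 24))               # hypothetical windowed IMU features
      coarse = rng.integers(0, 2, 400)             # e.g. 0 = static, 1 = dynamic
      fine = 2 * coarse + rng.integers(0, 2, 400)  # e.g. sit/stand vs. walk/run

      # Level 1: coarse group; level 2: one classifier per coarse group.
      level1 = RandomForestClassifier(random_state=0).fit(X, coarse)
      level2 = {g: RandomForestClassifier(random_state=0).fit(X[coarse == g], fine[coarse == g])
                for g in np.unique(coarse)}

      def predict(window):
          group = level1.predict(window)[0]
          return level2[group].predict(window)[0]

      print(predict(X[:1]))  # fine-grained activity label for the first window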

  15. MCNP5 and GEANT4 comparisons for preliminary Fast Neutron Pencil Beam design at the University of Utah TRIGA system

    Science.gov (United States)

    Adjei, Christian Amevi

    The main objective of this thesis is twofold. The first objective was to develop a model for meaningful benchmarking of different versions of GEANT4 against an experimental set-up and MCNP5, pertaining to photon transport and interactions. The second objective was to develop a preliminary design of a Fast Neutron Pencil Beam (FNPB) facility applicable to the University of Utah research reactor (UUTR), using MCNP5 and GEANT4. The three GEANT4 code versions, GEANT4.9.4, GEANT4.9.3, and GEANT4.9.2, were compared to MCNP5 and to experimental measurements of gamma attenuation in air. The average gamma dose rate was measured in the laboratory experiment at various distances from a shielded cesium source using a Ludlum model 19 portable NaI detector. As expected, the gamma dose rate decreased with distance. All three GEANT4 code versions agreed well with both the experimental data and the MCNP5 simulation. Additionally, a simple GEANT4 and MCNP5 model was developed to compare the code agreement for neutron interactions in various materials. The preliminary FNPB design was developed using MCNP5; a semi-accurate model was developed using GEANT4 (because GEANT4 does not support reactor physics modeling, the reactor was represented as a surface neutron source, hence a semi-accurate model). Based on the MCNP5 model, the fast neutron flux in a sample holder of the FNPB is 6.52×10^7 n/cm^2·s, which is one order of magnitude lower than at the gigantic fast neutron pencil beam facilities existing elsewhere. The MCNP5 model-based neutron spectrum indicates that the maximum expected fast neutron flux is at a neutron energy of ~1 MeV. In addition, the MCNP5 model provided information on the gamma flux to be expected in this preliminary FNPB design; specifically, in the sample holder the gamma flux is expected to be around 10^8 γ/cm^2·s, delivering a gamma dose of 4.54×10^3 rem/hr. This value is one to two orders of magnitude below the gamma…

  16. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  17. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  18. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  19. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  20. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry threads library standard. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. Incorporating threads benchmarks into SKaMPI: SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it…

  1. Benchmark Database on Isolated Small Peptides Containing an Aromatic Side Chain: Comparison Between Wave Function and Density Functional Theory Methods and Empirical Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, Haydee; Pluhackova, Kristyna; Pitonak, Michal; Rezac, Jan; Hobza, Pavel

    2008-03-13

    A detailed quantum chemical study of five peptides (WG, WGG, FGG, GGF and GFA) containing the residues phenylalanyl (F), glycyl (G), tryptophyl (W) and alanyl (A)—where F and W are of aromatic character—is presented. When investigating isolated small peptides, the dispersion interaction is the dominant attractive force in the peptide backbone–aromatic side chain intramolecular interaction. Consequently, an accurate theoretical study of these systems requires the use of a methodology that properly covers the London dispersion forces. For this reason we have assessed the performance of the MP2, SCS-MP2, MP3, TPSS-D, PBE-D, M06-2X, BH&H, TPSS, B3LYP and tight-binding DFT-D methods and of the ff99 empirical force field against CCSD(T)/complete basis set (CBS) limit benchmark data. All the DFT techniques with a '-D' suffix have been augmented by an empirical dispersion energy, while the M06-2X functional was parameterized to cover the London dispersion energy. For the systems studied here we have concluded that the use of the ff99 force field is not recommended, mainly due to problems concerning the assignment of reliable atomic charges. Tight-binding DFT-D is efficient as a screening tool providing reliable geometries. Among the DFT functionals, the M06-2X and TPSS-D show the best performance, which is explained by the fact that both procedures cover the dispersion energy. The B3LYP and TPSS functionals—not covering this energy—fail systematically. Both electronic energies and geometries obtained by means of the wave-function theory methods compare satisfactorily with the CCSD(T)/CBS benchmark data.

  2. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions … the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible…

  3. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area …

  4. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic … controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based … and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social…

  5. Comparison of Brain Activity during Drawing and Clay Sculpting: A Preliminary qEEG Study

    Science.gov (United States)

    Kruk, Kerry A.; Aravich, Paul F.; Deaver, Sarah P.; deBeus, Roger

    2014-01-01

    A preliminary experimental study examined brain wave frequency patterns of female participants (N = 14) engaged in two different art making conditions: clay sculpting and drawing. After controlling for nonspecific effects of movement, quantitative electroencephalographic (qEEG) recordings were made of the bilateral medial frontal cortex and…

  6. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation, with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element, and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2 the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  7. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    Science.gov (United States)

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  8. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — is reported in the presentation and paper, with updated results available at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random…

  9. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately we did not have access to the database. Data from the members of the SCOR model, in the form of benchmarked performance data, may exist, but are nonetheless…

  10. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  11. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  12. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and the determinants of competitiveness, determine what performance measures are most critical in determining the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.
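
    One econometric benchmarking device consistent with this discussion is to regress a profitability ratio on observable firm characteristics and read each firm's residual as performance above or below its expected (benchmark) level. The numpy sketch below does this with ordinary least squares on fabricated data; the variables and coefficients are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      log_size = rng.normal(10.0, 1.0, n)          # hypothetical firm characteristics
      market_share = rng.uniform(0.0, 0.3, n)
      roa = 0.02 * log_size + 0.10 * market_share + rng.normal(0.0, 0.02, n)

      # OLS benchmark: residual = actual ROA minus the ROA expected for the
      # firm's size and market power.
      X = np.column_stack([np.ones(n), log_size, market_share])
      beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
      residuals = roa - X @ beta
      print("Index of best performer relative to benchmark:", int(np.argmax(residuals)))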

  13. Comparison of Simulations of Preliminary Breakdown to Observations from the Huntsville Alabama Marx Meter Array

    Science.gov (United States)

    Carlson, B. E.; Liang, C.; Bitzer, P. M.; Christian, H. J., Jr.

    2014-12-01

    Preliminary breakdown pulses in electric field change records are thought to be produced by sudden extensions of the lightning channel. We present detailed time domain electrodynamic simulations of extension of an existing lightning leader channel due to heating processes and compare the results to observations of a natural cloud-to-ground lightning discharge made with the Huntsville Alabama Marx Meter Array (HAMMA) at a variety of locations near the discharge. Varying the geometry and parameters of the simulations in an attempt to reproduce the data allows us to constrain the directionality and physical properties of the channel. We simulate a variety of leader step phenomena, including uniform heating over the entire step, connection with a space leader, and dart leader propagation onto a preconditioned channel. Results support the notion of impulsive channel extension as the mechanism for preliminary breakdown and shed light on the mechanics of the process.

  14. Monte Carlo comparison of preliminary methods for ordering multiple genetic loci.

    OpenAIRE

    Olson, J.M.; Boehnke, M

    1990-01-01

    We carried out a simulation study to compare the power of eight methods for preliminary ordering of multiple genetic loci. Using linkage groups of six loci and a simple pedigree structure, we considered the effects on method performance of locus informativity, interlocus spacing, total distance along the chromosome, and sample size. Method performance was assessed using the mean rank of the true order, the proportion of replicates in which the true order was the best order, and the number of ...

  15. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    OpenAIRE

    van Lent Wineke AM; de Beer Relinde D; van Harten Wim H

    2010-01-01

    Background: Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods: Three independent international benchmarking studies on operations managem…

  16. Global two-channel AVHRR aerosol climatology: effects of stratospheric aerosols and preliminary comparisons with MODIS and MISR retrievals

    Energy Technology Data Exchange (ETDEWEB)

    Geogdzhayev, Igor V. [Department of Applied Physics and Applied Mathematics, Columbia University, 2880 Broadway, New York, NY 10025 (United States); NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Mishchenko, Michael I. [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States)]. E-mail: crmim@giss.nasa.gov; Liu Li [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Department of Earth and Environmental Sciences, Columbia University, 2880 Broadway, New York, NY 10025 (United States); Remer, Lorraine [NASA Goddard Space Flight Center, Code 913, Greenbelt, MD 20771 (United States)

    2004-10-15

    We present an update on the status of the global climatology of the aerosol column optical thickness and Angstrom exponent derived from channel-1 and -2 radiances of the Advanced Very High Resolution Radiometer (AVHRR) in the framework of the Global Aerosol Climatology Project (GACP). The latest version of the climatology covers the period from July 1983 to September 2001 and is based on an adjusted value of the diffuse component of the ocean reflectance as derived from extensive comparisons with ship sun-photometer data. We use the updated GACP climatology and Stratospheric Aerosol and Gas Experiment (SAGE) data to analyze how stratospheric aerosols from major volcanic eruptions can affect the GACP aerosol product. One possible retrieval strategy based on the AVHRR channel-1 and -2 data alone is to infer both the stratospheric and the tropospheric aerosol optical thickness while assuming fixed microphysical models for both aerosol components. The second approach is to use the SAGE stratospheric aerosol data to constrain the AVHRR retrieval algorithm. We demonstrate that the second approach yields a consistent long-term record of the tropospheric aerosol optical thickness and Angstrom exponent. Preliminary comparisons of the GACP aerosol product with Moderate Resolution Imaging Spectroradiometer (MODIS) and Multi-angle Imaging SpectroRadiometer (MISR) aerosol retrievals show reasonable agreement, the GACP global monthly optical thickness being lower than the MODIS value by approximately 0.03. Larger differences are observed on a regional scale. Comparisons of the GACP and MODIS Angstrom exponent records are less conclusive and require further analysis.
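
    The Angstrom exponent referred to throughout follows from aerosol optical thicknesses at two wavelengths via the textbook relation alpha = -ln(tau1/tau2) / ln(lambda1/lambda2); the sketch below applies it with placeholder optical thicknesses and approximate AVHRR channel-1 and channel-2 wavelengths.

      import numpy as np

      def angstrom_exponent(tau1, tau2, lam1_um, lam2_um):
          """alpha = -ln(tau1/tau2) / ln(lam1/lam2)."""
          return -np.log(tau1 / tau2) / np.log(lam1_um / lam2_um)

      # Placeholder optical thicknesses at ~0.63 and ~0.85 micron (AVHRR-like).
      print(f"alpha = {angstrom_exponent(0.20, 0.15, 0.63, 0.85):.2f}")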

  17. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies will be treated, after which…

  18. In silico target predictions: defining a benchmarking data set and comparison of performance of the multiclass Naïve Bayes and Parzen-Rosenblatt window.

    Science.gov (United States)

    Koutsoukas, Alexios; Lowe, Robert; Kalantarmotamedi, Yasaman; Mussa, Hamse Y; Klaffke, Werner; Mitchell, John B O; Glen, Robert C; Bender, Andreas

    2013-08-26

    In this study, two probabilistic machine-learning algorithms were compared for in silico target prediction of bioactive molecules, namely the well-established Laplacian-modified Naïve Bayes classifier (NB) and the more recently introduced (to Cheminformatics) Parzen-Rosenblatt Window. Both classifiers were trained in conjunction with circular fingerprints on a large data set of bioactive compounds extracted from ChEMBL, covering 894 human protein targets with more than 155,000 ligand-protein pairs. This data set is also provided as a benchmark data set for future target prediction methods due to its size as well as the number of bioactivity classes it contains. In addition to evaluating the methods, different performance measures were explored. This is not as straightforward as in binary classification settings, due to the number of classes, the possibility of multiple class memberships, and the need to translate model scores into "yes/no" predictions for assessing model performance. Both algorithms achieved a recall of correct targets that exceeds 80% in the top 1% of predictions. Performance depends significantly on the underlying diversity and size of a given class of bioactive compounds, with small classes and low structural similarity affecting both algorithms to different degrees. When tested on an external test set extracted from WOMBAT covering more than 500 targets by excluding all compounds with Tanimoto similarity above 0.8 to compounds from the ChEMBL data set, the current methodologies achieved a recall of 63.3% and 66.6% among the top 1% for Naïve Bayes and Parzen-Rosenblatt Window, respectively. While those numbers seem to indicate lower performance, they are also more realistic for settings where protein targets need to be established for novel chemical substances.
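
    A minimal scikit-learn sketch of the two algorithms compared in the study is given below: a Laplace-smoothed Bernoulli Naïve Bayes classifier as a rough stand-in for the Laplacian-modified Naïve Bayes, and a per-class kernel density estimator standing in for the Parzen-Rosenblatt window, both applied to a random binary matrix that stands in for circular fingerprints. Dimensions, bandwidth and class count are arbitrary assumptions, and class priors are ignored in the Parzen scorer for brevity.

      import numpy as np
      from sklearn.naive_bayes import BernoulliNB
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(300, 128))  # toy binary "fingerprint" matrix
      y = rng.integers(0, 3, size=300)         # three toy protein-target classes

      nb = BernoulliNB(alpha=1.0).fit(X, y)    # Laplace (alpha = 1) smoothing

      # Parzen-Rosenblatt window: one kernel density model per target class.
      parzen = {c: KernelDensity(bandwidth=1.0).fit(X[y == c]) for c in np.unique(y)}

      def parzen_predict(x):
          scores = {c: kde.score_samples(x)[0] for c, kde in parzen.items()}
          return max(scores, key=scores.get)

      print(nb.predict(X[:1])[0], parzen_predict(X[:1]))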

  19. Development of site-specific sediment quality guidelines for North and South Atlantic littoral zones: comparison against national and international sediment quality benchmarks.

    Science.gov (United States)

    Choueri, R B; Cesar, A; Abessa, D M S; Torres, R J; Morais, R D; Riba, I; Pereira, C D S; Nascimento, M R L; Mozeto, A A; DelValls, T A

    2009-10-15

    We aimed to develop site-specific sediment quality guidelines (SQGs) for two estuarine and port zones in Southeastern Brazil (Santos Estuarine System and Paranaguá Estuarine System) and three in Southern Spain (Ría of Huelva, Bay of Cádiz, and Bay of Algeciras), and compare these values against national and traditionally used international benchmark values. Site-specific SQGs were derived based on sediment physical-chemical, toxicological, and benthic community data integrated through multivariate analysis. This technique allowed the identification of chemicals of concern and the establishment of effects ranges correlated with the individual concentrations of contaminants for each study site. The results revealed that sediments from the Santos channel, as well as inner portions of the SES, are considered highly polluted (exceeding SQGs-high) by metals, PAHs and PCBs. High pollution by PAHs and some metals was found in the São Vicente channel. In PES, sediments from inner portions (proximities of the Ponta do Félix port terminal and the Port of Paranaguá) are highly polluted by metals and PAHs, including one zone inside the limits of an environmental protection area. In the Gulf of Cádiz, SQG exceedances were found in the Ría of Huelva (all analysed metals and PAHs), in the surroundings of the Port of Cádiz (Bay of Cádiz) (metals), and in the Bay of Algeciras (Ni and PAHs). The site-specific SQGs derived in this study are more restrictive than the national SQGs applied in Brazil and Spain, as well as international guidelines. This finding confirms the importance of developing site-specific SQGs to support the characterisation of sediments and dredged material. The use of the same methodology to derive SQGs in Brazilian and Spanish port zones confirmed the applicability of this technique with an international scope and provided a harmonised methodology for site-specific SQG derivation.
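
    The multivariate step behind such site-specific SQGs can be sketched in a few lines. The sketch below, with invented station data, standardizes a chemistry-plus-toxicity matrix and reads off which contaminants load together with the effects endpoint on the first principal component; it illustrates the general approach, not the authors' exact derivation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        chemicals = ["Cd", "Pb", "Zn", "PAHs", "PCBs"]
        n_stations = 40

        # Invented sediment concentrations; toxicity driven mostly by PAHs and Cd.
        conc = rng.lognormal(mean=1.0, sigma=0.6, size=(n_stations, len(chemicals)))
        toxicity = 0.5 * conc[:, 3] + 0.3 * conc[:, 0] + rng.normal(0, 0.3, n_stations)

        data = np.column_stack([conc, toxicity])
        pca = PCA(n_components=2).fit(StandardScaler().fit_transform(data))
        for name, loading in zip(chemicals + ["toxicity"], pca.components_[0]):
            print(f"{name:9s} PC1 loading: {loading:+.2f}")
        # Chemicals loading with toxicity are the "chemicals of concern"; their
        # concentrations at impacted stations then anchor the effects-range thresholds.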

  20. Benchmarking of Percutaneous Injuries at the Ministry of Health Hospitals of Saudi Arabia in Comparison with the United States Hospitals Participating in Exposure Prevention Information Network (EPINet™)

    Directory of Open Access Journals (Sweden)

    ZA Memish

    2015-01-01

    Full Text Available Background: Exposure to blood-borne pathogens from needle-stick and sharp injuries continues to pose a significant risk to health care workers. These events are of concern because of the risk of transmitting blood-borne diseases such as hepatitis B virus, hepatitis C virus, and the human immunodeficiency virus. Objective: To benchmark different risk factors associated with needle-stick incidents among health care workers in the Ministry of Health hospitals in the Kingdom of Saudi Arabia compared to the US hospitals participating in the Exposure Prevention Information Network (EPINet™). Methods: Prospective surveillance of needle-stick and sharp incidents carried out during the year 2012 using EPINet™ ver 1.5, which provides a uniform needle-stick and sharp injury report form. Results: The annual percutaneous incidents (PIs) rate per 100 occupied beds was 3.2 at the studied MOH hospitals. Nurses were the most affected job category by PIs (59.4%). Most PIs happened in patients' wards in the Ministry of Health hospitals (34.6%). Disposable syringes were the most common cause of PIs (47.2%). Most PIs occurred during use of the syringes (36.4%). Conclusion: Among health care workers, nurses and physicians appear especially at risk of exposure to PIs. Important risk factors of injuries include working in patient rooms, using disposable syringes, and devices without safety features. Preventive strategies such as continuous training of health care workers with special emphasis on nurses and physicians, encouragement of reporting of such incidents, observation of sharp handling and use, and implementation of safety devices are warranted.
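
    The headline surveillance statistic is simple arithmetic; the sketch below shows the calculation with illustrative figures chosen to reproduce the reported 3.2, not the study's raw counts.

        def pi_rate_per_100_beds(n_incidents: int, occupied_beds: float) -> float:
            """Annual percutaneous-injury (PI) rate per 100 occupied beds."""
            return 100.0 * n_incidents / occupied_beds

        # e.g. 160 reported incidents across 5000 occupied beds -> 3.2
        print(pi_rate_per_100_beds(160, 5000))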

  1. Developing scheduling benchmark tests for the Space Network

    Science.gov (United States)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests were developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  2. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
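
    The comparison step of such a benchmark reduces to relative deviations from the reference model. A minimal sketch, with synthetic profiles standing in for the MCNP reference and a participant's prediction:

        import numpy as np

        x = np.linspace(-50, 50, 101)                 # detector position (mm), assumed grid
        mcnp = np.exp(-(x / 30.0) ** 2)               # placeholder reference profile
        model = mcnp * (1 + 0.02 * np.sin(x / 5.0))   # a model with small deviations

        rel_dev = (model - mcnp) / mcnp
        print(f"max |relative deviation|: {np.abs(rel_dev).max():.3%}")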

  3. Preliminary Analysis on the Status and Trend of the Land Benchmark Pricing System in China

    Institute of Scientific and Technical Information of China (English)

    邹如; 伍育鹏; 章文波

    2012-01-01

    The purpose of this paper is to analyze the current situation of, and the problems in, the existing benchmark land price system, and to discuss its future direction. The methods used include documentation analysis, the inverse distance weighted method, and ordinary Co-Kriging interpolation. The results show that the role and development of benchmark land prices in China are determined by the fundamental theory of land prices and by the current Chinese land ownership system; that a reasonable distribution of sample points and an adequate sampling rate are prerequisites for using interpolation methods; that grading land by its price does not fit the actual conditions of the Chinese land market; and that the land price indexes used for updating still need improvement. It is concluded that the role of benchmark land prices in China's land price system is irreplaceable, but that their basic theory, composition, forms of expression and application need to be further studied and refined.
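
    Of the two interpolation methods named, inverse distance weighting is the simpler; a minimal sketch with made-up sample points and prices (it illustrates the method only, not the paper's data):

        import numpy as np

        def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
            """Interpolate a value at xy_query from known points by inverse distance weighting."""
            d = np.linalg.norm(xy_known - xy_query, axis=1)
            w = 1.0 / (d ** power + eps)          # closer samples weigh more
            return np.sum(w * values) / np.sum(w)

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        price = np.array([3200.0, 4100.0, 2900.0, 3600.0])   # hypothetical yuan per m^2
        print(idw(pts, price, np.array([0.4, 0.5])))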

  4. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...
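
    A hedged sketch of the kind of run such a benchmark repeats many times: a row-by-row "sequence MNIST" LSTM classifier stepped once at each of two learning rates. Random tensors stand in for MNIST so the snippet stays self-contained; it mirrors the report's setup only in spirit.

        import torch
        import torch.nn as nn

        class SeqClassifier(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=28, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 10)

            def forward(self, x):                # x: (batch, 28 rows, 28 columns)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])     # classify from the last time step

        x = torch.randn(256, 28, 28)             # stand-in for MNIST images
        y = torch.randint(0, 10, (256,))
        for lr in (1e-3, 1e-2):                  # sweep one hyperparameter
            model = SeqClassifier()
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
            print(f"lr={lr:g}  initial loss={loss.item():.3f}")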

  5. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, S; Followill, D; Ibbott, G [University of Texas M. D. Anderson Cancer Center, Houston, TX (United States); Cui, J; Deasy, J [Washington University, St. Louis, MO (United States)], E-mail: sedavids@mdanderson.org

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
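
    The spectrum parameterization lends itself to a short sketch. Below, SciPy's fatigue-life density stands in for the "Fatigue function" and a logistic term plays the multiplying Fermi cutoff; all parameter values are illustrative assumptions, not commissioned beam data.

        import numpy as np
        from scipy.stats import fatiguelife

        E = np.linspace(0.05, 6.0, 500)      # photon energy grid (MeV), assumed
        shape, scale = 0.8, 1.5              # assumed Fatigue-function parameters
        E_cut, width = 5.5, 0.2              # assumed Fermi cutoff position and width

        dE = E[1] - E[0]
        spectrum = fatiguelife.pdf(E, shape, scale=scale)
        fermi = 1.0 / (1.0 + np.exp((E - E_cut) / width))   # smooth high-energy cutoff
        weights = spectrum * fermi
        weights /= weights.sum() * dE        # normalize to unit area
        print(f"mean energy of sketch spectrum: {(E * weights).sum() * dE:.2f} MeV")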

  6. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    Science.gov (United States)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).

  7. Preliminary comparison of the Essie and PubMed search engines for answering clinical questions using MD on Tap, a PDA-based program for accessing biomedical literature.

    Science.gov (United States)

    Sutton, Victoria R; Hauser, Susan E

    2005-01-01

    MD on Tap, a PDA application that searches and retrieves biomedical literature, is specifically designed for use by mobile healthcare professionals. With the goal of improving the usability of the application, a preliminary comparison was made of two search engines (PubMed and Essie) to determine which provided the most efficient path to the desired clinically relevant information.

  8. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  9. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    Energy Technology Data Exchange (ETDEWEB)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdomand St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
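
    One simple way such a robustness tool can work is to sample rigid setup shifts and track how a coverage metric degrades. The sketch below does this on an idealized synthetic dose grid; the grid, tolerances and metric are assumptions for illustration, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        dose = np.pad(np.ones((20, 20, 20)), 10)                  # unit dose inside the target
        target = np.pad(np.ones((20, 20, 20), dtype=bool), 10,
                        constant_values=False)

        def coverage(d, mask, threshold=0.95):
            """Fraction of target voxels receiving at least `threshold` dose."""
            return (d[mask] >= threshold).mean()

        for _ in range(5):
            shift = rng.normal(0, 2, size=3).round().astype(int)  # ~2-voxel SD setup error
            shifted = np.roll(dose, tuple(shift), axis=(0, 1, 2))
            print(f"shift {shift} voxels -> coverage {coverage(shifted, target):.2f}")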

  10. Monte Carlo comparison of preliminary methods for ordering multiple genetic loci.

    Science.gov (United States)

    Olson, J M; Boehnke, M

    1990-09-01

    We carried out a simulation study to compare the power of eight methods for preliminary ordering of multiple genetic loci. Using linkage groups of six loci and a simple pedigree structure, we considered the effects on method performance of locus informativity, interlocus spacing, total distance along the chromosome, and sample size. Method performance was assessed using the mean rank of the true order, the proportion of replicates in which the true order was the best order, and the number of orders that needed to be considered for subsequent multipoint linkage analysis in order to include the true order with high probability. A new method which maximizes the sum of adjacent two-point maximum lod scores divided by the equivalent number of informative meioses and the previously described method which minimizes the sum of adjacent recombination fraction estimates were found to be the best overall locus-ordering methods for the situations considered, although several other methods also performed well.
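
    The second criterion named above is easy to state in code: enumerate candidate orders and keep the one minimizing the sum of adjacent recombination fractions. A minimal brute-force sketch with an invented pairwise matrix (six loci keep the 720 permutations tractable):

        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        true_pos = np.sort(rng.uniform(0, 1, 6))                 # 6 loci along a chromosome
        theta = np.abs(true_pos[:, None] - true_pos[None, :])    # stand-in for estimated rec. fractions

        def sar(order):
            """Sum of adjacent recombination fractions for a candidate order."""
            return sum(theta[a, b] for a, b in zip(order, order[1:]))

        best = min(itertools.permutations(range(6)), key=sar)
        print("best order:", best, " SAR:", round(sar(best), 3))  # the reversed order ties, as expected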

  11. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per

    2011-01-01

    angles and std were calculated { 52±18 ; 55±23 ; 60±16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76±15 ; 89±28 ; 77±7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10±3 ; 14±8 ; 15±3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87±0.05 ; 0.79±0.21 ; 0.79±0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within ±two std of the spectral angle. TO velocity estimates are within ±three std of the spectral estimates. RITO are within ±two std of the spectral estimates. Preliminary data indicates that the TO and spectral...

  12. PRELIMINARY STUDY TO PRIMARY EDUCATION FACILITIES (A Comparison Study between Indonesia and Developed Countries

    Directory of Open Access Journals (Sweden)

    Lucy Yosita

    2006-01-01

    Full Text Available This is a preliminary study of the condition of primary education facilities in Indonesia, comparing them with theory and with various relevant cases in order to understand the problem more clearly. There are basic differences between primary education facilities in Indonesia and those in developed countries, even though the condition and completeness of education facilities is a main factor in achieving the purpose of the learning process. If building designs, interiors and site plans were more dynamic in form, space, colour and equipment, they would probably stimulate more activity and contribute more to the development of students. Further analysis is still required, for example of student behaviour in the spaces of the learning environment, in more detail and over sufficient time, both indoors and outdoors.

  13. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  14. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  15. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    Full Text Available In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation as well, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced with regard to simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  16. Benchmarking the next generation of homology inference tools

    OpenAIRE

    Saripella, Ganapathi Varma; Sonnhammer, Erik L.L.; Forslund, Kristoffer

    2016-01-01

    Motivation: Over the last decades, vast numbers of sequences were deposited in public databases. Bioinformatics tools allow homology and consequently functional inference for these sequences. New profile-based homology search tools have been introduced, allowing reliable detection of remote homologs, but have not been systematically benchmarked. To provide such a comparison, which can guide bioinformatics workflows, we extend and apply our previously developed benchmark approach to evaluate t...

  17. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators.... The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques.... In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions....

  18. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  19. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  20. Informal Preliminary Report on Comparisons of Prototype SPN-1 Radiometer to PARSL Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Long, Charles N.

    2014-06-17

    The prototype SPN-1 has been taking measurements for several months collocated with our PNNL Atmospheric Remote Sensing Laboratory (PARSL) solar-tracker-mounted instruments at the Pacific Northwest National Laboratory (PNNL) located in Richland, Washington, USA. The PARSL radiometers used in the following comparisons consist of an Eppley Normal Incidence Pyrheliometer (NIP) and a shaded Eppley model 8-48 “Black and White” pyranometer (B&W) to measure the direct and diffuse shortwave irradiance (SW), respectively. These instruments were calibrated in mid-September by comparison to an absolute cavity radiometer directly traceable to the world standard group in Davos, Switzerland. The NIP calibration was determined by direct comparison, while the B&W was calibrated using the shade/unshade technique. All PARSL data prior to mid-September have been reprocessed using the new calibration factors. The PARSL data are logged as 1-minute averages from 1-second samples. Data used in this report span the time period from June 22 through December 1, 2006. All data have been processed through the QCRad code (Long and Shi, 2006), which itself is a more elaborately developed methodology along the lines of that applied by the Baseline Surface Radiation Network (BSRN) Archive (Long and Dutton, 2004), for quality control. The SPN-1 data are the standard total and diffuse SW values obtained from the analog data port of the instrument. The comparisons use only times when both the PARSL and SPN-1 data passed all QC testing. The data were further processed and analyzed by application of the SW Flux Analysis methodology (Long and Ackerman, 2000; Long and Gaustad, 2004; Long et al., 2006) to detect periods of clear skies, calculate continuous estimates of clear-sky SW irradiance and the effect of clouds on the downwelling SW, and estimate fractional sky cover.

  1. Super-Phenix benchmark used for comparison of PNC and CEA calculation methods, and of JENDL-3.2 and CARNAVAL IV nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, S.N. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-02-01

    The study was carried out within the framework of the PNC-CEA collaboration agreement. Data were provided by CEA for an experimental loading of a start-up core in Super-Phenix. These data were used at PNC to produce core flux snapshot calculations. CEA undertook a comparison of the PNC results with the equivalent calculations carried out by CEA, and also with experimental measurements from SPX. The results revealed a systematic radial flux tilt between the calculations and the reactor measurements, with the PNC tilts only ≈30-40% of those from CEA. CEA carried out an analysis of the component causes of the radial tilt. It was concluded that a major cause of radial tilt differences between the PNC and CEA calculations lay in the nuclear datasets used: JENDL-3.2 and CARNAVAL IV. For the final stage of the study, PNC undertook a sensitivity analysis to examine the detailed differences between the two sets of nuclear data. The sensitivity analysis showed that a relatively small number of nuclear data items contributed the bulk of the radial tilt difference between calculations with JENDL-3.2 and with CARNAVAL IV. A direct comparison between JENDL-3.2 and CARNAVAL IV data revealed the following. The Nu values showed little difference. The only large fission cross-section differences were at low energy. Although down-scattering reactions showed some large fractional differences, absolute differences were negligible compared with in-group scattering; for in-group scattering, fractional differences were up to ≈75%, but generally <20%. There were many large differences in capture cross-sections, generally ≈30-200%. (J.P.N.)
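
    The data-comparison step reduces to group-wise fractional differences between the two libraries. A minimal sketch with placeholder values (not actual JENDL-3.2 or CARNAVAL IV numbers):

        import numpy as np

        jendl = np.array([1.20, 0.95, 0.40, 0.08])      # capture cross sections by group (barn), invented
        carnaval = np.array([0.90, 1.30, 0.55, 0.06])   # invented comparison set

        frac_diff = (carnaval - jendl) / jendl
        for g, fd in enumerate(frac_diff, start=1):
            print(f"group {g}: {fd:+.0%}")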

  2. A preliminary diffusional kurtosis imaging study of Parkinson disease: comparison with conventional diffusion tensor imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kamagata, Koji; Kamiya, Kouhei; Suzuki, Michimasa; Hori, Masaaki; Yoshida, Mariko; Aoki, Shigeki [Juntendo University School of Medicine, Department of Radiology, Bunkyo-ku, Tokyo (Japan); Tomiyama, Hiroyuki; Hatano, Taku; Motoi, Yumiko; Hattori, Nobutaka [Juntendo University School of Medicine, Department of Neurology, Tokyo (Japan); Abe, Osamu [Nihon University School of Medicine, Department of Radiology, Tokyo (Japan); Shimoji, Keigo [National Center of Neurology and Psychiatry Hospital, Department of Radiology, Tokyo (Japan)

    2014-03-15

    Diffusional kurtosis imaging (DKI) is a more sensitive technique than conventional diffusion tensor imaging (DTI) for assessing tissue microstructure. In particular, it quantifies the microstructural integrity of white matter, even in the presence of crossing fibers. The aim of this preliminary study was to compare how DKI and DTI show white matter alterations in Parkinson disease (PD). DKI scans were obtained with a 3-T magnetic resonance imager from 12 patients with PD and 10 healthy controls matched by age and sex. Tract-based spatial statistics were used to compare the mean kurtosis (MK), mean diffusivity (MD), and fractional anisotropy (FA) maps of the PD patient group and the control group. In addition, a region-of-interest analysis was performed for the area of the posterior corona radiata and superior longitudinal fasciculus (SLF) fiber crossing. FA values in the frontal white matter were significantly lower in PD patients than in healthy controls. Reductions in MK occurred more extensively throughout the brain: in addition to frontal white matter, MK was lower in the parietal, occipital, and right temporal white matter. The MK value of the area of the posterior corona radiata and SLF fiber crossing was also lower in the PD group. DKI detects changes in the cerebral white matter of PD patients more sensitively than conventional DTI. In addition, DKI is useful for evaluating crossing fibers. By providing a sensitive index of brain pathology in PD, DKI may enable improved monitoring of disease progression. (orig.)

  3. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need for improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has been proposed and pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. One important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
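
    Stripped of the ray tracing, the figure-of-merit calculation is an integral of irradiance against system efficiencies. A Python sketch (the paper's toolchain is MATLAB/ASAP; the hourly series and efficiencies below are placeholders):

        import numpy as np

        rng = np.random.default_rng(4)
        dni = np.clip(rng.normal(500, 300, 8760), 0, None)    # hourly direct irradiance, W/m^2
        optical_eff, cell_eff, aperture_m2 = 0.80, 0.30, 2.0  # assumed system parameters

        yield_kwh = (dni * optical_eff * cell_eff * aperture_m2).sum() / 1000.0
        print(f"annual energy yield: {yield_kwh:.0f} kWh")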

  4. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the pro-posed entropy-based benchmarking methods. Our illustrati...

  5. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, consists of items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  6. CT and endoscopic ultrasound in comparison to endoluminal MRI-Preliminary results in staging gastric carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Heye, Tobias [Department of Diagnostic Radiology, University Hospital, Heidelberg (Germany)], E-mail: tobias.heye@med.uni-heidelberg.de; Kuntz, Christian [Department of Surgery, Hospital Wetzlar-Braunfels (Germany)], E-mail: christian.kuntz@lahn-dill-kliniken.de; Duex, Markus [Department of Diagnostic Radiology, Hospital Nordwest, Frankfurt am Main (Germany)], E-mail: markusduex@aol.de; Encke, Jens [Department of Gastroenterology, University Hospital, Heidelberg (Germany)], E-mail: jens.encke@med.uni-heidelberg.de; Palmowski, Moritz [Department of Diagnostic Radiology, University Hospital, Heidelberg (Germany)], E-mail: moritz.palmowski@med.uni-heidelberg.de; Autschbach, Frank [Institute of Pathology, Ruprecht-Karls University, Heidelberg (Germany)], E-mail: frank.autschbach@med.uni-heidelberg.de; Volke, Frank [Fraunhofer Institute for Biomedical Engineering (IBMT), St. Ingbert (Germany)], E-mail: frank.volke@ibmt.fhg.de; Kauffmann, Guenter Werner [Department of Diagnostic Radiology, University Hospital, Heidelberg (Germany)], E-mail: guenter.kauffmann@med.uni-heidelberg.de; Grenacher, Lars [Department of Diagnostic Radiology, University Hospital, Heidelberg (Germany)], E-mail: lars.grenacher@med.uni-heidelberg.de

    2009-05-15

    Purpose: To prospectively compare diagnostic parameters of a newly developed endoluminal MRI (endo-MRI) concept with endoscopic ultrasound (EUS) and hydro-computed tomography (Hydro-CT) in T-staging of gastric carcinoma on one patient collective. Material and methods: 28 consecutive patients (11 females, 17 males, age range 46-87 years, median 67 years) referred for surgery due to a gastric malignancy were included. Preoperative staging by EUS was performed in 14 cases and by Hydro-CT in 14 cases within a time frame of 2 weeks. Ex vivo endo-MRI examination of gastric specimens was performed directly after gastrectomy within a time interval of 2-3 h. EUS data were acquired from the clinical setting whereas Hydro-CT and endo-MRI data were evaluated in blinded fashion by two experienced radiologists and one surgeon well experienced in EUS on gastric carcinomas. Results: Histopathology resulted in 4 pT1, 17 pT2, 3 pT3 and 2 pT4 carcinomas, with 2 gastric lymphomas which were excluded. Overall accuracy for endo-MRI was 75% for T-staging of the 26 carcinomas. EUS achieved 42.9% accuracy; endo-MRI in this subgroup was accurate in 71.4%. Hydro-CT was correct in 28.6%; accuracy for endo-MRI in this subgroup was 71.4%. Conclusion: The direct comparison of all three modalities on one patient collective shows that endo-MRI is able to achieve adequate staging results in comparison with clinically accepted methods like EUS and Hydro-CT in classifying the extent of tumor invasion into the gastric wall. However, the comparison is limited as we compared in vivo routine clinical data with experimental ex vivo data. Future investigations need to show if the potential of endo-MRI can be transferred into a clinical in vivo setting.

  7. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  8. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    Science.gov (United States)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt

    2011-03-01

    Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a flow angle set manually. With Transverse Oscillation (TO) velocity estimates the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates if these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and std were calculated { 52±18 ; 55±23 ; 60±16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76±15 ; 89±28 ; 77±7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10±3 ; 14±8 ; 15±3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87±0.05 ; 0.79±0.21 ; 0.79±0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within ±two std of the spectral angle. TO velocity estimates are within ±three std of the spectral estimates. RITO are within ±two std of the spectral estimates. Preliminary data indicates that the TO and spectral velocity estimates are equally good. With TO there is no manual angle setting and no flow angle limitation. TO velocity estimation can also automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with the TO velocity estimation.
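
    The resistive index reported for both methods follows directly from the peak-systolic and end-diastolic velocities. A minimal check, seeded with the first volunteer's published estimates:

        def resistive_index(ps_cm_s: float, ed_cm_s: float) -> float:
            """RI = (PS - ED) / PS, dimensionless."""
            return (ps_cm_s - ed_cm_s) / ps_cm_s

        print(round(resistive_index(76, 10), 2))   # TO estimates       -> 0.87
        print(round(resistive_index(77, 18), 2))   # spectral estimates -> 0.77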

  9. Comparison of temperament and character profiles of anesthesiologists and surgeons : a preliminary study.

    Directory of Open Access Journals (Sweden)

    Mitra S

    2003-10-01

    Full Text Available BACKGROUND: Given the high levels of stress in anesthesiologists and also their close working liaison with surgeons, it may be worthwhile to compare the personality profiles of these two groups of professionals. AIM: To compare the personality profiles of surgeons and anesthesiologists, using a well-standardized and validated instrument. SETTINGS AND DESIGN: Survey (cross-sectional) on surgeons and anesthesiologists working in several medical institutes in India. MATERIAL & METHODS: The self-report Temperament and Character Inventory, 125-item version (TCI-125), was mailed out to an incidental sample of surgeons and anesthesiologists working in medical institutes in India. Of the 200 questionnaires sent (100 to anesthesiologists and surgeons each), 93 completed responses were returned (46 anesthesiologists, 47 surgeons; return rate 46.5%). STATISTICAL ANALYSIS: Student's unpaired t test; P<0.05 was considered statistically significant. RESULTS: The mean scores of anesthesiologists vis-a-vis surgeons on the various temperament dimensions were Novelty seeking: 8.6 vs. 9.2; Harm avoidance: 7.3 vs. 8.1; Reward dependence: 8.1 vs. 8.0; and Persistence: 3.0 vs. 3.1, respectively. Similar scores for the character dimensions were Self-directedness: 16.9 vs. 15.9; Cooperativeness: 17.5 vs. 16.5; and Self-transcendence: 7.0 vs. 6.7, respectively. There was no significant difference between the surgeons and anesthesiologists on any of the temperament and character variables of personality chosen for the study. CONCLUSION: Personality measures did not differ significantly between surgeons and anesthesiologists in this preliminary investigation. If replicated on a larger and more representative sample, the findings have clinical relevance to improve the working relationship between these two groups of closely working professionals.

  10. An Analytical Benchmark for the Calculation of Current Distribution in Superconducting Cables

    CERN Document Server

    Bottura, L; Fabbri, M G

    2002-01-01

    The validation of numerical codes for the calculation of current distribution and AC loss in superconducting cables against experimental results is essential, but can be affected by approximations in the electromagnetic model or by uncertainty in the evaluation of the model parameters. A preliminary validation of the codes by means of a comparison with analytical results can therefore be very useful in order to distinguish among different error sources. We provide here a benchmark analytical solution for current distribution that applies to the case of a cable described using a distributed-parameter electrical circuit model. The analytical solution of current distribution is valid for cables made of a generic number of strands, subject to well-defined symmetry and uniformity conditions in the electrical parameters. The closed-form solution for the general case is rather complex to implement, and in this paper we give the analytical solutions for different simplified situations. In particular we examine the ...

  11. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline has been provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  12. Preliminary interlaboratory comparison of the ex vivo dual human placental perfusion system

    DEFF Research Database (Denmark)

    Myllynen, Päivi; Mathiesen, Line; Weimer, Marc

    2010-01-01

    As a part of EU-project ReProTect, a comparison of the dual re-circulating human placental perfusion system was carried out by two independent research groups. The detailed placental transfer data of model compounds [antipyrine, benzo(a)pyrene, PhIP (2-amino-1-methyl-6-phenylimidazo(4,5-b)pyridine) and IQ (2-amino-3-methylimidazo(4,5-f)quinoline)] has been/will be published separately. For this project, a comparative re-analysis was done, by curve fitting the data and calculating two endpoints: AUC(120), defined as the area under the curve between time 0 and time 120 min, and as t(0.5), defined...

  13. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  14. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
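
    The first of those metrics is easy to make concrete. A hedged sketch of the centered root mean square error on synthetic monthly series (the benchmark's actual series and averaging scales are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(5)
        truth = np.cumsum(rng.normal(0, 0.1, 240))        # 20 years of monthly "true" values
        homogenized = truth + rng.normal(0, 0.2, 240)     # a contribution with residual errors

        def centered_rmse(est, ref):
            """RMSE after removing each series' mean, so constant offsets do not count."""
            return np.sqrt(np.mean(((est - est.mean()) - (ref - ref.mean())) ** 2))

        print(f"CRMSE: {centered_rmse(homogenized, truth):.3f}")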

  15. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography

    Energy Technology Data Exchange (ETDEWEB)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham; Thurow, Brian S [Auburn U

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as develop data processing methodologies optimized for the plenoptic measurement.

  16. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  17. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  18. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  19. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth pre

  20. Chaos tool implementation for non-singer and singer voice comparison (preliminary study)

    Science.gov (United States)

    Dajer, ME; Pereira, JC; Maciel, CD

    2007-11-01

    The voice waveform is linked to the stretching, shortening, widening or constricting of the vocal tract. The articulation effects of the singer's vocal tract modify the acoustical characteristics of the voice and differ from those of non-singer voices. In recent decades, Chaos Theory has shown the possibility of exploring the dynamic nature of voice signals from a different point of view. The purpose of this paper is to apply the chaos technique of phase space reconstruction to analyze non-singer and singer voices in order to explore the nonlinear dynamics of the signals, and to correlate them with traditional acoustic parameters. Eight voice samples of the sustained vowel /i/ from non-singers and eight from singers were analyzed with the "ANL" software. The samples were also acoustically analyzed with "Analise de Voz 5.0" in order to extract the acoustic perturbation measures jitter and shimmer, and the coefficient of excess (EX). The results showed different visual patterns for the two groups, correlated with different jitter, shimmer, and coefficient of excess values. We conclude that these results clearly indicate the potential of the phase space reconstruction technique for the analysis and comparison of non-singer and singer voices. They also show a promising tool for voice-training applications.
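
    The core technique named above, phase space reconstruction, is a short computation: stack time-delayed copies of the signal into embedding vectors. A sketch on a synthetic sustained-vowel-like waveform (the embedding dimension and delay are illustrative choices, not the study's settings):

        import numpy as np

        fs = 16000                                         # assumed sampling rate (Hz)
        t = np.arange(0, 0.1, 1 / fs)
        signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)

        def delay_embed(x, dim=3, tau=20):
            """Stack delayed copies of x into phase-space vectors."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        orbit = delay_embed(signal)
        print(orbit.shape)   # (n_points, 3); plotting coordinate pairs shows the visual pattern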

  1. Research on the societal impacts of nanotechnology: a preliminary comparison of USA, Europe and Japan.

    Science.gov (United States)

    Matsuda, Masami; Hunt, Geoffrey

    2009-01-01

    We initiate some comparisons between Japan, Europe and USA on how far there is governmental support for the ethical, legal, social and environmental dimensions of nanotechnology development. It is evident that in the USA and Europe nanotechnology is now firmly embedded in the consideration of ELSI. Yet Japan has not yet adequately recognized the importance of these dimensions. The history of bioethics in Japan is short. In Europe, as early as 2004, a nanotechnology report by the UK's Royal Society referred to the possibility of some nanotubes and fibres having asbestos-like toxicity. The negative history of asbestos in Europe and USA is not yet fully identified as a Japanese problem. Japan is therefore in the process of seeking how best to address societal aspects of nanotechnology. Should the precautionary principle be applied to Japan's nanotechnology initiative as in Europe? Should 5-10% of the government's nanotechnology budget be allocated to ELSI research and measures? We propose that the government and industrial sector in Japan play a much more proactive part in the regional and international growth of research into the wider risk assessment, social, health and environmental context of nanotechnologies, not simply try to borrow lessons from the West at a later date.

  2. The Jefferson Scale of Physician Empathy: preliminary psychometrics and group comparisons in Italian physicians.

    Science.gov (United States)

    Di Lillo, Mariangela; Cicchetti, Americo; Lo Scalzo, Alessandra; Taroni, Francesco; Hojat, Mohammadreza

    2009-09-01

    To examine the psychometrics of the Jefferson Scale of Physician Empathy (JSPE) among a sample of Italian physicians. The JSPE was translated into Italian using back-translation procedures to ensure the accuracy of the translation. The translated JSPE was administered to 778 physicians at three hospitals in Rome, Italy in 2002. Individual empathy scores were calculated, as well as descriptive statistics at the item and scale level. Group comparisons of empathy scores were also made between men and women, physicians practicing in medical or surgical specialties, physicians working in different hospitals, and physicians at various levels of career rank. Results are reported for 289 participants who completed the JSPE. Item-total score correlations were all positive and statistically significant. The prominent component of "perspective taking," which is the most important underlying construct of the scale, emerged in the factor analysis of the JSPE and was similar in both Italian and American samples. However, more factors appeared among Italian physicians, indicating that the underlying construct of empathy may be more complex among Italians. The Cronbach coefficient alpha was .85. None of the group differences observed among physicians classified by gender, hospital of practice, specialty, or level of career rank reached statistical significance. Findings generally provide support for the construct validity and reliability of the Italian version of the JSPE. Further research is needed to determine whether the lack of statistically significant differences in empathy by gender and specialty is related to cultural peculiarities, the translation of the scale, or sampling.

  3. Chaos tool implementation for non-singer and singer voice comparison (preliminary study)

    Energy Technology Data Exchange (ETDEWEB)

    Dajer, Me; Pereira, Jc; Maciel, Cd [Department of Electric Engineering, School of Engineering of Sao Carlos, University of Sao Paulo, Sao Carlos (Brazil); Av. Trabalhador Sao-Carlesnse, 400. CEP 13566-590. Sao Carlos. SP (Brazil)

    2007-11-15

    Voice waveform is linked to the stretching, shortening, widening or constricting of the vocal tract. The articulation effects of the singer's vocal tract modify the voice's acoustical characteristics and distinguish it from non-singer voices. In recent decades, Chaos Theory has shown the possibility of exploring the dynamic nature of voice signals from a different point of view. The purpose of this paper is to apply the chaos technique of phase space reconstruction to analyze non-singer and singer voices in order to explore the signals' nonlinear dynamics, and to correlate them with traditional acoustic parameters. Eight voice samples of sustained vowel /i/ from non-singers and eight from singers were analyzed with 'ANL' software. The samples were also acoustically analyzed with 'Analise de Voz 5.0' in order to extract the acoustic perturbation measures jitter and shimmer, and the coefficient of excess (EX). The results showed different visual patterns for the two groups, correlated with different jitter, shimmer, and coefficient of excess values. We conclude that these results clearly indicate the potential of the phase space reconstruction technique for the analysis and comparison of non-singer and singer voices. They also point to a promising tool for voice training applications.

  4. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  5. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  6. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends...

  7. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...

  8. Comparison of lip prints in two different populations of India: Reflections based on a preliminary examination

    Directory of Open Access Journals (Sweden)

    Anila Koneru

    2013-01-01

    Background: Dental records, fingerprint, and DNA comparisons are probably the most common techniques used for a person's identification, allowing fast and secure identification processes. However, sometimes it is necessary to apply different and less known techniques such as lip prints. The potential of lip prints to determine sex has been well exhibited and documented. However, very few studies have been conducted using lip prints for population identification. Objective: To determine the predominant lip print patterns in males and females in relation to Kerala and Manipuri population and also to compare the lip print patterns between these populations. Materials and Methods: The sample comprised 60 subjects, which included 30 each from Kerala and Manipuri. Lipstick was applied evenly, and the lip print was obtained by dabbing a strip of cellophane. The classification scheme proposed by Tsuchihashi was used to classify the lip print patterns and the data were statistically analyzed using the z-test for proportions. Results: Type 4 and Type 5 lip print patterns were predominant in males, whereas in females it was Type 1 and Type 1′. Type 1 pattern was most common in both the populations, with an incidence of 28.33%. Furthermore, Type 1 pattern was found to be more in Kerala females and Manipuri males when compared to their counterparts. Type 1 was most common in upper right, upper left, and lower left quadrants whereas in lower right quadrant, Type 1′ and Type 4 were predominant in Kerala and Type 5 in Manipuri population. Conclusion: Difference between the lip print patterns in two populations exists, although subtle. However, a larger sample size is necessary to derive concrete conclusions.

  9. Comparison of lip prints in two different populations of India: Reflections based on a preliminary examination

    Science.gov (United States)

    Koneru, Anila; Surekha, R; Nellithady, Ganesh Shreekanth; Vanishree, M; Ramesh, DNSV; Patil, Ramesh S

    2013-01-01

    Background: Dental records, fingerprint, and DNA comparisons are probably the most common techniques used for a person's identification, allowing fast and secure identification processes. However, sometimes it is necessary to apply different and less known techniques such as lip prints. The potential of lip prints to determine sex has been well exhibited and documented. However, very few studies have been conducted using lip prints for population identification. Objective: To determine the predominant lip print patterns in males and females in relation to Kerala and Manipuri population and also to compare the lip print patterns between these populations. Materials and Methods: The sample comprised 60 subjects, which included 30 each from Kerala and Manipuri. Lipstick was applied evenly, and the lip print was obtained by dabbing a strip of cellophane. The classification scheme proposed by Tsuchihashi was used to classify the lip print patterns and the data were statistically analyzed using the z-test for proportions. Results: Type 4 and Type 5 lip print patterns were predominant in males, whereas in females it was Type 1 and Type 1’. Type 1 pattern was most common in both the populations, with an incidence of 28.33%. Furthermore, Type 1 pattern was found to be more in Kerala females and Manipuri males when compared to their counterparts. Type 1 was most common in upper right, upper left, and lower left quadrants whereas in lower right quadrant, Type 1’ and Type 4 were predominant in Kerala and Type 5 in Manipuri population. Conclusion: Difference between the lip print patterns in two populations exists, although subtle. However, a larger sample size is necessary to derive concrete conclusions. PMID:23960409
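
    The statistical test named in both versions of this record, the z-test for proportions, is straightforward to reproduce. The sketch below is a generic two-proportion z-test in Python; the counts are hypothetical, not the study's data.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions, the kind of test
    used to compare pattern frequencies between two populations."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: a pattern seen in 12 of 30 subjects in one group
# versus 5 of 30 in the other.
z, p = two_proportion_z_test(12, 30, 5, 30)
print(f"z = {z:.2f}, p = {p:.3f}")
```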

  10. Comparison of conventional and low dose steroid in the treatment of PFAPA syndrome: preliminary study.

    Science.gov (United States)

    Yazgan, Hamza; Gültekin, Erhan; Yazıcılar, Osman; Sagun, Ömer Faruk; Uzun, Lokman

    2012-11-01

    Steroids have been widely used to relieve symptoms in patients with PFAPA syndrome. This study was designed to show the effectiveness of low-dose steroid therapy in patients diagnosed with PFAPA syndrome. 41 patients (86 febrile attacks) who were diagnosed using the criteria suggested by Thomas et al. were involved in the study. The cases were classified into two groups and the selection of patients in groups was made randomly. Twenty patients received prednisolone at a dose of 2 mg/kg/day (first group: 40 attacks) and 21 patients received a dose of 0.5 mg/kg/day (second group: 46 attacks). The effectiveness of the treatment was determined chiefly by the time needed to reduce the fever and the effect on the duration between two attacks. The patients were re-examined 24 hours after steroid treatment. In the first group, which received the 2 mg/kg/day dose of prednisolone, fever decreased dramatically within 6-8 hours (7.6 ± 0.9 hours). In the second group, which received the 0.5 mg/kg/day dose, fever decreased within 8-12 hours in 19 patients. Two patients whose temperature did not decrease received another dose of prednisolone 24 hours after the first dose, and their fever subsided 12 hours after the second dose (11.3 ± 6.4 hours). A comparison of the rate of fever reduction and the interval between attacks (Group I: 5.11 ± 1.01 weeks; Group II: 5.2 ± 1.13 weeks) between the two groups did not show any statistically significant difference (p=0.104). Low-dose steroid treatment is as effective as the conventional dose in PFAPA syndrome, but a study with a larger group is needed. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded at 256 stations. Because of limitations in the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal...

  12. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
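
    The paper's approach of reducing each test case to "a simple, scalar diagnostic" for code-to-code comparison can be illustrated with a toy example. The snippet below is not MUSIC code; it merely shows one plausible diagnostic of this kind, the volume-averaged kinetic energy of a Taylor-Green velocity field, under assumed uniform-grid, unit-density conditions.

```python
import numpy as np

def mean_kinetic_energy(rho, vel):
    """Volume-averaged kinetic energy 0.5 * rho * |v|^2 on a uniform grid --
    a single number whose time evolution makes comparison of a decaying
    Taylor-Green vortex across codes straightforward."""
    return float(np.mean(0.5 * rho * np.sum(vel ** 2, axis=0)))

# Toy 2-D Taylor-Green initial condition on a 2*pi periodic box.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
vel = np.stack([np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)])
rho = np.ones((n, n))
print(mean_kinetic_energy(rho, vel))  # 0.25 for this normalization
```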

  13. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.
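
    As a rough illustration of the scoring scheme described (binary component checks rolling up into a relative feature presence, and features rolling up into driver-sector scores), consider the following sketch. The feature names and aggregation are invented for illustration; the published WQA's actual feature set and roll-up rules may differ.

```python
# Each feature is a block of binary "on/off" component checks (1 = present).
# Hypothetical features, not the WQA's published forty-five.
features = {
    "navigation_consistency": [1, 1, 0, 1],
    "service_responsiveness": [1, 0, 0, 1],
    "content_currency":       [1, 1, 1, 1],
}

def feature_presence(components):
    """Relative presence of a feature: fraction of its components switched on."""
    return sum(components) / len(components)

# A driver-sector score as the mean presence across its feature blocks.
sector_score = sum(feature_presence(c) for c in features.values()) / len(features)
print(f"driver-sector score: {sector_score:.2f}")  # 0..1, higher is better
```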

  14. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  15. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
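
    The tertile-partition idea is easy to reproduce on any pool of effect sizes: sort the observed |r| values and read off the 33.3rd and 66.7th percentiles as the empirical small/medium/large cut points. A minimal sketch, using simulated correlations rather than the authors' database:

```python
import numpy as np

# Hypothetical pool of absolute correlations harvested from one research domain.
rng = np.random.default_rng(0)
effect_sizes = rng.beta(1.2, 6.0, size=10_000)  # skewed toward small |r|, as found empirically

# Tertile partition: the 33.3rd and 66.7th percentiles split the observed
# effects into empirically "small", "medium" and "large" for that domain.
lower, upper = np.percentile(effect_sizes, [33.3, 66.7])
print(f"small < {lower:.2f} <= medium < {upper:.2f} <= large")
```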

  16. Using postgraduate students' evaluations of research experience to benchmark departments and faculties: issues and challenges.

    Science.gov (United States)

    Ginns, Paul; Marsh, Herbert W; Behnia, Masud; Cheng, Jacqueline H S; Scalas, L Francesca

    2009-09-01

    The introduction of the Australian Research Training Scheme has been a strong reason for assuring the quality of the research higher degree (RHD) experience; if students experience poor supervision, an unsupportive climate, and inadequate infrastructure, prior research suggests RHD students will be less likely to complete their degree, with negative consequences for the student, the university, and society at large. The present study examines the psychometric properties of a survey instrument, the Student Research Experience Questionnaire (SREQ), for measuring the RHD experience of currently enrolled students. The core scales of the SREQ focus on student experiences of Supervision; Infrastructure; Intellectual and Social Climate; and Generic Skills Development. Participants were 2,213 postgraduate research students of a large, research-intensive Australian university. Preliminary factor analyses conducted at the student level supported the a priori four factors that the SREQ was designed to measure. However, multi-level analyses indicated that there was almost no differentiation between faculties or departments nested within faculties, suggesting that the SREQ responses are not appropriate for benchmarking faculties or departments. Consistent with earlier research based on comparisons across universities, the SREQ is shown to be almost completely unreliable in terms of benchmarking faculties or departments within a university.
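
    The multilevel question here, how much of the variance in student ratings lies between faculties rather than within them, is commonly summarized by an intraclass correlation. A minimal one-way ANOVA ICC sketch follows, with hypothetical scores; an ICC near zero is what "almost no differentiation between faculties" looks like numerically.

```python
import numpy as np

def icc_oneway(groups):
    """One-way ANOVA intraclass correlation ICC(1): the share of total
    variance attributable to group membership. Values near zero imply group
    means (e.g. faculty averages) carry almost no reliable signal."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)  # avg group size correction
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Hypothetical supervision scores for three faculties on a 1-5 scale.
faculties = [np.array([3.8, 4.1, 3.9, 4.0]),
             np.array([3.9, 4.0, 4.2]),
             np.array([4.0, 3.7, 4.1, 3.9])]
print(f"ICC(1) ~ {icc_oneway(faculties):.3f}")
```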

  17. Benchmarking result diversification in social image retrieval

    DEFF Research Database (Denmark)

    Ionescu, Bogdan; Popescu, Adrian; Müller, Henning

    2014-01-01

    This article addresses the issue of retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking. 38 runs and their results are described and analyzed in this text. A comparison of the use of expert vs. crowdsourcing annotations shows that crowdsourcing results are slightly different and have higher inter-observer differences, but results are comparable at lower cost. Multimodal approaches have the best results in terms of cluster recall. Manual approaches can lead to high precision but often lower diversity. With this detailed results analysis we offer insights for future work on this matter.

  18. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Steelworks today are in a state of permanent change, marked by ever stronger competitive pressure. Managers must therefore work out how to decrease production costs, how to overcome the competition and how to survive in the world market. Growing attention should be paid to modern managerial methods of market research and comparison with competitors, and benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare selected steelworks and to indicate new directions for their development, with a view to increasing the productivity of steel production.

  19. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.
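
    A toy numerical illustration of the argument (all figures invented): a project can clear the traditional with/without hurdle comfortably while barely clearing a hurdle based on the best feasible alternative, which is the bias the paper is concerned with.

```python
# Hypothetical net-benefit figures (millions) for a proposed water project.
without_project = 10.0    # welfare under the do-nothing baseline
proposed_project = 14.0   # welfare under the project being appraised
second_best = 13.5        # welfare under the best feasible alternative policy

# Traditional with/without test: any gain over doing nothing passes.
print("passes with/without test:", proposed_project > without_project)   # True

# Second-best-based "difference-making" test: the project must also beat the
# best alternative, a higher hurdle that screens out merely-better-than-nothing
# proposals.
print("passes difference-making test:", proposed_project > second_best)  # True, but barely
```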

  20. Preliminary Comparison of Properties between Ni-electroplated Stainless Steel Parts Fabricated with Laser Additive Manufacturing and Conventional Machining

    Science.gov (United States)

    Mäkinen, Mika; Jauhiainen, Eeva; Matilainen, Ville-Pekka; Riihimäki, Jaakko; Ritvanen, Jussi; Piili, Heidi; Salminen, Antti

    Laser additive manufacturing (LAM) is a fabrication technology which enables production of complex parts from metallic materials with mechanical properties comparable to those of conventionally machined parts. These LAM parts are manufactured by melting metallic powder layer by layer with a laser beam. The aim of this study is to assess, in a preliminary way, the possibilities of using electroplating to improve surface properties. Electrodeposited nickel and chromium, as well as electroless (autocatalytic) deposited nickel, were used to enhance properties of laser additive manufactured and machined parts, such as corrosion resistance, friction and wear. All test pieces in this study were manufactured with modified research AM equipment equivalent to the commercial EOS M series. The laser system used for the tests was an IPG 200 W CW fiber laser. The material used for additive manufacturing was a commercial stainless steel powder grade named SS316L. This SS316L is not identical to the AISI 316L grade, but the commercial name is widely used for this kind of powder in additive manufacturing. The material used for fabrication of the comparison test pieces (i.e. conventionally manufactured) was AISI 316L stainless steel bar. Electroplating was done in a matrix cell and electroless plating in a plastic sink. Properties of the plated parts were tested in an acetic acid salt spray corrosion chamber (AASS, SFS-EN ISO 9227 standard). Coating adhesion, friction and wear properties were tested with a pin-on-rod machine. The results show that in these preliminary tests, LAM parts and machined parts exhibit certain differences due to the manufacturing route and surface conditions. These differences affect the adhesion, corrosion, wear and friction behaviour of the electroplated and electroless-plated parts. However, further and more detailed studies are needed to fully understand these phenomena.

  1. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  2. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights and to propose potential avenues for future development and application of the general benchmarking framework. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013.

  3. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  4. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm, while less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external benchmark... By directing attention towards the conditions for the use of the external benchmarks we provide more insights into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations.

  5. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions of their use... By directing attention towards the conditions for the use of the external benchmarks we provide more insights into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations.

  6. The design of a scalable, fixed-time computer benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM{trademark}, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
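
    The fixed-time principle itself is simple to sketch: hold the time budget constant and report the largest problem size a machine completes within it. The following Python loop is a toy illustration of that idea, not SLALOM itself; the workload and doubling schedule are arbitrary choices.

```python
import time

def work(n):
    """Stand-in workload whose cost grows with problem size n
    (here: a naive O(n^2) pairwise distance sum)."""
    total = 0.0
    pts = [(i * 0.5, i * 0.25) for i in range(n)]
    for ax, ay in pts:
        for bx, by in pts:
            total += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return total

def fixed_time_benchmark(budget_seconds=1.0):
    """Fixed-time principle: instead of timing a fixed problem, grow the
    problem size until the budget is exhausted and report the largest size
    solved in time. Faster machines earn a larger n, on a single scale."""
    n, best = 16, 0
    while True:
        t0 = time.perf_counter()
        work(n)
        if time.perf_counter() - t0 > budget_seconds:
            return best
        best = n
        n *= 2  # a real harness would bisect for a tighter answer

print("largest size within budget:", fixed_time_benchmark())
```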

  7. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  8. Towards benchmarking of multivariable controllers in chemical/biochemical industries: Plantwide control for ethylene glycol production

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Bialas, Dawid Jan; Jørgensen, John Bagterp

    2011-01-01

    In this paper we discuss a simple yet realistic benchmark plant for evaluation and comparison of advanced multivariable control for chemical and biochemical processes. The benchmark plant is based on recycle-separator-recycle systems for ethylene glycol production and implemented in Matlab...

  9. Towards benchmarking of multivariable controllers in chemical/biochemical industries: Plantwide control for ethylene glycol production

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Bialas, Dawid Jan; Jørgensen, John Bagterp

    2011-01-01

    In this paper we discuss a simple yet realistic benchmark plant for evaluation and comparison of advanced multivariable control for chemical and biochemical processes. The benchmark plant is based on recycle-separator-recycle systems for ethylene glycol production and implemented in Matlab-Simulink...

  10. Ship Propulsion System as a Benchmark for Fault-Tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    1998-01-01

    Fault-tolerant control is a fairly new area. The paper presents a ship propulsion system as a benchmark that should be useful as a platform for development of new ideas and comparison of methods. The benchmark has two main elements. One is development of efficient FDI algorithms, the other is analysis and implementation...

  11. Benchmarking of 50 nm features in thermal nanoimprint

    DEFF Research Database (Denmark)

    Gourgon, C.; Chaix, N.; Schift, H.;

    2007-01-01

    The objective of this benchmarking is to establish a comparison of several tools and processes used in thermal NIL with Si stamps at the nanoscale among the authors' laboratories. The Si stamps have large arrays of 50 nm dense lines and were imprinted in all these laboratories in a similar to 100...

  12. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  13. Benchmarking survey for recycling.

    Energy Technology Data Exchange (ETDEWEB)

    Marley, Margie Charlotte; Mizner, Jack Harry

    2005-06-01

    This report describes the methodology, analysis and conclusions of a comparison survey of recycling programs at ten Department of Energy sites including Sandia National Laboratories/New Mexico (SNL/NM). The goal of the survey was to compare SNL/NM's recycling performance with that of other federal facilities, and to identify activities and programs that could be implemented at SNL/NM to improve recycling performance.

  14. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  15. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  16. Assessment of Usability Benchmarks: Combining Standardized Scales with Specific Questions

    Directory of Open Access Journals (Sweden)

    Stephanie Bettina Linek

    2011-12-01

    The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying the main usability problems. However, changes to an existing Web site should be evaluated through a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of different versions of a Web site and an orientation toward general standards of usability. The questionnaire is also open to qualitative data. The methodology is illustrated using the digital library services of the ZBW.

  17. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 1, Third comparison with 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    Before disposing of transuranic radioactive wastes in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This volume contains an overview of WIPP performance assessment and a preliminary comparison with the long-term requirements of the Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B).

  18. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
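
    The extraction of a per-gate error from such sequences is typically done by fitting the average sequence fidelity to an exponential decay in the sequence length and comparing the decay constants of the reference and interleaved sequences. Below is a sketch with made-up data: the standard depolarizing-model formulas are used, but the numbers are not those of the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    """Standard RB model: average sequence fidelity vs. sequence length m."""
    return A * p ** m + B

# Hypothetical survival probabilities for random Clifford sequences.
lengths = np.array([2, 4, 8, 16, 32, 64])
fid_ref = np.array([0.92, 0.85, 0.74, 0.58, 0.40, 0.25])  # reference sequences
fid_int = np.array([0.90, 0.80, 0.65, 0.45, 0.26, 0.12])  # extra gate interleaved

(_, _, p_ref), _ = curve_fit(decay, lengths, fid_ref, p0=(0.7, 0.25, 0.95))
(_, _, p_int), _ = curve_fit(decay, lengths, fid_int, p0=(0.7, 0.25, 0.90))

d = 4                                        # Hilbert-space dimension, two qubits
r_clifford = (1 - p_ref) * (d - 1) / d       # error per random Clifford
r_gate = (1 - p_int / p_ref) * (d - 1) / d   # error attributed to interleaved gate
print(f"error/Clifford ~ {r_clifford:.3f}, error/interleaved gate ~ {r_gate:.3f}")
```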

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
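
    Although PHABS itself is not reproduced here, the kind of algorithm such a platform benchmarks is easy to sketch. Below is a minimal average-hash (aHash) with a Hamming-distance comparison; the file paths are placeholders, and real perceptual hashes under test would be more elaborate.

```python
from PIL import Image

def average_hash(path, hash_size=8):
    """Minimal perceptual hash (aHash): shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Pack one bit per pixel: 1 where the pixel is brighter than the mean.
    return sum(1 << i for i, px in enumerate(pixels) if px > mean)

def hamming(h1, h2):
    """Number of differing bits; small distances indicate perceptually similar images."""
    return bin(h1 ^ h2).count("1")

# Usage (placeholder paths): near-duplicates should score well below ~10 bits.
# d = hamming(average_hash("original.png"), average_hash("recompressed.png"))
```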

  1. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart

    2015-11-01

    Closed-loop Neuromorphic Benchmarks. Terrence C. Stewart, Travis DeWolf, Ashley Kleinhans and Chris Eliasmith (University of Waterloo, Canada; Council for Scientific and Industrial Research, South Africa). Submitted to Frontiers in Neuroscience.

  2. Benchmarking analogue models of brittle thrust wedges

    Science.gov (United States)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing

  3. Thermal Performance Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  4. Benchmarking Internet of Things devices

    CSIR Research Space (South Africa)

    Kruger, CP

    2014-07-01

    Presented at the International Conference on Industrial Informatics (INDIN), 27-30 July 2014. C.P. Kruger and G.P. Hancke, Advanced Sensor Networks Research Group, Council for Scientific and Industrial Research, South Africa.

  5. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  6. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other...

  7. Benchmark Lisp And Ada Programs

    Science.gov (United States)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  8. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  9. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    Science.gov (United States)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  10. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...

  11. Numerical simulation of the RAMAC benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, J.E.; Sugihara, M.; Fujiwara, T. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; Nusca, M. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; U.S. Army Research Lab., Ballistics and Weapons Concepts Div., AMSRL-WM-BE, Aberdeen Proving Ground, MD (United States); Wang, X. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; School of Mechanical and Production Engineering, Nanyang Technological Univ. (Singapore); Seiler, F. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; French-German Research Inst. of Saint-Louis, ISL, Saint-Louis (France)

    2000-11-01

    Numerical simulations of the same ram accelerator (ramac) geometry and boundary conditions by different numerical and physical models highlight the variety of possible solutions and the strong effect of the chemical kinetics model on the solution. The benchmark test was defined and announced within the community of ramac researchers. Three laboratories undertook the project. The numerical simulations include Navier-Stokes and Euler simulations with various levels of physical models and equations of state. The non-reactive part of the simulation produced similar steady-state results in the three simulations. The chemically reactive part of the simulation produced widely different outcomes. The original experimental data and experimental conditions are presented. A description of each computer code and the resulting flowfield is included. A comparison between codes and results is presented. The most critical choice for the simulation was the chemical kinetics model. (orig.)

  12. Identifying best practice through benchmarking and outcome measurement.

    Science.gov (United States)

    Lanier, Lynne

    2004-01-01

    Collecting and analyzing various types of data are essential to identifying areas for improvement. Data collection and analysis are routinely performed in hospitals and are even required by some regulatory agencies. Realization of the full benefits that may be achieved through collection and analysis of data should be actively pursued, to prevent a meaningless exercise in paperwork. Internal historical comparison of data may be helpful but does not achieve the ultimate goal of identifying external benchmarks in order to determine best practice. External benchmarks provide a means of comparison with similar facilities, allowing the identification of processes needing improvement. The specialty of ophthalmology presents unique practice situations that are not comparable with other specialties, making it imperative to benchmark against other facilities where quick surgical case times, efficient surgical turnover, low infection rates, and cost containment are essential, standard practice. Important data to benchmark include efficiency data, financial data, and quality or patient outcome data. After identifying facilities that excel in certain aspects of performance, it is necessary to analyze how their procedures help them achieve these favorable results. Careful data collection and analysis lead to improved practice and patient care.

  13. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  14. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  15. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices, for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing...

  16. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  17. Status of benchmark calculations of the neutron characteristics of the cascade molten salt ADS for the nuclear waste incineration

    Energy Technology Data Exchange (ETDEWEB)

    Dudnikov, A.A.; Alekseev, P.N.; Subbotin, S.A.; Vasiliev, A.V.; Abagyan, L.P.; Alexeyev, N.I.; Gomin, E.A.; Ponomarev, L.I.; Kolyaskin, O.E.; Men' shikov, L.I. [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation); Kolesov, V.F.; Ivanin, I.A.; Zavialov, N.V. [Russian Federal Nuclear Center, RFNC-VNIIEF, Nizhnii Novgorod region (Russian Federation)

    2001-07-01

    A facility for the incineration of long-lived minor actinides and some dangerous fission products should be an important feature of future nuclear power (NP). For many reasons, an accelerator-driven liquid-fuel reactor can be considered a promising burner reactor for radioactive waste. The fuel of such a reactor is a fluoride molten salt composition with minor actinides (Np, Cm, Am) and some fission products ({sup 99}Tc, {sup 129}I, etc.). Preliminary analysis shows that the values of keff calculated with different codes and nuclear data differ by up to several percent for such fuel compositions. Reliable critical and subcritical benchmark experiments with molten salt fuel compositions containing significant quantities of minor actinides are absent. One of the main tasks for the numerical study of this problem is the estimation of nuclear data for such fuel compositions and the verification of the different numerical codes used for the calculation of keff, neutron spectra and reaction rates. This is especially important for the resonance region, where experimental data are poor or absent. A calculational benchmark of the cascade subcritical molten salt reactor was developed. For the chosen nuclear fuel composition, a comparison of the results obtained by three different Monte Carlo codes (MCNP4A, MCU, and C95) using three different nuclear data libraries is presented. This report concerns the investigation of the main features of the subcritical molten salt reactor unit, carried out at the beginning of ISTC project 1486. (author)

  18. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
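
    The hybrid step/binary search described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: measure_forwarding_rate is a hypothetical stand-in for a traffic-generator measurement, and the 99% keep-up criterion, step size, and tolerance are arbitrary choices.

      def find_equilibrium_throughput(measure_forwarding_rate,
                                      start_mbps=50.0, step_mbps=50.0,
                                      max_mbps=1000.0, tol_mbps=1.0):
          """Hybrid step/binary search for the highest offered load at which
          the gateway's forwarding rate still tracks the offered load.

          measure_forwarding_rate(offered_mbps) -> forwarded_mbps is assumed
          to be provided by a traffic generator (hypothetical interface)."""
          lo, hi = 0.0, None
          load = start_mbps
          while load <= max_mbps:                   # coarse stepping phase
              if measure_forwarding_rate(load) >= 0.99 * load:
                  lo = load                         # gateway keeps up
              else:
                  hi = load                         # first overloaded point
                  break
              load += step_mbps
          if hi is None:
              return lo                             # never overloaded up to max_mbps
          while hi - lo > tol_mbps:                 # binary refinement phase
              mid = (lo + hi) / 2.0
              if measure_forwarding_rate(mid) >= 0.99 * mid:
                  lo = mid
              else:
                  hi = mid
          return lo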

  19. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  20. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
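
    A minimal sketch of this kind of experiment, assuming scikit-learn is available: a random forest is scored by cross-validation on all descriptors and again after forward selection. The dataset below is synthetic, and nothing here reproduces the paper's actual benchmark suite or its 60% figure.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score

      # synthetic stand-in for a QSAR dataset: 200 compounds, 50 descriptors
      X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                             noise=10.0, random_state=0)

      rf = RandomForestRegressor(n_estimators=50, random_state=0)
      baseline = cross_val_score(rf, X, y, cv=5).mean()

      # forward selection down to 10 of the 50 descriptors
      selector = SequentialFeatureSelector(rf, n_features_to_select=10,
                                           direction="forward", cv=3)
      X_sel = selector.fit_transform(X, y)
      selected = cross_val_score(rf, X_sel, y, cv=5).mean()

      print(f"R^2 all descriptors:      {baseline:.3f}")
      print(f"R^2 selected descriptors: {selected:.3f}")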

  1. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    International audience; This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample-surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  2. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

  3. A comprehensive benchmarking system for evaluating global vegetation models

    Directory of Open Access Journals (Sweden)

    D. I. Kelley

    2012-11-01

    Full Text Available We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model, SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high compared with independent measurements. The two DGVMs show little difference on most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
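
    The comparison of a model's score against a mean-model and a bootstrapped "random" model can be illustrated with a normalised-mean-error style metric. This is a minimal sketch: the NME form and the bootstrap null below are assumptions about the general approach, not the benchmark's exact metric definitions or weightings.

      import numpy as np

      def nme(obs, sim):
          """Normalised mean error: 0 is perfect; 1 matches the mean model."""
          return np.abs(sim - obs).sum() / np.abs(obs - obs.mean()).sum()

      def random_model_score(obs, n_boot=1000, seed=0):
          """Reference NME of a 'random' model built by bootstrap-resampling
          the observations, against which a real model's score is judged."""
          rng = np.random.default_rng(seed)
          scores = [nme(obs, rng.choice(obs, size=obs.size, replace=True))
                    for _ in range(n_boot)]
          return float(np.mean(scores))

      obs = np.array([2.1, 3.4, 1.2, 5.0, 4.3, 2.8])   # e.g. site NPP observations
      sim = np.array([2.4, 3.1, 1.5, 4.2, 4.8, 2.5])   # model output at same sites
      print(nme(obs, sim), random_model_score(obs))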

  4. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  5. Performance Benchmarking of Tsunami-HySEA Model for NTHMP's Inundation Mapping Activities

    Science.gov (United States)

    Macías, Jorge; Castro, Manuel J.; Ortega, Sergio; Escalante, Cipriano; González-Vida, José Manuel

    2017-08-01

    The Tsunami-HySEA model is used to perform some of the numerical benchmark problems proposed and documented in the "Proceedings and results of the 2011 NTHMP Model Benchmarking Workshop". The final aim is to obtain approval for Tsunami-HySEA to be used in projects funded by the National Tsunami Hazard Mitigation Program (NTHMP). Therefore, this work contains the numerical results and comparisons for the five benchmark problems (1, 4, 6, 7, and 9) required for that aim. This set of benchmarks comprises analytical, laboratory, and field data test cases: the analytical solution of a solitary wave runup on a simple beach and its laboratory counterpart; two more laboratory tests, the runup of a solitary wave on a conically shaped island and the runup onto a complex 3D beach (Monai Valley); and, finally, a field data benchmark based on data from the 1993 Hokkaido Nansei-Oki tsunami.
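
    For the first of these benchmarks, the analytical reference for non-breaking solitary-wave runup on a plane beach is commonly taken from Synolakis's runup law; the sketch below evaluates it under that assumption. The 1:19.85 beach with H/d = 0.0185 is the canonical case of this benchmark, but the values here are illustrative and not taken from the Tsunami-HySEA report.

      import math

      def solitary_wave_runup(H_over_d, beach_slope_deg):
          """Synolakis (1987) runup law for a non-breaking solitary wave on a
          plane beach: R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4)."""
          cot_beta = 1.0 / math.tan(math.radians(beach_slope_deg))
          return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25

      # canonical case: H/d = 0.0185 on a 1:19.85 beach
      slope = math.degrees(math.atan(1.0 / 19.85))
      print(f"R/d = {solitary_wave_runup(0.0185, slope):.4f}")   # about 0.086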

  6. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy, in both forward and inverse calculational codes and approaches, for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form, while the third is a cube. The entire system rests on a sufficiently thick lead base to reduce undesired scattering events. The configuration is arranged in such a manner that, as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. x 4 in. x 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. x 16 in. side facing the system. The two sources used in the benchmark are {sup 137}Cs and {sup 235}U.
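
    A rough sanity check on such a layered geometry is the uncollided-flux estimate from exponential attenuation plus the inverse-square law. The sketch below does this for the 662 keV line of 137Cs; the attenuation coefficients are approximate textbook values (assumptions), and build-up and scattering, which the Monte Carlo benchmark is designed to exercise, are deliberately ignored.

      import math

      # approximate linear attenuation coefficients at 662 keV (1/cm); textbook
      # values used only for an order-of-magnitude uncollided-flux estimate
      MU = {"lead": 1.18, "aluminum": 0.20, "plywood": 0.06}

      def uncollided_flux(source_bq, distance_cm, thicknesses_cm):
          """Uncollided photon flux (1/cm^2/s) behind layered slab shielding,
          ignoring build-up and scatter."""
          attenuation = math.exp(-sum(MU[m] * t for m, t in thicknesses_cm.items()))
          return source_bq * attenuation / (4.0 * math.pi * distance_cm ** 2)

      # hypothetical example: 1 MBq Cs-137 at 1 m, behind 1 cm Pb + 2 cm Al + 4 cm wood
      print(uncollided_flux(1.0e6, 100.0,
                            {"lead": 1.0, "aluminum": 2.0, "plywood": 4.0}))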

  10. A physician's due: measuring physician billing performance, benchmarking results.

    Science.gov (United States)

    Woodcock, Elizabeth W; Browne, Robert C; Jenkins, Jennifer L

    2008-07-01

    A 2008 study focused on four key performance indicators (KPIs) and staffing levels to benchmark the FY07 performance of physician group billing operations. A comparison of the change in the KPIs from FY03 to FY07 for a number of these billing operations disclosed across-the-board improvements. Billing operations did not show significant changes in staffing levels during this time, pointing to the existence of obstacles that prevent staff reductions in this area.

  11. Benchmarking energy use and costs in salt-and-dry fish processing and lobster processing

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The Canadian fish processing sector was the focus of this benchmarking analysis, which was conducted jointly by the Canadian Industry Program for Energy Conservation and the Fisheries Council of Canada, who retained Corporate Renaissance Group (CRG) to establish benchmarks for salt-and-dry processing operations in Nova Scotia and lobster processing operations in Prince Edward Island. The analysis was limited to the ongoing operations of the processing plants, and started with the landing of the fish/lobster and ended with freezer/cooler storage of the final products. Fuel used by the fishing fleet and in delivery trucks was not included in this study. The initial phase of each study involved interviews with management personnel at a number of plants in order to lay out process flow diagrams which were used to identify the series of stages of production for which energy consumption could be separately analyzed. Detailed information on annual plant production and total plant energy consumption and costs for the year by fuel type were collected, as well as inventories of energy-consuming machinery and equipment. At the completion of the data collection process, CRG prepared a summary of energy use, production data, assumptions and a preliminary analysis of each plant's energy use profile. Energy consumption and costs per short ton were calculated for each stage of production. Information derived from the calculations includes revised estimates of energy consumption by stage of production; energy costs per ton of fish; total energy consumption and costs associated with production of a standard product; and a detailed inter-plant comparison of energy consumption and costs per ton among the participating plants. Details of greenhouse gas (GHG) emissions and potential energy savings were also presented. 7 tabs., 3 figs.

  12. Benchmarking the next generation of homology inference tools.

    Science.gov (United States)

    Saripella, Ganapathi Varma; Sonnhammer, Erik L L; Forslund, Kristoffer

    2016-09-01

    Over the last decades, vast numbers of sequences were deposited in public databases. Bioinformatics tools allow homology and consequently functional inference for these sequences. New profile-based homology search tools have been introduced, allowing reliable detection of remote homologs, but have not been systematically benchmarked. To provide such a comparison, which can guide bioinformatics workflows, we extend and apply our previously developed benchmark approach to evaluate the 'next generation' of profile-based approaches, including CS-BLAST, HHSEARCH and PHMMER, in comparison with the non-profile-based search tools NCBI-BLAST, USEARCH, UBLAST and FASTA. We generated challenging benchmark datasets based on protein domain architectures within either the PFAM + Clan, SCOP/Superfamily or CATH/Gene3D domain definition schemes. From each dataset, homologous and non-homologous protein pairs were aligned using each tool, and standard performance metrics calculated. We further measured congruence of domain architecture assignments in the three domain databases. CS-BLAST and PHMMER had the overall highest accuracy. FASTA, UBLAST and USEARCH showed large trade-offs of accuracy for speed optimization. Profile methods are superior at inferring remote homologs, but the difference in accuracy between methods is relatively small. PHMMER and CS-BLAST stand out with the highest accuracy, yet still at a reasonable computational cost. Additionally, we show that less than 0.1% of Swiss-Prot protein pairs considered homologous by one database are considered non-homologous by another, implying that these classifications represent equivalent underlying biological phenomena, differing mostly in coverage and granularity. Benchmark datasets and all scripts are available at http://sonnhammer.org/download/Homology_benchmark. Contact: forslund@embl.de. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. Preliminary environmental assessment of selected geopressured - geothermal prospect areas: Louisiana Gulf coast region. Volume I. Comparison of prospect areas on the basis of potential environmental impacts

    Energy Technology Data Exchange (ETDEWEB)

    Newchurch, E.J.; Bachman, A.L.; Bryan, C.F.; Harrison, D.P.; Muller, R.A.; Newman, J.P. Jr.; Smith, C.G. Jr.; Bailey, J.I. Jr.; Kelly, G.G.; Reibert, K.C.

    1978-10-15

    The results of a preliminary environmental assessment of the following geopressured-geothermal prospect areas in the Louisiana Gulf coast region are presented: South Johnson's Bayou, Sweet Lake, Rockefeller Refuge, Southeast Pecan Island, Atchafalaya Bay, and Lafourche Crossing. These prospect areas have been compared to determine their relative environmental acceptability for the test program. Trade-offs among the prospects in terms of potential impacts are highlighted. This assessment was made on the basis of the nature and extent of the proposed testing activities in view of the environmental characteristics of each prospect area: land use, geology and geohydrology, air quality, water resources and quality, ecological systems, and natural hazards. The comparison of prospect areas includes consideration of worst case situations. However, we believe that the test program activities, because they are so small in scale, will not result in major adverse impacts.

  14. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  15. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  16. Comparison of automated and manual determination of HER2 status in breast cancer for diagnostic use: a comparative methodological study using the Ventana BenchMark automated staining system and manual tests.

    Science.gov (United States)

    Bánkfalvi, Agnes; Boecker, Werner; Reiner, Angelika

    2004-10-01

    This study was performed to test the validity of manual and automated HER2 tests in one hundred routinely formalin-fixed and paraffin-embedded diagnostic breast carcinoma tissues. Immunohistochemical (IHC) and fluorescence in situ hybridization (FISH) assays for HER2 were separately carried out in two institutes of pathology specialised in diagnostics of breast diseases. Manual immunostaining was performed by the Dako-HercepTest. Automated IHC and FISH were carried out in the Ventana BenchMark platform by using the Pathway-CB11 antibody and the INFORM(R) HER2 probe, respectively. Positivity rates varied between HercepTest (26%), automated CB11 IHC (23%) and automated FISH (22%). Overall concordance between positive (2+, 3+) and negative (0; 1+) results of manual and automated IHC was 97%, between automated FISH and IHC 92%, and between automated FISH and HercepTest 89%. The frequency of 2+ IHC scores was 13% using the BenchMark and 14% with the HercepTest; 6/12 and 8/14 of the respective cases were not amplified by FISH. Automated FISH was not interpretable in 11 of 100 specimens. In the 89 informative cases, automated IHC resulted in increased specificity (92% vs. 88%), increased positive predictive value (73% vs. 64%) and increased efficiency (92% vs. 89%). We conclude that automation improves the accuracy of HER2 detection in diagnostic breast carcinoma tissues and provides a new approach for the global standardization of clinical HER2 tests.
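
    The agreement figures quoted above follow from standard 2x2 diagnostic metrics computed against FISH as the reference. A minimal sketch; the counts below are hypothetical, chosen only to be consistent with the reported 89 informative cases (the study's raw cell counts are not given in the abstract).

      def diagnostic_metrics(tp, fp, fn, tn):
          """Standard 2x2 agreement metrics for an IHC assay scored
          against FISH as the reference standard."""
          total = tp + fp + fn + tn
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv":         tp / (tp + fp),       # positive predictive value
              "efficiency":  (tp + tn) / total,    # overall accuracy
          }

      # hypothetical counts for 89 informative cases (not the study's raw data)
      print(diagnostic_metrics(tp=16, fp=6, fn=2, tn=65))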

  17. Verification of the code DYN3D/R with the help of international benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with square fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k{sub eff} and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, a good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)

  18. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    Full Text Available The aim of the article. The consideration of theoretical and practical aspects of benchmarking at universities is the main task of this research. First, the researchers identify the essence of benchmarking: it involves comparing the characteristics of a college or university with those of leading competitors in the industry and copying proven designs. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. Benchmarking therefore involves self-evaluation, including systematic collection of data and information, with the view to making relevant comparisons of strengths and weaknesses of performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvements, monitors progress, reviews benefits and assures adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: (1) identification of the study object; (2) identification of businesses for comparison; (3) selection of data collection methods; (4) determination of variations in terms of efficiency and of the levels of future results; (5) communication of the results of benchmarking; (6) development of an implementation plan, initiating the implementation, monitoring implementation; (7) definition of new benchmarks. The researchers give the results of practical use of the benchmarking algorithm at universities. In particular, monitoring and SWOT analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the potential for benchmarking of universities were: (1) the presence of new teaching methods at universities; (2) the involvement of foreign lecturers and partners of other universities for cooperation; (3) promotion of education services for target groups; (4) violation of...

  19. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Full Text Available Political, socio-economic and cultural changes that have taken place in the world during recent years have influenced all spheres. Constant improvements are necessary to survive in competitive and shrinking markets. This sets high quality standards for the service industries, so it is important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking, and as a result do not know how they rank against their peers in terms of quality, nor do they see benefits in sharing information and in benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking, and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case study and other tools are used to define the readiness of a company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value – this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  20. Benchmarking NMR experiments: a relational database of protein pulse sequences.

    Science.gov (United States)

    Senthamarai, Russell R P; Kuprov, Ilya; Pervushin, Konstantin

    2010-03-01

    Systematic benchmarking of multi-dimensional protein NMR experiments is a critical prerequisite for optimal allocation of NMR resources for structural analysis of challenging proteins, e.g. large proteins with limited solubility or proteins prone to aggregation. We propose a set of benchmarking parameters for essential protein NMR experiments organized into a lightweight (single XML file) relational database (RDB), which includes all the necessary auxiliaries (waveforms, decoupling sequences, calibration tables, setup algorithms and an RDB management system). The database is interfaced to the Spinach library (http://spindynamics.org), which enables accurate simulation and benchmarking of NMR experiments on large spin systems. A key feature is the ability to use a single user-specified spin system to simulate the majority of deposited solution-state NMR experiments, thus providing the (hitherto unavailable) unified framework for pulse sequence evaluation. This development enables predicting the relative sensitivity of deposited implementations of NMR experiments, thus providing a basis for comparison, optimization and, eventually, automation of NMR analysis. The benchmarking is demonstrated with two proteins: the 170-amino-acid I domain of alphaXbeta2 integrin and the 440-amino-acid NS3 helicase.

  1. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Full Text Available Abstract Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
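
    The abstract does not fix the evaluation measures, so as an assumption the sketch below uses two standard overlap scores, Dice and Jaccard, of the kind such ground-truth data supports.

      import numpy as np

      def dice(seg, truth):
          """Dice coefficient between a binary segmentation and ground truth."""
          seg, truth = seg.astype(bool), truth.astype(bool)
          inter = np.logical_and(seg, truth).sum()
          return 2.0 * inter / (seg.sum() + truth.sum())

      def jaccard(seg, truth):
          """Jaccard index (intersection over union)."""
          seg, truth = seg.astype(bool), truth.astype(bool)
          inter = np.logical_and(seg, truth).sum()
          return inter / np.logical_or(seg, truth).sum()

      # toy 2-D masks standing in for a cell segmentation and its ground truth
      truth = np.zeros((64, 64), dtype=bool); truth[16:48, 16:48] = True
      seg   = np.zeros((64, 64), dtype=bool); seg[20:52, 20:52]   = True
      print(f"Dice={dice(seg, truth):.3f}  Jaccard={jaccard(seg, truth):.3f}")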

  2. Benchmarking in the Academic Departments using Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad M. Rayeni

    2010-01-01

    Full Text Available Problem statement: The purpose of this study is to analyze efficiency and benchmarking using Data Envelopment Analysis (DEA) in the departments of a university. Benchmarking is a process of defining valid measures of performance comparison among peer decision-making units (DMUs), using them to determine the relative positions of the peer DMUs and, ultimately, establishing a standard of excellence. Approach: DEA can be regarded as a benchmarking tool, because the frontier identified can be regarded as an empirical standard of excellence. Once the frontier is established, one may compare a set of DMUs to the frontier. Results: We apply benchmarking to detect the shortcomings of inefficient departments, so that they can become efficient and learn better managerial practice. Conclusion: The results indicated that 9 of 21 departments are inefficient. The average inefficiency is 0.8516. The inefficient departments do not have an excess in the number of teaching staff, but all of them have an excess in the number of registered students. The shortage of performed research works is the most important output indicator in the inefficient departments, and must be corrected.
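
    A minimal sketch of the envelopment model behind such an analysis, solved per department with scipy.optimize.linprog. The abstract does not state which DEA variant was used, so the input-oriented CCR form and the department data below are assumptions for illustration.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_efficiency(X, Y):
          """Input-oriented CCR efficiency for each DMU.
          X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
          n, m = X.shape
          s = Y.shape[1]
          scores = []
          for o in range(n):
              # decision variables: [theta, lambda_1 .. lambda_n]
              c = np.zeros(n + 1); c[0] = 1.0        # minimise theta
              A_ub, b_ub = [], []
              for i in range(m):                     # sum(lam*x_i) <= theta*x_io
                  A_ub.append(np.r_[-X[o, i], X[:, i]])
                  b_ub.append(0.0)
              for r in range(s):                     # sum(lam*y_r) >= y_ro
                  A_ub.append(np.r_[0.0, -Y[:, r]])
                  b_ub.append(-Y[o, r])
              res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
              scores.append(res.x[0])
          return np.array(scores)

      # hypothetical departments: input = [teaching staff], outputs = [graduates, papers]
      X = np.array([[12.0], [15.0], [10.0]])
      Y = np.array([[140.0, 30.0], [150.0, 25.0], [130.0, 35.0]])
      print(dea_ccr_efficiency(X, Y))   # 1.0 marks efficient departments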

  3. Comparison of force, power, and striking efficiency for a Kung Fu strike performed by novice and experienced practitioners: preliminary analysis.

    Science.gov (United States)

    Neto, Osmar Pinto; Magini, Marcio; Saba, Marcelo M F; Pacheco, Marcos Tadeu Tavares

    2008-02-01

    This paper presents a comparison of force, power, and efficiency values calculated from Kung Fu Yau-Man palm strikes performed by 7 experienced and 6 novice men. They performed 5 palm strikes to a freestanding basketball, recorded by a high-speed camera at 1000 Hz. Nonparametric comparisons and correlations showed that experienced practitioners presented larger values of mean muscle force, mean impact force, mean muscle power, mean impact power, and mean striking efficiency, consistent with evidence obtained for other martial arts. An interesting result was that for experienced Kung Fu practitioners, muscle power was linearly correlated with impact power (r = .98), but not for the novice practitioners (r = .46).
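
    One plausible way such force and power values are derived from 1000 Hz position tracking is finite-difference kinematics plus Newton's second law; a sketch under that assumption only. The ball mass and the synthetic trajectory below are illustrative, not the study's data or method.

      import numpy as np

      M_BALL = 0.62   # approximate basketball mass, kg (assumption)
      FS = 1000.0     # camera frame rate, Hz (from the abstract)

      def peak_force_and_power(x):
          """Peak force (N) and power (W) on the ball from its tracked position
          x (metres, one sample per frame), via finite differences."""
          v = np.gradient(x) * FS          # velocity, m/s
          a = np.gradient(v) * FS          # acceleration, m/s^2
          force = M_BALL * a               # Newton's second law
          return force.max(), (force * v).max()

      # synthetic 30 ms contact with constant 50 m/s^2 acceleration (illustrative)
      t = np.arange(0.0, 0.03, 1.0 / FS)
      x = 0.5 * 50.0 * t ** 2
      print(peak_force_and_power(x))       # about 31 N and 46 W on this toy input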

  4. The IEA Annex 20 Two-Dimensional Benchmark Test for CFD Predictions

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Rong, Li; Cortes, Ines Olmedo

    2010-01-01

    This paper describes a benchmark test which can be used for tests of CFD predictions of room air distribution. The benchmark describes a box-like room with a supply slot along the side wall. Laser-Doppler measurements and hot-wire measurements are given for comparison with the obtained CFD... in a supply opening, study of local emission and study of airborne chemical reactions. Therefore, the web page is also a collection of information which describes the importance of the different elements of a CFD procedure. The benchmark was originally developed for tests of two-dimensional flow, but the paper...

  5. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
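
    The PageRank kernel itself is mathematically compact; below is a generic power-iteration sketch for a dense adjacency matrix, with dangling nodes redistributed uniformly. This is an illustration of the kernel, not the benchmark's reference implementation (which the paper frames in GraphBLAS terms).

      import numpy as np

      def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
          """Power iteration for PageRank on a dense adjacency matrix.
          adj[i, j] = 1 if page i links to page j."""
          n = adj.shape[0]
          out = adj.sum(axis=1, keepdims=True)
          # column-stochastic transition matrix; dangling nodes jump uniformly
          M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n).T
          r = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              r_next = damping * M @ r + (1.0 - damping) / n
              if np.abs(r_next - r).sum() < tol:
                  break
              r = r_next
          return r

      adj = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)
      print(pagerank(adj))   # scores sum to 1 across the three pages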

  6. The Preliminary Study on the Benchmark Price of Collective Land in Villages: Take LiangYu Village in Anhui Province as an Example

    Institute of Scientific and Technical Information of China (English)

    揣小伟; 黄贤金; 许益林

    2012-01-01

    Taking LiangYu village in Anhui Province as an example, the connotation of land price, the valuation methods, and the factors influencing the price of rural collective construction land were studied. Seventeen factors in six aspects that most closely affect land price were selected; using the analytic hierarchy process (AHP), composite grading scores fell between 0.35 and 0.65, four land grades were delineated, and most evaluation units belonged to the second grade. Considering the characteristics of collective construction land, the income capitalization method, the cost approximation method, and the market comparison method were used to calculate sample land prices, and their averages were taken to determine benchmark prices for commercial, residential, and industrial land respectively. The results showed that, for commercial and residential land, the price difference between the first and second grades was the largest, with differences decreasing between each successive pair of lower grades, indicating that the better the land quality, the more sensitively the price responds. The range of price variation within a grade was largest for commercial land, then residential land, then industrial land, indicating that, as with urban construction land, the sensitivity of collective construction land prices to land quality follows the order commercial > residential > industrial...

  7. Ship Propulsion System as a Benchmark for Fault-Tolerant Control

    OpenAIRE

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    1998-01-01

    Fault-tolerant control combines fault detection and isolation techniques with supervisory control to achieve autonomous accommodation of faults before they develop into failures. While fault detection and isolation (FDI) methods have matured during the past decade the extension to fault-tolerant control is a fairly new area. The paper presents a ship propulsion system as a benchmark that should be useful as a platform for development of new ideas and comparison of methods. The benchmark has t...

  8. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information (1) We did see potential solutions to some of our "top 10" issues (2) We have an assessment of where NASA stands with relation to other aerospace/defense groups We formed new contacts and potential collaborations (1) Several organizations sent us examples of their templates, processes (2) Many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/ partners (1) Desires to participate in our training; provide feedback on procedures (2) Welcomed opportunity to provide feedback on working with NASA

  9. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  10. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    Science.gov (United States)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  11. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  12. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Directory of Open Access Journals (Sweden)

    Hiroo Hayashi

    2009-01-01

    Full Text Available The GPS radio occultation (RO) soundings by the FORMOSAT-3/COSMIC (Taiwan's Formosa Satellite Mission #3/Constellation Observing System for Meteorology, Ionosphere and Climate) satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde) observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. With the use of COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are performed for some tropical regions (Malaysia and Western Pacific islands), where moisture-rich air is expected in the lower troposphere, and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity profiles throughout the troposphere (1 - 1.5% difference at most), with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km), and a very small standard deviation (about 0.5% or less) for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches as much as 3.5 - 5% at 3 km, is seen in the tropics, while a much smaller standard deviation (1 - 2% at most) is evident throughout the polar troposphere.
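
    The quoted statistics are means and standard deviations of fractional refractivity differences over collocated pairs. A minimal sketch, assuming profiles already interpolated to common altitude levels; the 2-hour/300-km collocation criteria are applied upstream of this step, and the numbers below are hypothetical.

      import numpy as np

      def fractional_difference_profile(n_ro, n_sonde):
          """Mean and standard deviation (percent) of fractional refractivity
          differences between collocated RO and sonde profiles.

          n_ro, n_sonde: arrays of shape (n_pairs, n_levels), assumed already
          interpolated to common altitude levels."""
          frac = 100.0 * (n_ro - n_sonde) / n_sonde
          return np.nanmean(frac, axis=0), np.nanstd(frac, axis=0)

      # two hypothetical collocated pairs on three common levels (N-units)
      n_ro    = np.array([[310.0, 150.2, 60.5], [305.1, 149.0, 61.0]])
      n_sonde = np.array([[308.0, 149.5, 60.0], [306.0, 149.8, 60.4]])
      mean, std = fractional_difference_profile(n_ro, n_sonde)
      print(mean, std)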

  13. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and the accompanying research on benchmarking in health care. The accompanying research aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of the structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation should be promoted in voluntary benchmarking projects that have been demonstrated to follow all the typical steps of a benchmarking process.

  14. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assists companies achieve better performance in terms of quality, cost, delivery, supply chain and eventually increase their competitiveness in the market. The study begins with literature review on benchmarking definition, barriers and advantages from the implementation and the study of benchmarking framework. Approach: Thirty res...

  15. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    continuously in a benchmarking process. This chapter broadly examines to what extent the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 takes up the audit area. The final section of the chapter sums up the considerations on benchmarking in relation to both areas.

  16. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article sharpens the focus on the concept of benchmarking by presenting and discussing its different facets. Four different uses of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before getting started. The difference between results benchmarking and process benchmarking is treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  17. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking laboratory operations is well established for comparing organizational performance to that of other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  18. Benchmarking Implementations of Functional Languages with "Pseudoknot", a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  19. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  1. Benchmarking Implementations of Functional Languages with "Pseudoknot", a float-intensive benchmark

    NARCIS (Netherlands)

    Hartel, Pieter H.; Feeley, M.; Alt, M.; Augustsson, L.

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  2. Multifractal detrended fluctuation analysis of human EEG: preliminary investigation and comparison with the wavelet transform modulus maxima technique.

    Directory of Open Access Journals (Sweden)

    Todd Zorick

    Full Text Available Recently, many lines of investigation in neuroscience and statistical physics have converged to raise the hypothesis that the underlying pattern of neuronal activation which results in electroencephalography (EEG) signals is nonlinear, with self-affine dynamics, while scalp-recorded EEG signals themselves are nonstationary. Therefore, traditional methods of EEG analysis may miss many properties inherent in such signals. Similarly, fractal analysis of EEG signals has shown scaling behaviors that may not be consistent with pure monofractal processes. In this study, we hypothesized that scalp-recorded human EEG signals may be better modeled as an underlying multifractal process. We utilized the Physionet online database, a publicly available database of human EEG signals, as a standardized reference database for this study. Herein, we report the use of multifractal detrended fluctuation analysis on human EEG signals derived from waking and different sleep stages, and show evidence that supports the use of multifractal methods. Next, we compare multifractal detrended fluctuation analysis to a previously published multifractal technique, wavelet transform modulus maxima, using EEG signals from waking and sleep, and demonstrate that multifractal detrended fluctuation analysis has lower indices of variability. Finally, we report a preliminary investigation into the use of multifractal detrended fluctuation analysis as a pattern classification technique on human EEG signals from waking and different sleep stages, and demonstrate its potential utility for automatic classification of different states of consciousness. Therefore, multifractal detrended fluctuation analysis may be a useful pattern classification technique to distinguish among different states of brain function.
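    For readers unfamiliar with the method, a compact sketch of multifractal detrended fluctuation analysis in the usual Kantelhardt-style formulation follows; the test signal and parameters are illustrative, not the paper's EEG pipeline:

```python
# Minimal MFDFA sketch: profile, segment-wise polynomial detrending, and
# q-th order fluctuation functions F_q(s). The q = 0 case is omitted.
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Return fluctuation functions F_q(s), rows indexed by q."""
    profile = np.cumsum(x - np.mean(x))               # step 1: the profile Y(i)
    fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = []
        for v in range(n_seg):                        # steps 2-3: detrend segments
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))    # residual variance
        f2 = np.asarray(f2)
        for i, q in enumerate(q_values):              # step 4: q-th order average
            fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return fq

# h(q) is the slope of log F_q(s) vs log s; multifractality shows up as
# h varying with q. White noise should give h ~ 0.5 for all q.
x = np.random.default_rng(1).standard_normal(2 ** 14)
scales = np.unique(np.logspace(4, 10, 12, base=2).astype(int))
q_values = [-4, -2, 2, 4]
fq = mfdfa(x, scales, q_values)
for q, row in zip(q_values, fq):
    h = np.polyfit(np.log(scales), np.log(row), 1)[0]
    print(f"h({q:+d}) = {h:.2f}")
```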

  3. ACAMPROSATE AND BACLOFEN WERE NOT EFFECTIVE IN THE TREATMENT OF PATHOLOGICAL GAMBLING: PRELIMINARY BLIND RATER COMPARISON STUDY

    Directory of Open Access Journals (Sweden)

    Pinhas N Dannon

    2011-06-01

    Full Text Available Objectives: Pathological gambling (PG) is a highly prevalent and disabling impulse control disorder. A range of psychopharmacological options is available for the treatment of PG, including selective serotonin reuptake inhibitors (SSRIs), opioid receptor antagonists, anti-addiction drugs and mood stabilizers. In our preliminary study, we examined the efficacy of two anti-addiction drugs, Baclofen and Acamprosate, in the treatment of PG. Materials & Methods: 17 male gamblers were randomly divided into two groups. Each group received one of the two drugs, without blinding to treatment. All patients underwent a comprehensive psychiatric diagnostic evaluation and completed a series of semi-structured interviews. During the six months of the study, monthly evaluations were carried out to assess improvement and relapses. Relapse was defined as recurrent gambling behavior. Results: None of the 17 patients achieved six months of abstinence. One patient receiving Baclofen sustained abstinence for 4 months. 14 patients succeeded in sustaining abstinence for 1-3 months. 2 patients stopped attending monthly evaluations. Conclusion: Baclofen and Acamprosate did not prove effective in treating pathological gamblers.

  4. Benchmarking of neutron production of heavy-ion transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6172 (United States); Ronningen, R. M. [Michigan State Univ., National Superconductiong Cyclotron Laboratory, East Lansing, MI 48824-1321 (United States); Heilbronn, L. [Univ. of Tennessee, 1004 Estabrook Rd., Knoxville, TN 37996-2300 (United States)

    2011-07-01

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  5. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has truly been performed on this topic. Several investigations still need to be undertaken concerning algorithms' ability for automatic co-registration, accurate point cloud generation and feature extraction from multiplatform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.
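    The check-point evaluation mentioned here reduces to residuals and RMSE between estimated and surveyed 3-D coordinates; a tiny sketch of that comparison (placeholder coordinates, not benchmark data):

```python
# Check-point residuals and RMSE between photogrammetric estimates and
# surveyed reference coordinates. All coordinates are illustrative.
import numpy as np

cp_truth = np.array([[500012.31, 4649001.22, 312.45],
                     [500118.90, 4649110.07, 315.10],
                     [500230.55, 4648995.73, 310.88]])   # GNSS/TLS reference
cp_est = cp_truth + np.array([[0.012, -0.008, 0.020],
                              [-0.015, 0.011, -0.030],
                              [0.009, 0.004, 0.025]])    # estimated coordinates

res = cp_est - cp_truth
rmse = np.sqrt(np.mean(res ** 2, axis=0))                # per-axis RMSE
print(f"RMSE X/Y/Z [m]: {rmse[0]:.3f} {rmse[1]:.3f} {rmse[2]:.3f}")
print(f"3-D RMSE  [m]: {np.sqrt(np.mean(np.sum(res ** 2, axis=1))):.3f}")
```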

  6. A preliminary comparison of Na lidar and meteor radar zonal winds during quiet and sub-storm conditions

    Science.gov (United States)

    Grandhi, Kishore Kumar; Nesse Tyssøy, Hilde; Williams, Bifford P.; Stober, Gunter

    2017-04-01

    It is speculated that sufficiently large electric fields during geomagnetically disturbed conditions may decouple the meteor trail electron motions from the background neutral winds and lead to erroneous neutral wind estimation. To our knowledge, the resulting potential errors have never been reported. In the present case study, we use co-located meteor radar and sodium resonance lidar zonal wind measurements over Andenes (69.27°N, 16.04°E) during intense substorms in the declining phase of the January 2005 solar proton event (21-22 Jan 2005). In total, 14 hours of continuous measurements are available for the comparison, covering both quiet and disturbed conditions. For comparison, the lidar zonal winds are averaged into the meteor radar time and height bins. High cross correlations (~0.8) are found in all height regions. The discrepancies can be explained in light of the differences in the observational volumes of the two instruments. Further, we extend the comparison to address the ionization impact on the meteor radar winds. For quiet hours, the observed meteor radar winds are quite consistent with the lidar winds, while during the disturbed hours comparatively large differences are noticed at the uppermost altitudes. This might be due to the ionization impact on meteor radar winds. At present, one event is not sufficient to draw any consolidated conclusion; however, at least from this study we found some effect on the neutral wind measurements for the meteor radar. Further study with more co-located measurements is needed to test the statistical significance of the result.
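    The averaging step described above (lidar winds into meteor radar time-height bins, then correlation) can be sketched as follows; bin sizes, arrays and the stand-in radar winds are assumptions for illustration:

```python
# Average high-resolution lidar zonal winds into coarser radar time-height
# bins, then correlate with (stand-in) radar winds.
import numpy as np

rng = np.random.default_rng(2)
t_lidar = np.arange(0, 14 * 3600, 60.0)          # 14 h of 1-min lidar samples
z_lidar = np.arange(80.0, 100.0, 0.5)            # altitude grid, km
u_lidar = 20 * np.sin(2 * np.pi * t_lidar[:, None] / 43200) + \
          5 * rng.standard_normal((t_lidar.size, z_lidar.size))

t_edges = np.arange(0, 14 * 3600 + 1, 3600.0)    # assumed radar bins: 1 h x 2 km
z_edges = np.arange(80.0, 100.1, 2.0)
it = np.digitize(t_lidar, t_edges) - 1
iz = np.digitize(z_lidar, z_edges) - 1

u_binned = np.full((t_edges.size - 1, z_edges.size - 1), np.nan)
for i in range(t_edges.size - 1):
    for j in range(z_edges.size - 1):
        u_binned[i, j] = u_lidar[np.ix_(it == i, iz == j)].mean()

u_radar = u_binned + 5 * rng.standard_normal(u_binned.shape)  # stand-in radar winds
r = np.corrcoef(u_binned.ravel(), u_radar.ravel())[0, 1]
print(f"cross-correlation: {r:.2f}")
```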

  7. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff (the ones closest to the work) must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  8. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Abstract Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  9. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  10. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  11. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
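    For reference, the closed-form results announced in this abstract take simple forms in the repeaterless-limit literature; the summary below assumes that notation (transmissivity η, gain g, dephasing/erasure probability p, binary entropy H₂) and is a hedged reading of the abstract, not a derivation:

```latex
% Closed-form two-way assisted capacities/benchmarks quoted in this line of
% work (notation assumed from the repeaterless-limit literature).
\begin{align*}
  K_{\mathrm{loss}}(\eta) &= -\log_2(1-\eta)
    && \text{pure-loss channel, transmissivity } \eta,\\
  C_{\mathrm{amp}}(g) &= -\log_2\!\left(1 - g^{-1}\right)
    && \text{quantum-limited amplifier, gain } g > 1,\\
  C_{\mathrm{deph}}(p) &= 1 - H_2(p)
    && \text{qubit dephasing channel},\\
  K_{\mathrm{erase}}(p) &= 1 - p
    && \text{qubit erasure channel, erasure probability } p.
\end{align*}
```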

  12. Preliminary report of the comparison of multiple non-destructive assay techniques on LANL Plutonium Facility waste drums

    Energy Technology Data Exchange (ETDEWEB)

    Bonner, C.; Schanfein, M.; Estep, R. [and others]

    1999-03-01

    Prior to disposal, nuclear waste must be accurately characterized to identify and quantify the radioactive content. The DOE Complex faces the daunting task of measuring nuclear material with both a wide range of masses and matrices. Similarly daunting can be the selection of non-destructive assay (NDA) techniques to efficiently perform the quantitative assay over the entire waste population. In fulfilling its role as a DOE Defense Programs nuclear User Facility/Technology Development Center, the Los Alamos National Laboratory Plutonium Facility recently tested three commercially built and owned, mobile nondestructive assay (NDA) systems with special nuclear materials (SNM). Two independent commercial companies financed the testing of their three mobile NDA systems at the site. Contained within a single trailer are Canberra Industries' segmented gamma scanner/waste assay system (SGS/WAS) and neutron waste drum assay system (WDAS). The third system is a BNFL Instruments Inc. (formerly known as Pajarito Scientific Corporation) differential die-away imaging passive/active neutron (IPAN) counter. In an effort to increase the value of this comparison, additional NDA techniques at LANL were also used to measure these same drums. These comprise three tomographic gamma scanners (one mobile unit and two stationary) and one developmental differential die-away system. Although the drums are not certified standards, the authors hope that such a comparison will provide valuable data for those considering these different NDA techniques to measure their waste, as well as for the developers of the techniques.

  13. Preliminary Comparison of Reaction Rate theory and Object Kinetic Monte Carlo Simulations of Defect Cluster Dynamics under Irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Stoller, Roger E [ORNL; Golubov, Stanislav I [ORNL; Becquart, C. S. [Universite de Lille; Domain, C. [EDF R& D, Clamart, France

    2006-09-01

    The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo (MC), or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales (~μm to >mm) and timescales from diffusion (~μs) to long-term microstructural evolution (~years). Phenomena at this scale have the most direct impact on mechanical properties in structural materials of interest to nuclear energy systems, and are also the most accessible to direct comparison between the results of simulations and experiments. Recent advances in computational power have substantially expanded the range of application for MC models. Although the RT and MC models can be used to simulate the same phenomena, many of the details are handled quite differently in the two approaches. A direct comparison of the RT and MC descriptions has been made in the domain of point defect cluster dynamics modeling, which is relevant to both the nucleation and evolution of radiation-induced defect structures. The relative merits and limitations of the two approaches are discussed, and the predictions of the two approaches are compared for specific irradiation conditions.
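    As a pointer to what the RT side of such a comparison integrates, here is a minimal mean-field sketch of coupled vacancy/interstitial balance equations with production, mutual recombination and loss to fixed sinks; all parameter values are illustrative assumptions, not the paper's inputs:

```python
# Minimal mean-field rate-theory sketch: dC/dt = production - recombination
# - sink losses, for vacancies (Cv) and interstitials (Ci). Illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

G = 1e-6       # defect production rate (per site per second), assumed
DV, DI = 1e2, 1e8   # vacancy/interstitial mobilities (1/s), assumed
K_REC = 1e6    # recombination coefficient prefactor, assumed
S = 1e-11      # dimensionless sink strength, assumed

def rhs(t, c):
    cv, ci = c
    rec = K_REC * (DV + DI) * cv * ci      # mutual recombination
    return [G - rec - S * DV * cv,         # vacancy balance
            G - rec - S * DI * ci]         # interstitial balance

sol = solve_ivp(rhs, (0.0, 1e6), [0.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-20, dense_output=True)
for t in (1e0, 1e3, 1e6):
    cv, ci = sol.sol(t)
    print(f"t = {t:8.0e} s  Cv = {cv:.2e}  Ci = {ci:.2e}")
```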

  14. Effectiveness comparison between Thai traditional massage and Chinese acupuncture for myofascial back pain in Thai military personnel: a preliminary report.

    Science.gov (United States)

    Kumnerddee, Wipoo

    2009-02-01

    The objective of this randomized comparative study was to provide preliminary data on the comparative effectiveness of Thai traditional massage (TTM) and Chinese acupuncture for the treatment of myofascial back pain in young military personnel. Eighteen Thai military personnel, aged 20-40 years, were randomly divided into TTM and acupuncture groups. Each group received 5 sessions of massage or acupuncture during a 10-day period. The Thai version of the McGill Pain Questionnaire, a 100-mm visual analog scale (VAS) and the summation of pain thresholds at each trigger point measured by pressure algometer were assessed on days 0, 3, 8 and 10. At the end of the treatment protocols, McGill scores decreased significantly in the TTM and acupuncture groups (p = 0.024 and 0.002, respectively). VAS also decreased significantly (p = 0.029 and 0.003, respectively). However, the pain pressure threshold increased significantly in the acupuncture group but not in the TTM group (p = 0.006 and 0.08, respectively). When outcomes were compared between the two groups, no significant difference was found in the VAS (p = 0.115) or pain pressure threshold (p = 0.116), whereas the acupuncture group showed significantly lower McGill scores than the TTM group (p = 0.039). In conclusion, five sessions of Thai traditional massage or Chinese acupuncture were effective for the treatment of myofascial back pain in young Thai military personnel. Significant effects in both groups begin after the first session. Acupuncture is more effective than Thai traditional massage when the affective aspect is also evaluated.

  15. Comparison of image quality between mammography dedicated monitor and UHD 4K monitor, using standard mammographic phantom: A preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ji Young; Cha, Soon Joo; Hong, Sung Hwan; Kim, Su Young; Kim, Yong Hoon; Kim, You Sung; Kim, Jeong A [Dept. of Radiology, Inje Unveristy Ilsan Paik Hospital, Goyang (Korea, Republic of)

    2017-03-15

    Using standard mammographic phantom images, we compared the image quality obtained with a mammography-dedicated 5-megapixel monitor (5M) and a UHD 4K (4K) monitor with a Digital Imaging and Communications in Medicine (DICOM) display, to investigate the possibility of clinical application of 4K monitors. Phantom images at three different exposures (auto-, over- and underexposure) were obtained, and six radiologists independently evaluated the images on the 5M and 4K monitors without image modulation, by scoring the fibers, groups of specks and masses within the phantom image. The mean score of each object on both monitors was independently analyzed using the t-test, and interobserver reliability was assessed using the intraclass correlation coefficient (ICC) in SPSS. The overall mean scores of fibers, groups of specks, and masses on the 5M monitor were 4.25, 3.92, and 3.28 respectively, and the scores obtained on the 4K monitor were 3.81, 3.58, and 3.14, respectively. No statistical difference was seen in the scores for fibers and masses between the two monitors under all exposure conditions, but the score for groups of specks on the 4K monitor was statistically lower overall (p = 0.0492) and under underexposure conditions (p = 0.012). The ICC for interobserver reliability was excellent (0.874). Our study suggests that, since no significant difference in the image quality of the mammographic phantom images was observed between the two monitors, the 4K monitor could be used for clinical studies. Since this is a small preliminary study using phantom images, the result may differ for actual mammographic images, and subsequent investigation with clinical mammographic images is required.

  16. Comparison of effects of uncomplicated canine babesiosis and canine normovolaemic anaemia on abdominal splanchnic Doppler characteristics - a preliminary investigation

    Directory of Open Access Journals (Sweden)

    L.M. Koma

    2005-06-01

    Full Text Available A preliminary study was conducted to compare uncomplicated canine babesiosis (CB) and experimentally induced normovolaemic anaemia (EA) using Doppler ultrasonography of abdominal splanchnic vessels. Fourteen dogs with uncomplicated CB were investigated together with 11 healthy Beagles during severe EA, moderate EA and the physiological state as a control group. Canine babesiosis was compared with severe EA, moderate EA and the physiological state using Doppler variables of the abdominal aorta, cranial mesenteric artery (CMA), coeliac, left renal and interlobar, and hilar splenic arteries, and the main portal vein. Patterns of haemodynamic changes during CB and EA were broadly similar and were characterised by elevations in velocities and reductions in resistance indices in all vessels except the renal arteries when compared with the physiological state. Aortic and CMA peak systolic velocities and CMA end diastolic and time-averaged mean velocities in CB were significantly lower (P < 0.023) than those in severe EA. Patterns of renal haemodynamic changes during CB and EA were similar. However, the renal patterns differed from those of the aortic and gastrointestinal arteries, having elevations in vascular resistance indices, a reduction in end diastolic velocity and unchanged time-averaged mean velocity. The left renal artery resistive index in CB was significantly higher (P < 0.025) than those in EA and the physiological state. Renal interlobar artery resistive and pulsatility indices in CB were significantly higher (P < 0.016) than those of moderate EA and the physiological state. The similar haemodynamic patterns in CB and EA are attributable to anaemia, while significant differences may additionally be attributed to pathophysiological factors peculiar to CB.
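    The resistance indices referred to throughout this abstract are the standard spectral-Doppler quantities; for reference, their usual textbook definitions (not anything specific to this study) are:

```latex
% Standard spectral-Doppler indices (PSV: peak systolic velocity,
% EDV: end diastolic velocity, TAMV: time-averaged mean velocity).
\begin{align*}
  \mathrm{RI} &= \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{PSV}},
  &
  \mathrm{PI} &= \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{TAMV}}.
\end{align*}
```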

  17. A comparison of preliminary oncologic outcome and postoperative complications between patients undergoing either open or robotic radical cystectomy

    Directory of Open Access Journals (Sweden)

    Antonio Cusano

    Full Text Available ABSTRACT Purpose: To compare complications and outcomes in patients undergoing either open radical cystectomy (ORC) or robotic-assisted radical cystectomy (RRC). Materials and Methods: We retrospectively identified patients that underwent ORC or RRC between 2003 and 2013. We statistically compared preliminary oncologic outcomes of patients for each surgical modality. Results: 92 (43.2%) and 121 (56.8%) patients underwent ORC and RRC, respectively. While operative time was shorter for ORC patients (403 vs. 508 min; p<0.001), surgical blood loss and transfusion rates were significantly lower in RRC patients (p<0.001 and p=0.006). Length of stay was not different between groups (p=0.221). There was no difference in the proportion of lymph node-positive patients between groups. However, RRC patients had a greater number of lymph nodes removed during surgery (18 vs. 11.5; p<0.001). There was no significant difference in the incidence of pre-existing comorbidities or in the Clavien distribution of complications between groups. ORC and RRC patients were followed for a median of 1.38 (0.55-2.7) and 1.40 (0.58-2.59) years, respectively (p=0.850). During this period, a lower proportion (22.3%) of RRC patients experienced disease recurrence vs. ORC patients (34.8%). However, there was no significant difference in time to recurrence between groups. While ORC was associated with a higher all-cause mortality rate (p=0.049), there was no significant difference in disease-free survival time between groups. Conclusions: ORC and RRC patients experience postoperative complications of similar rates and severity. However, RRC may offer indirect benefits via reduced surgical blood loss and need for transfusion.

  18. Benchmarking protein classification algorithms via supervised cross-validation.

    Science.gov (United States)

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and
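    As an illustration of the supervised cross-validation idea described here, the following sketch cuts folds along known subtype labels so that test proteins come from subtypes unseen in training; the data, labels and classifier are toy placeholders, not the paper's benchmark collection:

```python
# Supervised cross-validation sketch: hold out whole subtypes (groups) so the
# test set contains only subtypes unseen during training.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 20))     # e.g., similarity-score features (toy)
y = rng.integers(0, 2, 300)            # class membership (toy)
subtype = rng.integers(0, 6, 300)      # known subtype of each protein (toy)

cv = GroupKFold(n_splits=3)            # whole subtypes held out together
for fold, (tr, te) in enumerate(cv.split(X, y, groups=subtype)):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    held_out = np.unique(subtype[te])
    print(f"fold {fold}: held-out subtypes {held_out}, "
          f"accuracy {clf.score(X[te], y[te]):.2f}")
```

Compared with plain random k-fold, this split tends to give lower but more realistic accuracy estimates, which is exactly the point the abstract makes.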

  19. Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.

    Science.gov (United States)

    Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S

    2015-02-03

    It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
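    A minimal sketch of the benchmarking arithmetic, assuming first-order loss during transit and a benchmark that is effectively non-reactive (the paper's actual treatment is more involved); all numbers are illustrative:

```python
# Half-life from the decline of the chemical/benchmark concentration ratio
# between lake inlet and outlet, assuming first-order transformation and a
# non-reactive benchmark. Illustrative values only.
import numpy as np

tau = 30.0        # water residence time (days), assumed
ratio_in = 1.00   # chemical/benchmark concentration ratio at the inlet
ratio_out = 0.25  # same ratio at the outlet

k = np.log(ratio_in / ratio_out) / tau   # first-order rate constant (1/day)
half_life = np.log(2.0) / k
print(f"k = {k:.3f} 1/day, half-life = {half_life:.1f} days")
```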

  20. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  1. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health, Vol. 4 (2010-10-01): Benchmark health benefits coverage, Section 440.330... SERVICES (CONTINUED), MEDICAL ASSISTANCE PROGRAMS, SERVICES: GENERAL PROVISIONS, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  2. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages from implementation, and a study of benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method provided an idea for a company to initiate benchmarking implementation and guided them to achieve the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  3. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  4. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  5. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  6. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users should be careful to ensure reproducibility of results based on th

  7. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking s

  8. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  9. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  10. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  11. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  12. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  13. TU-CD-304-03: Dosimetric Verification and Preliminary Comparison of Dynamic Wave Arc for SBRT Treatments

    Energy Technology Data Exchange (ETDEWEB)

    Burghelea, M [UZ BRUSSEL, Brussels (Belgium); BRAINLAB AG, Munich (Germany); Babes Bolyai University, Cluj-Napoca (Romania); Poels, K; Gevaert, T; Tournel, K; Dhont, J; De Ridder, M; Verellen, D [UZ BRUSSEL, Brussels (Belgium); Hung, C [BRAINLAB AG, Munich (Germany); Eriksson, K [RAYSEARCH LABORATORIES AB, Stockholm (Sweden); Simon, V [Babes Bolyai University, Cluj-Napoca (Romania)

    2015-06-15

    Purpose: To evaluate the potential dosimetric benefits and verify the delivery accuracy of Dynamic Wave Arc, a novel treatment delivery approach for the Vero SBRT system. Methods: Dynamic Wave Arc (DWA) combines simultaneous gantry/ring movement with inverse planning optimization, resulting in an uninterrupted non-coplanar arc delivery technique. Thirteen complex SBRT cases previously treated with 8–10 conformal static beams (CRT) were evaluated in this study. Eight primary centrally-located NSCLC cases (prescription dose 4×12 Gy or 8×7.5 Gy) and five oligometastatic cases (2×2 lesions, 10×5 Gy) were selected. DWA and coplanar VMAT plans, partially with dual arcs, were generated for each patient using identical objective functions for target volumes and OARs on the same TPS (RayStation, RaySearch Laboratories). Dosimetric differences and delivery times among these three planning schemes were evaluated. The DWA delivery accuracy was assessed using the Delta4 diode array phantom (ScandiDos AB). The gamma analysis was performed with the 3%/3 mm dose-difference and distance-to-agreement criteria. Results: The target conformity for CRT, VMAT and DWA was 0.95±0.07, 0.96±0.04 and 0.97±0.04, while the low-dose spillage gradients were 5.52±1.36, 5.44±1.11, and 5.09±0.98, respectively. Overall, the bronchus, esophagus and spinal cord maximum doses were similar between VMAT and DWA, but greatly reduced compared with CRT. For the lung cases, the mean dose and V20Gy were lower for the arc techniques compared with CRT, while for the liver cases, the mean dose and V30Gy presented slightly higher values. The average delivery times of VMAT and DWA were 2.46±1.10 min and 4.25±1.67 min, VMAT presenting shorter treatment times in all cases. The DWA dosimetric verification presented an average gamma index passing rate of 95.73±1.54% (range 94.2%–99.8%). Conclusion: Our preliminary data indicated that DWA is deliverable with clinically acceptable accuracy and has the potential to
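    The 3%/3 mm criterion quoted above is the standard gamma-index test; a brute-force one-dimensional sketch follows (global dose normalization, synthetic profiles; a real Delta4 analysis is three-dimensional and vendor-implemented):

```python
# Brute-force 1-D gamma index (3%/3 mm, global normalization).
# A point passes when its minimum gamma over all evaluated points is <= 1.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Gamma value at each reference point."""
    d_norm = dd * d_ref.max()                      # global dose criterion
    gam = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2         # distance term (mm)
        dose2 = ((d_eval - dr) / d_norm) ** 2      # dose-difference term
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam

x = np.linspace(-50, 50, 501)                      # position, mm
ref = np.exp(-(x / 20.0) ** 2)                     # reference dose profile
meas = 1.02 * np.exp(-((x - 0.5) / 20.0) ** 2)     # slightly shifted/scaled
g = gamma_1d(x, ref, x, meas)
print(f"gamma passing rate: {100 * np.mean(g <= 1.0):.1f}%")
```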

  14. Benchmarking the QUAD4/TRIA3 element

    Science.gov (United States)

    Pitrof, Stephen M.; Venkayya, Vipperla B.

    1993-01-01

    The QUAD4 and TRIA3 elements are the primary plate/shell elements in NASTRAN. These elements enable the user to analyze thin plate/shell structures for membrane, bending and shear phenomena. They are also very new elements in the NASTRAN library. These elements are extremely versatile and constitute a substantially enhanced analysis capability in NASTRAN. However, with the versatility comes the burden of understanding a myriad of modeling implications and their effect on accuracy and analysis quality. The validity of many aspects of these elements was established through a series of benchmark problem results and comparison with those available in the literature and obtained from other programs like MSC/NASTRAN and CSAR/NASTRAN. Nevertheless, such a comparison is never complete, because of the new and creative use of these elements in complex modeling situations. One of the important features of the QUAD4 and TRIA3 elements is the offset capability, which allows the midsurface of the plate to be noncoincident with the surface of the grid points. None of the previous elements, with the exception of the bar (beam), has this capability. The offset capability played a crucial role in the design of the QUAD4 and TRIA3 elements. It allowed modeling of layered composites, laminated plates and sandwich plates with metal and composite face sheets. Even though the basic implementation of the offset capability was found to be sound in previous applications, there is some uncertainty in relatively simple applications. The main purpose of this paper is to test the integrity of the offset capability and provide guidelines for its effective use. For the purpose of simplicity, references in this paper to the QUAD4 element will also include the TRIA3 element.
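    A schematic of the offset kinematics may help: with the midsurface a distance e from the grid plane and small rotations, grid-point DOFs map to midsurface displacements through a rigid link, and the element stiffness transforms congruently. This is a generic illustration (sign conventions vary between codes), not NASTRAN's internal formulation:

```latex
% Schematic offset kinematics: grid-point DOFs (u_g, v_g, w_g, \theta_x,
% \theta_y) mapped to the plate midsurface a distance e along the normal.
\begin{align*}
  u_m &= u_g + e\,\theta_y, \qquad
  v_m  = v_g - e\,\theta_x, \qquad
  w_m  = w_g,\\
  \mathbf{u}_m &= \mathbf{T}\,\mathbf{u}_g, \qquad
  \mathbf{K}_{\mathrm{offset}} = \mathbf{T}^{\mathsf T}\,\mathbf{K}\,\mathbf{T}.
\end{align*}
```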

  15. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    Science.gov (United States)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool for putting numbers to, i.e. quantifying, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology for verifying the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  16. The conjugate gradient NAS parallel benchmark on the IBM SP1

    Energy Technology Data Exchange (ETDEWEB)

    Trefethen, A.E.; Zhang, T. [Cornell Univ., Ithaca, NY (United States)

    1994-12-31

    The NAS Parallel Benchmarks are a suite of eight benchmark problems developed at the NASA Ames Research Center. They are specified in such a way that the benchmarkers are free to choose the language and method of implementation to suit the system in which they are interested. In this presentation the authors will discuss the Conjugate Gradient benchmark and its implementation on the IBM SP1. The SP1 is a parallel system comprising RS/6000 nodes connected by a high-performance switch. They will compare the results of the SP1 implementation with those reported for other machines. At this time, such a comparison shows the SP1 to be very competitive.
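    For orientation, the kernel this benchmark exercises is the classic conjugate-gradient iteration; a plain NumPy sketch of the core loop follows (the NAS version embeds CG in an inverse power iteration on a large random sparse matrix; the matrix and sizes here are illustrative):

```python
# Plain conjugate-gradient solver for a symmetric positive-definite system.
import numpy as np

def conjugate_gradient(a, b, tol=1e-10, max_iter=None):
    x = np.zeros_like(b)
    r = b - a @ x                  # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        ap = a @ p
        alpha = rs / (p @ ap)      # step length along p
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate update of search direction
        rs = rs_new
    return x

rng = np.random.default_rng(4)
m = rng.standard_normal((200, 200))
a = m @ m.T + 200 * np.eye(200)    # SPD test matrix (illustrative)
b = rng.standard_normal(200)
x = conjugate_gradient(a, b)
print(f"residual norm: {np.linalg.norm(a @ x - b):.2e}")
```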

  17. Preliminary comparison of the conventional and quasi-snowflake divertor configurations with the 2D code EDGE2D/EIRENE in the FAST tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Viola, B.; Maddaluno, G.; Pericoli Ridolfini, V. [EURATOM-ENEA Association, C.R. Frascati, Via E. Fermi 45, 00044 Frascati (Rome) (Italy); Corrigan, G.; Harting, D. [Culham Centre of Fusion Energy, EURATOM-Association, Abingdon (United Kingdom); Mattia, M. [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico, 00133 Roma (Italy); Zagorski, R. [Institute of Plasma Physics and Laser Microfusion-EURATOM Association, 01-497 Warsaw (Poland)

    2014-06-15

    The new magnetic configurations for tokamak divertors, snowflake and super-X, proposed to mitigate the problem of power exhaust in reactors, have clearly evidenced the need for accurate and reliable modeling of the physics governing the interaction with the plates. The initial effort undertaken jointly by ENEA and IPPLM focused on exploiting a simple and versatile modeling tool, namely the 2D TECXY code, to obtain a preliminary comparison between the conventional and snowflake configurations for the proposed new device FAST, which should realize an edge plasma with properties quite close to those of a reactor. The very interesting features found for the snowflake, namely a power load mitigation much larger than expected directly from the change of the magnetic topology, have further pushed us to check these results with the more sophisticated computational tool EDGE2D coupled with the neutral code module EIRENE. After preparatory work carried out to adapt this code combination to deal with non-conventional, single-null equilibria, and in particular with second-order nulls in the poloidal field generated in the snowflake configuration, in this paper we describe the first activity to compare these codes and discuss the first results obtained for FAST. The outcome of these EDGE2D runs is in qualitative agreement with those of TECXY, confirming the potential benefit obtainable from a snowflake configuration. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  18. The potent microtubule-stabilizing agent (+)-discodermolide induces apoptosis in human breast carcinoma cells--preliminary comparisons to paclitaxel.

    Science.gov (United States)

    Balachandran, R; ter Haar, E; Welsh, M J; Grant, S G; Day, B W

    1998-01-01

    (+)-Discodermolide, a sponge-derived natural product, stabilizes microtubules more potently than paclitaxel despite the lack of any obvious structural similarities between the drugs. It competitively inhibits the binding of paclitaxel to tubulin polymers, hypernucleates microtubule assembly more potently than paclitaxel, and inhibits the growth of paclitaxel-resistant ovarian and colon carcinoma cells. Because paclitaxel shows clinical promise for breast cancer treatment, its effects in a series of human breast cancer cells were compared to those of (+)-discodermolide. Growth inhibition, cell and nuclear morphological, and electrophoretic and flow cytometric analyses were performed on (+)-discodermolide-treated MCF-7 and MDA-MB231 cells. (+)-Discodermolide potently inhibited the growth of both cell types (IC50 …). Discodermolide-treated cells exhibited condensed and highly fragmented nuclei. Flow cytometric comparison of cells treated with either drug at 10 nM, a concentration well below that achieved clinically with paclitaxel, showed both caused cell cycle perturbation and induction of a hypodiploid cell population. (+)-Discodermolide caused these effects more extensively and at earlier time points. The timing and type of high molecular weight DNA fragmentation induced by the two agents were consistent with the induction of apoptosis. The results suggest that (+)-discodermolide has promise as a new chemotherapeutic agent against breast and other cancers.

  19. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    Directory of Open Access Journals (Sweden)

    van Lent Wineke AM

    2010-08-01

    Full Text Available Abstract Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting description was used to study the research objectives. Results We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting.

  20. The results of the pantograph-catenary interaction benchmark

    Science.gov (United States)

    Bruni, Stefano; Ambrosio, Jorge; Carnicero, Alberto; Cho, Yong Hyeon; Finner, Lars; Ikeda, Mitsuru; Kwon, Sam Young; Massat, Jean-Pierre; Stichel, Sebastian; Tur, Manuel; Zhang, Weihua

    2015-03-01

This paper describes the results of a voluntary benchmark initiative concerning the simulation of pantograph-catenary interaction, which was proposed and coordinated by Politecnico di Milano with the participation of 10 research institutions established in 9 different countries across Europe and Asia. The aims of the benchmark are to assess the dispersion of results on the same simulation study cases, to demonstrate the accuracy of numerical methodologies and simulation models, and to identify the modelling approaches best suited to studying pantograph-catenary interaction. One static and three dynamic simulation cases were defined for a non-existing but realistic high-speed pantograph-catenary couple. These cases were run using 10 of the major simulation codes presently in use for the study of pantograph-catenary interaction, and the results are presented and critically discussed here. All input data required to run the study cases are also provided, allowing the use of this benchmark as a term of comparison for other simulation codes.
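
    A common first step in assessing the dispersion mentioned above is to compare a scalar output of each code, for instance the standard deviation of the contact force over a run. The following Python sketch shows that bookkeeping with purely hypothetical numbers standing in for per-code results (the benchmark's actual cases and acceptance criteria are defined in the paper):

        from statistics import mean, stdev

        # Hypothetical per-code results for one dynamic case: standard deviation
        # of the pantograph contact force (N) reported by each simulation code
        results = {'code 1': 54.2, 'code 2': 56.1, 'code 3': 53.0,
                   'code 4': 57.8, 'code 5': 55.5}

        vals = list(results.values())
        m, s = mean(vals), stdev(vals)
        print(f"cross-code mean = {m:.1f} N, spread = {s:.1f} N ({100 * s / m:.1f}%)")
        for code, v in results.items():
            print(f"{code}: {v:5.1f} N, deviation from mean {100 * (v - m) / m:+5.1f}%")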

  1. 75 FR 16439 - Certain Welded Carbon Steel Standard Pipe From Turkey: Preliminary Results of Countervailing Duty...

    Science.gov (United States)

    2010-04-01

    ... interest rates for comparable commercial loans. See 19 CFR 351.505(a). Where no company-specific benchmark... Benchmark (March 25, 2010). We then compared that interest rate with the interest rates that the company... rate for each company under review is de minimis. See the ``Preliminary Results of Review'' section...

  2. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  3. Plans to update benchmarking tool.

    Science.gov (United States)

    Stokoe, Mark

    2013-02-01

The use of the current AssetMark system by hospital health facilities managers and engineers (in Australia) has decreased to a point of no activity occurring. A number of reasons have been cited, including cost, the time required, the slow process, and the level of information required. Based on current levels of activity, it would not be of any value to IHEA, or to its members, to continue with this form of AssetMark. For AssetMark to remain viable, it needs to be developed as a tool seen to be of value to healthcare facilities managers, and not just healthcare facility engineers. Benchmarking is still a very important requirement in the industry, and AssetMark can fulfil this need provided that it remains abreast of customer needs. The proposed future direction is to develop an online version of AssetMark with its current capabilities regarding capturing of data (12 Key Performance Indicators), reporting, and user interaction. The system would also provide end-users with access to live reporting features via a user-friendly web interface linked through the IHEA web page.

  4. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions.

  5. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
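
    The TKO score lends itself to direct simulation: estimate the mean outbreak size on the temporal contact sequence, then re-estimate it with each agent knocked out and record the drop. The sketch below is a simplified Python illustration, not the authors' implementation; it removes an agent for an entire run (rather than at each time point, as in the paper), uses SIR dynamics only, and draws seeds uniformly at random:

        import random

        def sir_outbreak(edges, seed, beta=0.5, tau=3, removed=None):
            """One SIR realization on a time-ordered contact list [(t, u, v), ...].
            An infected node transmits with probability beta at each contact for
            tau time units after infection; returns the number ever infected."""
            t_inf = {seed: edges[0][0] if edges else 0}
            for t, u, v in edges:
                if removed in (u, v):
                    continue                      # knocked-out agent has no contacts
                for a, b in ((u, v), (v, u)):
                    if a in t_inf and (t - t_inf[a]) < tau and b not in t_inf:
                        if random.random() < beta:
                            t_inf[b] = t
            return len(t_inf)

        def tko(edges, nodes, runs=500):
            """Temporal knockout score per node: baseline mean spread minus the
            mean spread with that node removed (seeds drawn uniformly)."""
            def mean_spread(removed=None):
                pool = [n for n in nodes if n != removed]
                return sum(sir_outbreak(edges, random.choice(pool), removed=removed)
                           for _ in range(runs)) / runs
            base = mean_spread()
            return {n: base - mean_spread(removed=n) for n in nodes}

        # Toy temporal network: (time, node_a, node_b), sorted by time
        contacts = [(0, 'a', 'b'), (1, 'b', 'c'), (2, 'c', 'd'),
                    (3, 'a', 'd'), (4, 'b', 'd')]
        scores = tko(contacts, nodes=['a', 'b', 'c', 'd'], runs=200)
        print(sorted(scores.items(), key=lambda kv: -kv[1]))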

  6. Toward the Rational Benchmarking of Homogeneous H2-Evolving Catalysts

    Science.gov (United States)

    Artero, Vincent; Saveant, Jean-Michel

    2015-01-01

    Molecular electrocatalysts for H2 evolution are usually studied under various conditions (solvent, proton sources) that prevent direct comparison of their performances. We provide here a rational method for such a benchmark based on (i) the recent analysis of the current-potential response for two-electron-two-step mechanisms and (ii) the derivation of catalytic Tafel plots reflecting the interdependency of turnover frequency and overpotential based on the intrinsic properties of the catalyst, independently of contingent factors such as the cell characteristics. Such a methodology is exemplified on a series of molecular catalysts among the most efficient in recent literature. PMID:26269710
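
    A catalytic Tafel plot is simply the logarithm of the turnover frequency (TOF) plotted against overpotential. The sketch below uses one idealized closed form, a sigmoidal rise of TOF toward its plateau TOF_max parameterized by the overpotential at half-maximum; this is only an assumed stand-in for the full current-potential analysis in the paper, and the catalyst parameters are hypothetical:

        import math

        F, R, T = 96485.0, 8.314, 298.15          # C/mol, J/(mol K), K
        f = F / (R * T)                            # ~38.9 V^-1

        def tof(eta, tof_max, eta_half):
            """Idealized catalytic response: sigmoidal rise of turnover frequency
            with overpotential eta (V), saturating at tof_max (1/s)."""
            return tof_max / (1.0 + math.exp(f * (eta_half - eta)))

        # Two hypothetical catalysts: (TOF_max in 1/s, overpotential at half-maximum in V)
        catalysts = {'cat A': (1e3, 0.35), 'cat B': (1e5, 0.55)}

        for eta_mV in range(0, 701, 100):
            eta = eta_mV / 1000.0
            row = '  '.join(f"{name}: log10(TOF)={math.log10(max(tof(eta, *p), 1e-30)):6.2f}"
                            for name, p in catalysts.items())
            print(f"eta={eta_mV:3d} mV  {row}")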

  7. Benchmarking of Control Strategies for Wastewater Treatment Plants

    DEFF Research Database (Denmark)

Wastewater treatment plants are large non-linear systems subject to large perturbations in wastewater flow rate, load and composition. Nevertheless these plants have to be operated continuously, meeting stricter and stricter regulations. Many control strategies have been proposed in the literature … for improved and more efficient operation of wastewater treatment plants. Unfortunately, their evaluation and comparison – either practical or based on simulation – is difficult. This is partly due to the variability of the influent, to the complexity of the biological and biochemical phenomena …, plant layout, controllers, sensors, performance criteria and test procedures, i.e. a complete benchmarking protocol.

  8. The BOUT Project; Validation and Benchmark of BOUT Code and Experimental Diagnostic Tools for Fusion Boundary Turbulence

    Institute of Scientific and Technical Information of China (English)

    徐学桥

    2001-01-01

A boundary plasma turbulence code, BOUT, is presented. Preliminary encouraging results have been obtained in comparisons with probe measurements for a typical Ohmic discharge in the HT-7 tokamak. The validation and benchmark of the BOUT code and experimental diagnostic tools for fusion boundary plasma turbulence is proposed.

  9. Is macroporosity absolutely required for preliminary in vitro bone biomaterial study? A comparison between porous materials and flat materials.

    Science.gov (United States)

    Lee, Juliana T Y; Chow, King L; Wang, Kefeng; Tsang, Wai-Hung

    2011-11-08

Porous materials are highly preferred for bone tissue engineering due to space for blood vessel ingrowth, but this may introduce extra experimental variation because of the difficulty of precisely controlling porosity. In order to decide whether it is absolutely necessary to use porous materials in in vitro comparative osteogenesis studies of materials with different chemistries, we carried out an osteoinductivity study using C3H/10T1/2 cells, pluripotent mesenchymal stem cells (MSCs), on seven material types: hydroxyapatite (HA), α-tricalcium phosphate (α-TCP) and β-tricalcium phosphate (β-TCP) in both porous and dense forms, and tissue culture plastic. For all materials under test, dense materials gave higher alkaline phosphatase gene (Alp) expression compared with porous materials. In addition, the cell density effects on the 10T1/2 cells were assessed through an alkaline phosphatase protein (ALP) enzymatic assay. ALP expression was higher for higher initial cell plating density, and this explains the greater osteoinductivity of dense materials compared with porous materials for in vitro study, as porous materials have a higher surface area. On the other hand, the same trend of Alp mRNA level (HA > β-TCP > α-TCP) was observed for both porous and dense materials, validating the use of dense flat materials for comparative study of materials with different chemistries, for more reliable comparison when well-defined porous materials are not available. The avoidance of porosity variation would probably facilitate more reproducible results. This study does not suggest that porosity is irrelevant to bone regeneration applications, but emphasizes that there is often a tradeoff between higher clinical relevance and less variation in a less complex set-up, which facilitates a statistically significant conclusion. Technically, we also show that the base of normalization for ALP activity may influence the conclusion, and that there may be ALP activity from …

  10. NERSC-6 Workload Analysis and Benchmark Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Antypas, Katie; Shalf, John; Wasserman, Harvey

    2008-08-29

This report describes efforts carried out during early 2008 to determine some of the science drivers for the "NERSC-6" next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and description of key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.

  12. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    Science.gov (United States)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (32SO4^2-) over the heavier isotopologue (34SO4^2-). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations to represent kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set is comprised of four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition such as microbial sulfate reduction and dispersion are included in the problem set. To date, the participating RTM codes are CRUNCHTOPE, TOUGHREACT, MIN3P and THE GEOCHEMIST'S WORKBENCH. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
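
    The essence of every such formulation is that the two isotopologues are consumed at slightly different rates. As a zero-dimensional illustration (an assumed first-order rate law with fractionation factor alpha = k34/k32, not any participating code's actual formulation), the following sketch reproduces the classical Rayleigh enrichment of the residual sulfate pool:

        import math

        alpha = 0.970          # assumed fractionation factor k34/k32 (eps ~ -30 permil)
        k32 = 1.0e-6           # assumed first-order rate for 32S-sulfate (1/s)
        k34 = alpha * k32
        R_VCDT = 0.0441626     # 34S/32S ratio of the V-CDT reference standard

        def delta34S(ratio):
            """Express a 34S/32S ratio in permil notation relative to V-CDT."""
            return (ratio / R_VCDT - 1.0) * 1000.0

        # Closed-system sulfate pool starting at delta34S = 0 permil
        S32, S34 = 1.0, R_VCDT
        d0 = delta34S(S34 / S32)
        dt, t_end, t = 3600.0, 20 * 86400.0, 0.0
        while t < t_end:
            S32 -= k32 * S32 * dt      # light isotopologue consumed slightly faster
            S34 -= k34 * S34 * dt
            t += dt

        f = S32                        # fraction of (light) sulfate remaining
        print(f"fraction remaining f = {f:.3f}")
        print(f"simulated delta34S   = {delta34S(S34 / S32):6.2f} permil")
        print(f"Rayleigh delta34S    = {d0 + (alpha - 1) * 1000 * math.log(f):6.2f} permil")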

  13. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

… these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct … perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind … of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use …

  14. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257 …
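
    Results of such data testing are conventionally summarized as calculated-to-experimental (C/E) ratios of k_eff, often quoted in pcm (1 pcm = 10^-5). A minimal sketch of that bookkeeping, with hypothetical benchmark values rather than results from the paper:

        # Hypothetical benchmark results (illustrative only):
        # name -> (calculated k_eff, experimental benchmark k_eff)
        results = {
            'leu-comp-therm-001': (0.99935, 1.00000),
            'leu-comp-therm-002': (1.00072, 1.00000),
            'mix-met-fast-001':   (0.99871, 1.00000),
            'pu-sol-therm-011':   (1.00180, 1.00037),
        }

        ratios = {name: c / e for name, (c, e) in results.items()}
        mean_ce = sum(ratios.values()) / len(ratios)
        for name, ce in sorted(ratios.items()):
            print(f"{name:20s} C/E = {ce:.5f} ({(ce - 1.0) * 1e5:+6.0f} pcm)")
        print(f"{'average':20s} C/E = {mean_ce:.5f} ({(mean_ce - 1.0) * 1e5:+6.0f} pcm)")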

  15. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi…

  16. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  17. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  19. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

… provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic …

  20. Benchmarking of PR Function in Serbian Companies

    National Research Council Canada - National Science Library

    Nikolić, Milan; Sajfert, Zvonko; Vukonjanski, Jelena

    2009-01-01

    The purpose of this paper is to present methodologies for carrying out benchmarking of the PR function in Serbian companies and to test the practical application of the research results and proposed...

  1. Comparison of characteristics between patients with H7N9 living in rural and urban areas of Zhejiang Province, China: a preliminary report.

    Directory of Open Access Journals (Sweden)

    Jimin Sun

A total of 134 cases of H7N9 influenza infection were identified in 12 provinces of China between March 25 and September 31, 2013. Of these, 46 cases occurred in Zhejiang Province. We carried out a preliminary comparison of characteristics between rural and urban H7N9 cases from Zhejiang Province, China. Field investigations were conducted for each confirmed H7N9 case. A standardized questionnaire was used to collect information about demographics, exposure history, clinical signs and symptoms, and timelines of medical visits and care after onset of illness. Of the 46 H7N9 cases in Zhejiang Province identified between March 25 and September 31, 2013, there were 16 rural cases and 30 urban cases. Compared to urban cases, there was a higher proportion of females among the rural cases [11/16 (69%) vs. 6/30 (20%); P = 0.001]. Among the rural cases, 14/15 (93%) with available data had a history of recent poultry exposure, which was significantly higher than that among urban cases (64%; P = 0.038). More patients from the rural group had a history of breeding poultry compared with those from the urban group [38% (6/16) vs. 10% (3/30), respectively; P = 0.025]. Interestingly, the median number of medical visits of patients from rural areas was higher than that of patients from urban areas (P = 0.046). There was no difference between the two groups in terms of age distribution, fatality rate, incubation period, symptoms, and underlying medical conditions. In conclusion, compared to patients from urban areas, more patients from rural areas were female, had an exposure history, had a history of breeding poultry, and had a higher number of medical visits. These findings indicate that there are different exposure patterns between patients living in rural and urban areas and that more rural cases were infected through backyard poultry breeding.
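
    The rural-urban contrasts above are comparisons of proportions in 2x2 tables. The abstract does not state which test produced the quoted P-values, but a Fisher exact test on the reported counts gives a value close to the reported one; a minimal sketch, assuming SciPy is available:

        from scipy.stats import fisher_exact

        # 2x2 table from the abstract: females among rural (11/16) and urban (6/30) cases
        table = [[11, 16 - 11],    # rural: female, male
                 [6, 30 - 6]]      # urban: female, male

        odds_ratio, p_value = fisher_exact(table, alternative='two-sided')
        print(f"odds ratio = {odds_ratio:.1f}, two-sided P = {p_value:.4f}")
        # expected to land close to the reported P = 0.001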

  2. Comparison of {sup 18}F-FACBC and {sup 11}C-choline PET/CT in patients with radically treated prostate cancer and biochemical relapse: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Nanni, Cristina; Boschi, Stefano [Azienda Ospedaliero-Universitaria di Bologna Policlinico S.Orsola-Malpighi, OU Nuclear Medicine, Bologna (Italy); Schiavina, Riccardo; Ambrosini, Valentina; Brunocilla, Eugenio; Martorana, Giuseppe; Fanti, Stefano [Azienda Ospedaliero-Universitaria di Bologna Policlinico S.Orsola-Malpighi, OU Urology, Bologna (Italy); Pettinato, Cinzia [Azienda Ospedaliero-Universitaria di Bologna Policlinico S.Orsola-Malpighi, OU Medical Physics, Bologna (Italy)

    2013-07-15

We assessed the detection rate of recurrent prostate cancer by PET/CT using anti-3-{sup 18}F-FACBC, a new synthetic amino acid, in comparison to that using {sup 11}C-choline, as part of an ongoing prospective single-centre study. Included in the study were 15 patients with biochemical relapse after initial radical treatment of prostate cancer. All the patients underwent anti-3-{sup 18}F-FACBC PET/CT and {sup 11}C-choline PET/CT within a 7-day period. The detection rates using the two compounds were determined and the target-to-background ratios (TBR) of each lesion are reported. No adverse reactions to anti-3-{sup 18}F-FACBC PET/CT were noted. On a patient basis, {sup 11}C-choline PET/CT was positive in 3 patients and negative in 12 (detection rate 20 %), and anti-3-{sup 18}F-FACBC PET/CT was positive in 6 patients and negative in 9 (detection rate 40 %). On a lesion basis, {sup 11}C-choline detected 6 lesions (4 bone, 1 lymph node, 1 local relapse), and anti-3-{sup 18}F-FACBC detected 11 lesions (5 bone, 5 lymph node, 1 local relapse). All {sup 11}C-choline-positive lesions were also identified by anti-3-{sup 18}F-FACBC PET/CT. The TBR of anti-3-{sup 18}F-FACBC was greater than that of {sup 11}C-choline in 8/11 lesions, as were image quality and contrast. Our preliminary results indicate that anti-3-{sup 18}F-FACBC may be superior to {sup 11}C-choline for the identification of disease recurrence in the setting of biochemical failure. Further studies are required to assess the efficacy of anti-3-{sup 18}F-FACBC in a larger series of prostate cancer patients. (orig.)

  3. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
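
    Component (3), the scoring system, can be made concrete with a toy example: normalize each variable's model-data mismatch by the variability of the benchmark observations, map it to a 0-1 skill score, and combine the scores with weights. The sketch below is one possible metric of this kind, not a formula prescribed by the framework:

        import math

        def rmse(model, obs):
            return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

        def skill(model, obs):
            """0-1 skill: 1 = perfect; 0 = error at least as large as the
            standard deviation of the benchmark observations."""
            mean_o = sum(obs) / len(obs)
            sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / len(obs))
            return max(0.0, 1.0 - rmse(model, obs) / sd_o) if sd_o > 0 else 0.0

        # Hypothetical benchmarks: variable -> (observations, model output, weight)
        benchmarks = {
            'GPP':         ([2.1, 5.0, 7.9, 6.2], [2.5, 4.6, 8.4, 5.8], 2.0),
            'latent heat': ([30.0, 80.0, 120.0, 90.0], [45.0, 70.0, 100.0, 95.0], 1.0),
            'soil C':      ([1.2, 1.1, 0.9, 1.0], [1.4, 1.3, 1.2, 1.1], 1.0),
        }

        weighted = sum(w * skill(m, o) for o, m, w in benchmarks.values())
        overall = weighted / sum(w for _, _, w in benchmarks.values())
        for var, (o, m, w) in benchmarks.items():
            print(f"{var:12s} skill = {skill(m, o):.2f} (weight {w})")
        print(f"overall benchmark score = {overall:.2f}")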

  5. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  6. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

Final report covering 12 Mar 2012 - 11 Mar 2015. Title: "Benchmarking attosecond physics with atomic hydrogen"; contract number FA2386-12-1-4025. PI: David Kielpinski (dave.kielpinski@gmail.com), Griffith University Centre …

  7. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes. … We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes.

  8. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  10. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
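
    In practice, verification against an analytic benchmark reduces to checking that each Monte Carlo estimate agrees with the exact k_eff within its reported statistical uncertainty. A minimal sketch of such a pass/fail check follows; the problem identifiers and numbers are hypothetical, not taken from the suite:

        # Verification check: is the Monte Carlo k_eff consistent with the exact
        # analytical value within its reported 1-sigma statistical uncertainty?
        cases = [
            # (problem id, analytic k_eff, MC k_eff, MC 1-sigma) -- hypothetical
            ('ua-1-0-SL', 1.000000, 0.99987, 0.00020),
            ('pu-2-0-SP', 1.000000, 1.00051, 0.00025),
        ]

        for name, exact, calc, sigma in cases:
            z = (calc - exact) / sigma
            verdict = 'PASS' if abs(z) < 3.0 else 'FAIL'   # 3-sigma acceptance band
            print(f"{name:12s} diff = {(calc - exact) * 1e5:+6.0f} pcm  z = {z:+5.2f}  {verdict}")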

  11. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. The report specifies the models and sources, and defines what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  12. A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg; Thomsen, Rene

    2004-01-01

… in several real-world applications. In this paper, we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally …

  13. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    NARCIS (Netherlands)

    van Lent, W.A.M.; de Beer, Relinde; van Harten, Willem H.

    2010-01-01

Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the …

  15. Benchmarking and Regulation of Electricity Transmission and Distribution Utilities: Lessons from International Experience

    OpenAIRE

    2001-01-01

Since the early 1980s, many countries have implemented electricity sector reform, many of which have unbundled generation, transmission, distribution and supply activities, and have introduced competition in generation and supply. An increasing number of countries are also adopting incentive regulation to promote efficiency improvement in the natural monopoly activities - transmission and distribution. Incentive regulation almost invariably involves benchmarking or comparison of actual vs...

  16. The benchmark analysis of gastric, colorectal and rectal cancer pathways: toward establishing standardized clinical pathway in the cancer care.

    Science.gov (United States)

    Ryu, Munemasa; Hamano, Masaaki; Nakagawara, Akira; Shinoda, Masayuki; Shimizu, Hideaki; Miura, Takeshi; Yoshida, Isao; Nemoto, Atsushi; Yoshikawa, Aki

    2011-01-01

    Most clinical pathways in treating cancers in Japan are based on individual physician's personal experiences rather than on an empirical analysis of clinical data such as benchmark comparison with other hospitals. Therefore, these pathways are far from being standardized. By comparing detailed clinical data from five cancer centers, we have observed various differences among hospitals. By conducting benchmark analyses, providing detailed feedback to the participating hospitals and by repeating the benchmark a year later, we strive to develop more standardized clinical pathways for the treatment of cancers. The Cancer Quality Initiative was launched in 2007 by five cancer centers. Using diagnosis procedure combination data, the member hospitals benchmarked their pre-operative and post-operative length of stays, the duration of antibiotics administrations and the post-operative fasting duration for gastric, colon and rectal cancers. The benchmark was conducted by disclosing hospital identities and performed using 2007 and 2008 data. In the 2007 benchmark, substantial differences were shown among five hospitals in the treatment of gastric, colon and rectal cancers. After providing the 2007 results to the participating hospitals and organizing several brainstorming discussions, significant improvements were observed in the 2008 data study. The benchmark analysis of clinical data is extremely useful in promoting more standardized care and, thus in improving the quality of cancer treatment in Japan. By repeating the benchmark analyses, we can offer truly clinical evidence-based higher quality standardized cancer treatment to our patients.
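
    The quantities benchmarked here, such as pre-operative and post-operative lengths of stay by hospital and cancer type, are straightforward to tabulate from per-patient administrative records. A minimal sketch with fabricated records (the real analysis used diagnosis procedure combination data):

        from collections import defaultdict
        from statistics import median

        # Hypothetical per-patient records:
        # (hospital, cancer, pre-op LOS, post-op LOS) in days
        records = [
            ('A', 'gastric', 4, 14), ('A', 'gastric', 3, 12), ('A', 'colon', 2, 9),
            ('B', 'gastric', 7, 18), ('B', 'gastric', 6, 16), ('B', 'colon', 4, 11),
            ('C', 'gastric', 3, 10), ('C', 'colon', 2, 8),   ('C', 'colon', 3, 9),
        ]

        by_key = defaultdict(lambda: ([], []))
        for hosp, cancer, pre, post in records:
            by_key[(hosp, cancer)][0].append(pre)
            by_key[(hosp, cancer)][1].append(post)

        print(f"{'hospital':8s} {'cancer':8s} {'median pre-op':>13s} {'median post-op':>14s}")
        for (hosp, cancer), (pre, post) in sorted(by_key.items()):
            print(f"{hosp:8s} {cancer:8s} {median(pre):13.1f} {median(post):14.1f}")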

  17. Comparison of VNEM to measured data from Ringhals unit 3. (Phase 3)

    Energy Technology Data Exchange (ETDEWEB)

    Tsuiki, M.; Mullet, S. (Institute for Energy Technology, OECD Halden Project, Kjeller (Norway))

    2011-01-15

1. PWR. The PWR core simulator CYGNUS, with the VNEM neutronics module, has been compared against measured data from the Ringhals unit 3 NPP through cycle 1A (core average burnup = 0 through 10,507 MWD/MT). The results can be summarized as: core eigenvalue = 0.99937 +/- 0.00086 before the intermediate 5-month shutdown, and core eigenvalue = 0.99647 +/- 0.00029 after it. The drop in core eigenvalue after the intermediate shutdown is attributed to the build-up of fissile elements during the long shutdown. A calculation model to track some important isotopes in addition to Xe135 and Sm149 (the isotopes tracked in the present version of CYGNUS) has to be implemented. As for the comparison of the neutron detector readings, the agreement was excellent throughout cycle 1A, as observed in Phases 1 and 2 (2008, 2009). The burnup tilt effect was not observed during cycle 1A. The verification of the burnup tilt model of CYGNUS will be performed in the next phase of the project. 2. BWR. A preliminary 2D numerical benchmarking was performed for BWR cores. The problems were generated imitating the NEACRP MOX PWR 2D benchmark problems. The results of comparisons of VNEM to a reference transport code (FCM2D), based on the method of characteristics, were as good as those obtained for PWR cores in similar benchmarking. (Author)

  18. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization and fingerprint comparison. In this article, we first survey the main methods that have been applied in the literature at each of these stages. Second, we focus on lane-segmentation and band-detection algorithms (the steps that usually require user intervention) and identify the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used both for lane segmentation and band detection, and evaluate their performance using our benchmark. From that study, we conclude that the average profile algorithm is the best starting point for lane segmentation and band detection.
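
    The average profile algorithm singled out above works by collapsing the image onto one axis: averaging intensity down each column yields a profile in which lanes appear as broad maxima. A minimal sketch, assuming NumPy and a simple threshold rule (production implementations smooth the profile and handle tilted lanes):

        import numpy as np

        def average_profile(img, axis=0):
            """Mean intensity along rows (axis=0) gives a column profile; on a
            gel image, lanes appear as wide maxima of this profile."""
            return img.mean(axis=axis)

        def find_lanes(profile, frac=0.5, min_width=3):
            """Segment the profile into lanes: contiguous runs above a threshold
            set at `frac` of the profile's dynamic range."""
            thr = profile.min() + frac * (profile.max() - profile.min())
            above = profile > thr
            lanes, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= min_width:
                        lanes.append((start, i - 1))
                    start = None
            if start is not None and len(above) - start >= min_width:
                lanes.append((start, len(above) - 1))
            return lanes

        # Synthetic 40x60 "gel": three bright vertical lanes on a dark background
        img = np.zeros((40, 60))
        for c0, c1 in [(5, 12), (25, 32), (45, 52)]:
            img[:, c0:c1] = 1.0
        img += 0.05 * np.random.default_rng(0).random(img.shape)   # mild noise

        print(find_lanes(average_profile(img)))   # roughly [(5, 11), (25, 31), (45, 51)]

    Band detection proceeds analogously within each detected lane, using the row-wise profile.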

  19. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    Science.gov (United States)

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's Solid Waste Management in the World's Cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap.
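
    The mix of quantitative indicators and ordinal composite indicators lends itself to simple tabulation. The sketch below scores a city profile and compares cities; the indicator names, ordinal scale and scoring rule are simplified stand-ins, not the actual Wasteaware definitions:

        # Simplified scoring: quantitative indicators as percentages, qualitative
        # composites on an ordinal Low .. High scale mapped to 0-100
        ORDINAL = {'Low': 20, 'Medium/Low': 40, 'Medium': 60,
                   'Medium/High': 80, 'High': 100}

        cities = {
            'City X': {'collection coverage': 95, 'controlled disposal': 90,
                       'recycling rate': 25, 'inclusivity': 'Medium',
                       'financial sustainability': 'Medium/High',
                       'institutions': 'High'},
            'City Y': {'collection coverage': 55, 'controlled disposal': 30,
                       'recycling rate': 15, 'inclusivity': 'Low',
                       'financial sustainability': 'Low',
                       'institutions': 'Medium/Low'},
        }

        for city, indicators in cities.items():
            scores = [v if isinstance(v, (int, float)) else ORDINAL[v]
                      for v in indicators.values()]
            print(f"{city}: unweighted mean score = {sum(scores) / len(scores):.0f}/100")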

  20. Linear multispecies gyrokinetic flux tube benchmarks in shaped tokamak plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Merlo, G.; Sauter, O.; Brunner, S.; Burckel, A.; Villard, L. [Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center (SPC), CH-1015 Lausanne (Switzerland); Camenen, Y. [Aix-Marseille Université CNRS, PIIM UMR 7345, 13397 Marseille (France); Casson, F. J. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Dorland, W. [Department of Physics, University of Maryland, College Park, Maryland 20742 (United States); Fable, E.; Görler, T. [Max-Planck Institut für Plasmaphysik, Boltzmannstr. 2, D-85748 Garching (Germany); Jenko, F.; Told, D. [Department of Physics and Astronomy, University of California, Los Angeles, California 90095 (United States); Peeters, A. G. [Physics Department, University of Bayreuth, 95440 Bayreuth (Germany)

    2016-03-15

Verification is the fundamental step that any turbulence simulation code has to undergo in order to assess the proper implementation of the underlying equations. We have carried out a cross-comparison of three flux tube gyrokinetic codes, GENE [F. Jenko et al., Phys. Plasmas 7, 1904 (2000)], GKW [A. G. Peeters et al., Comput. Phys. Commun. 180, 2650 (2009)], and GS2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)], focusing our attention on the effect of realistic geometries described by a series of MHD equilibria with increasing shaping complexity. To simplify the effort, the benchmark has been limited to the electrostatic collisionless linear behaviour of the system. A fully gyrokinetic model has been used to describe the dynamics of both ions and electrons. Several tests have been carried out looking at linear stability at ion and electron scales, where for the assumed profiles Ion Temperature Gradient (ITG)/Trapped Electron Modes and Electron Temperature Gradient modes are unstable. The capability of the codes to handle a non-zero ballooning angle has been successfully benchmarked in the ITG regime. Finally, the standard Rosenbluth-Hinton test has been successfully carried out looking at the effect of shaping on Zonal Flows (ZFs) and Geodesic Acoustic Modes (GAMs). Inter-code comparison as well as validation of simulation results against analytical estimates has been accomplished. All the performed tests confirm that plasma elongation strongly stabilizes plasma instabilities as well as leads to a strong increase in ZF residual and GAM damping.

  1. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only version with that of a GPU-accelerated version. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows: …

  3. Reducing maternal mortality: better monitoring, indicators and benchmarks needed to improve emergency obstetric care. Research summary for policymakers.

    Science.gov (United States)

    Collender, Guy; Gabrysch, Sabine; Campbell, Oona M R

    2012-06-01

    Several limitations of emergency obstetric care (EmOC) indicators and benchmarks are analysed in this short paper, which synthesises recent research on this topic. A comparison between Sri Lanka and Zambia is used to highlight the inconsistencies and shortcomings in current methods of monitoring EmOC. Recommendations are made to improve the usefulness and accuracy of EmOC indicators and benchmarks in the future.

  4. The impact of winter 2012 cold outbreak over the Northern Adriatic Sea dynamics: preliminary comparison among data and high resolution operational atmospheric models

    Science.gov (United States)

    Davolio, Silvio; Miglietta, Mario M.; Carniel, Sandro; Benetazzo, Alvise; Buzzi, Andrea; Drofa, Oxana; Falco, Pierpaolo; Fantini, Maurizio; Malguzzi, Piero; Ricchi, Antonio; Russo, Aniello; Paccagnella, Tiziana; Sclavo, Mauro

    2013-04-01

    exceptionally dense water formation, registered during the 2012 winter in the northern Adriatic region. During late January and early February, indeed, the basin was characterized by a persistent and exceptional cold anomaly responsible for large energy losses due to cold and extremely strong winds. Sea water temperatures dropped to about 6°C and the Venice lagoon was partially covered by ice. In the period of interest, available measurements in the northern Adriatic Sea (temperature, salinity, density, wind speed, direction and inferred heat fluxes) were used, together with satellite measurements, to carry out a first semi-quantitative comparison among existing meteorological models implemented over the region. Namely, the work presents an intercomparison among three state-of-the-art, non-hydrostatic NWP models: COSMO-I7, WRF and MOLOCH. All models are run in operational mode, and their results are used by several regional authorities and institutions for weather forecasting and support to civil protection decisions. Therefore, this evaluation is a useful assessment preliminary to a full coupling of the above-mentioned atmospheric models with existing ocean models already implemented in the region (e.g. ROMS in the COAWST system). Preliminary results also show some uncommon mesoscale structures reproduced by the models in the proximity of the central-southern Italian coast, and highlight their possible influence on the local surface sea circulation. These effects will soon be explored by means of fully-coupled ocean-atmosphere models within on-going projects.

  5. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them, the initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  6. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  7. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  8. A chemical EOR benchmark study of different reservoir simulators

    Science.gov (United States)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to the advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) solutions are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. Chemical flooding has recently attracted great interest for challenging situations, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators need first to be validated against well-controlled lab- and pilot-scale experiments to reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data, and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The UTCHEM research simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve
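
    For context on why polymer thickening improves sweep, a minimal fractional-flow sketch in Python with simple Corey relative permeabilities; the exponents, endpoint saturations, and viscosities are illustrative assumptions, not values from the benchmark study:

        def fractional_flow(sw, mu_w, mu_o, n_w=2.0, n_o=2.0, swc=0.2, sor=0.2):
            """Water fractional flow with Corey relative permeabilities.
            Raising mu_w (polymer thickening) lowers fw at a given saturation,
            the basic mechanism exploited by P/SP/ASP floods."""
            s = (sw - swc) / (1.0 - swc - sor)       # normalized saturation
            krw, kro = s ** n_w, (1.0 - s) ** n_o
            return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

        print(fractional_flow(0.5, mu_w=0.5, mu_o=5.0))   # plain waterflood
        print(fractional_flow(0.5, mu_w=20.0, mu_o=5.0))  # polymer-thickened water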

  9. Synthesis of the OECD/NEA-PSI CFD benchmark exercise

    Energy Technology Data Exchange (ETDEWEB)

    Andreani, Michele, E-mail: Michele.andreani@psi.ch; Badillo, Arnoldo; Kapulla, Ralf

    2016-04-01

    Highlights: • A benchmark exercise on stratification erosion in containment was conducted using a test in the PANDA facility. • Blind calculations were provided by nineteen participants. • Results were compared with experimental data. • A ranking was made. • A large spread of results was observed, with very few simulations providing accurate results for the most important variables, though not for velocities. - Abstract: The third International Benchmark Exercise (IBE-3) conducted under the auspices of OECD/NEA is based on the comparison of blind CFD simulations with experimental data addressing the erosion of a stratified layer by an off-axis buoyant jet in a large vessel. The numerical benchmark exercise is based on a dedicated experiment in the PANDA facility conducted at the Paul Scherrer Institut (PSI) in Switzerland, using only one vessel. The use of non-prototypical fluids (i.e. helium as simulant for hydrogen, and air as simulant for steam), and the consequent absence of the complex physical effects produced by steam condensation enhanced the suitability of the data for CFD validation purposes. The test started with a helium–air layer at the top of the vessel and air in the lower part. The helium-rich layer was gradually eroded by a low-momentum air/helium jet emerging at a lower elevation. Blind calculation results were submitted by nineteen participants, and the calculation results have been compared with the PANDA data. This report, adopting the format of the reports for the two previous exercises, includes a ranking of the contributions, where the largest weight is given to the time progression of the erosion of the helium-rich layer. In accordance with the limited scope of the benchmark exercise, this report is more a collection of comparisons between calculated results and data than a synthesis. Therefore, the few conclusions are based on the mere observation of the agreement of the various submissions with the test result, and do not

  10. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking, and to form the sequence of stages of internal benchmarking technology, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis: business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standardized assessment of their performance and applying their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementing the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods applied to reference objects in order to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of its theoretical bases and practical experience. Selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, and study appropriate areas of use and a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and the establishment of minimum acceptable levels of efficiency for the processes and activities available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  11. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment, in which concentrations of contaminants in the environment are compared to toxicological benchmarks that represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment, in which toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of the effects of 76 chemicals on 8 representative mammalian wildlife species and of 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider only contaminant exposure through oral ingestion of contaminated media; exposures through inhalation or direct dermal contact are not considered in this report.
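
    A minimal Python sketch of the first-tier screening comparison the report describes, flagging chemicals whose media concentration exceeds the benchmark (hazard quotient above 1); the concentrations and benchmark values below are hypothetical:

        def screen(concentrations, benchmarks):
            """Tier-1 screening: flag chemicals whose media concentration
            exceeds the no-effect benchmark (hazard quotient HQ > 1)."""
            flagged = {}
            for chem, conc in concentrations.items():
                hq = conc / benchmarks[chem]
                if hq > 1.0:
                    flagged[chem] = round(hq, 2)
            return flagged

        # Hypothetical site concentrations and benchmarks (mg/kg soil)
        site = {"cadmium": 12.0, "zinc": 40.0}
        bench = {"cadmium": 4.0, "zinc": 120.0}
        print(screen(site, bench))  # {'cadmium': 3.0}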

  12. Analytical benchmarks for precision particle tracking in electric and magnetic rings

    Energy Technology Data Exchange (ETDEWEB)

    Metodiev, E.M. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Harvard College, Harvard University, Cambridge, MA 02138 (United States); Center for Axion and Precision Physics Research, IBS, Daejeon 305-701 (Korea, Republic of); Department of Physics, KAIST, Daejeon 305-701 (Korea, Republic of); D' Silva, I.M.; Fandaros, M. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Gaisser, M. [Center for Axion and Precision Physics Research, IBS, Daejeon 305-701 (Korea, Republic of); Department of Physics, KAIST, Daejeon 305-701 (Korea, Republic of); Hacıömeroğlu, S. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Center for Axion and Precision Physics Research, IBS, Daejeon 305-701 (Korea, Republic of); Istanbul Technical University, Istanbul 34469 (Turkey); Department of Physics, KAIST, Daejeon 305-701 (Korea, Republic of); Huang, D. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Huang, K.L. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Harvard College, Harvard University, Cambridge, MA 02138 (United States); Patil, A.; Prodromou, R.; Semertzidis, O.A.; Sharma, D.; Stamatakis, A.N. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Orlov, Y.F. [Department of Physics, Cornell University, Ithaca, NY (United States); Semertzidis, Y.K. [Brookhaven National Laboratory, Physics Department, Upton, NY 11973 (United States); Center for Axion and Precision Physics Research, IBS, Daejeon 305-701 (Korea, Republic of); Department of Physics, KAIST, Daejeon 305-701 (Korea, Republic of)

    2015-10-11

    To determine the accuracy of tracking programs for precision storage ring experiments, analytical estimates of particle and spin dynamics in electric and magnetic rings were developed and compared to the numerical results of a tracking program based on Runge–Kutta/Predictor–Corrector integration. Initial discrepancies in the comparisons indicated the need to improve several of the analytical estimates. In the end, this rather slow program passed all benchmarks, often agreeing with the analytical estimates to the part-per-billion level. Thus, it can in turn be used to benchmark faster tracking programs for accuracy.
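
    A minimal Python sketch of the kind of fourth-order Runge-Kutta stepping such tracking programs build on, applied here to circular motion in a uniform magnetic field where the exact answer is known; this illustrates the integration scheme only, not the benchmarked program itself:

        import numpy as np

        def rk4_step(f, t, y, h):
            """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Charged-particle motion in a uniform vertical B field (scaled units):
        # state s = (x, y, vx, vy), cyclotron frequency omega = qB/m = 1.
        omega = 1.0
        def lorentz(t, s):
            x, y, vx, vy = s
            return np.array([vx, vy, omega * vy, -omega * vx])

        s = np.array([1.0, 0.0, 0.0, 1.0])
        h, n = 1e-3, 6283  # ~one cyclotron period (2*pi)
        for i in range(n):
            s = rk4_step(lorentz, i * h, s, h)
        print(s[:2])  # should be very close to the starting point (1, 0)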

  13. Analysis and comparison. Bd. 1 - Longlife. Report on the analysis of state of technology, administrative and legal procedures, financial situation, demographic needs, similarities and differences in the participating countries Denmark, Germany, Lithuania, Poland and Russia. Formulations and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Rueckert, Klaus (ed.)

    2010-07-01

    Environmental considerations are becoming one of the key features in design when it comes to constructing modern, sustainable residential buildings. In an effort to streamline procedures and practices, the Longlife project has conducted a comparative review of these among Denmark, Germany, Lithuania, Poland and Russia (associated organizations). The countries involved have shared knowledge and experience with each other about how their respective building processes operate; these are collated and analysed. There are differences and commonalities in the state of technology, administrative and legal procedures, financial situation, demographic needs, and how a 'housing project' functions. With this exercise Longlife has started to ensure that differences across the Baltic Sea Region will be minimised as regards environmentally friendly residential construction. This initial comparative stage covers planning, building permit and tendering procedures, practices for developing and operating housing, and construction technologies. The report reflects the currently most applicable features of the participating countries' processes. Longlife project partners work in three competence teams in order to use their special know-how and experience and to cooperate in a public-private partnership setting. For competence team 1 (engineering and building technology, design standards), the report analyses the engineering and technology standards in Denmark, Germany, Lithuania, Poland and Russia. For competence team 2 (administration procedures, licensing rules, tendering rules, laws), the report shows the comparison and investigation of administration procedures, building permit rules, tendering rules and laws in the participating countries. For competence team 3 (economic and financial basis), the report provides a general and a specific overview of economic and financial issues, sustainability and quality aspects in the

  14. Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; John A. Forester; Andreas Bye; Vinh N. Dang; Erasmia Lois

    2010-06-01

    The International Human Reliability Analysis (HRA) Empirical Study is a comparative benchmark of the prediction of HRA methods to the performance of nuclear power plant crews in a control room simulator. There are a number of unique aspects to the present study that distinguish it from previous HRA benchmarks, most notably the emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to “translate” the results of different HRA methods. These seven lessons learned highlight how the present study can serve as a useful template for future benchmarking studies.

  15. Benchmarking Ionizing Space Environment Models

    Science.gov (United States)

    Bourdarie, S.; Inguimbert, C.; Standarovski, D.; Vaillé, J.-R.; Sicard-Piet, A.; Falguere, D.; Ecoffet, R.; Poivey, C.; Lorfèvre, E.

    2017-08-01

    In-flight feedback data, such as displacement damage doses, ionizing doses, and cumulated Single Event Upsets (SEUs), are collected on board various space vehicles and compared to predictions based on: 1) proton measurements from spectrometers on board the same spacecraft, where available, and 2) proton spectra predicted by the legacy AP8min model and the AP9 and Onera Proton Altitude Low models. When an accurate representation of the 3-D spacecraft shielding as well as appropriate ground calibrations are considered in the calculations, such comparisons provide powerful metrics to investigate engineering model accuracy. To describe >30 MeV trapped proton fluxes, the AP8min model is found to provide predictions closer to observations than AP9 V1.30.001 (mean and perturbed mean).

  16. Coral benchmarks in the center of biodiversity.

    Science.gov (United States)

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet, few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ±1.7) and generic diversity was 18 (σ±0.9) per 75m by 25m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover.

  17. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  18. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performances of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows the generation of various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free Java software that can be interfaced with most existing relational database management systems. A sample usag...

  19. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different… sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point… profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving…

  20. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
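
    A minimal Python sketch of the specific-energy KPI such benchmarks are built on, namely energy use normalized by plant load; the target value below is a placeholder, not one of the actual German benchmark figures:

        def specific_energy(annual_kwh, population_equivalent):
            """Specific energy consumption in kWh per population equivalent (PE)
            per year, the usual basis of WWTP energy benchmarks."""
            return annual_kwh / population_equivalent

        # Hypothetical plant compared against a placeholder target value
        kpi = specific_energy(1_800_000, 50_000)   # 36 kWh/PE/a
        target = 30.0
        print(f"KPI = {kpi:.1f} kWh/PE/a -> {'above' if kpi > target else 'within'} target")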

  1. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping…
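
    For intuition, a minimal Python sketch of additive secret sharing, the basic idea underlying SPDZ-style multiparty computation; SPDZ itself adds information-theoretic MACs and the preprocessing phase mentioned above, which this sketch omits:

        import random

        MODULUS = 2**61 - 1

        def share(secret, n=3):
            """Split a secret into n additive shares that sum to it mod p."""
            shares = [random.randrange(MODULUS) for _ in range(n - 1)]
            shares.append((secret - sum(shares)) % MODULUS)
            return shares

        def reconstruct(shares):
            return sum(shares) % MODULUS

        s = share(42)
        print(s, reconstruct(s))  # any n-1 shares alone reveal nothing about 42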

  2. FGK Benchmark Stars A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by homogeneously analyzing a spectral library of benchmark stars with up to seven different methods. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of the benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  3. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance… The analysis draws on data for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically…

  4. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  5. 77 FR 33167 - Citric Acid and Certain Citrate Salts From the People's Republic of China: Preliminary Results of...

    Science.gov (United States)

    2012-06-05

    ... companies. Benchmarks and Discount Rates The Department is investigating loans received by the RZBC... reported by the company as a benchmark. If the firm did not have any comparable commercial loans during... Imp. & Exp. Co., Ltd. (collectively, the RZBC Companies). If these preliminary results are adopted...

  6. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn regarding the robustness of the proposed system.
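
    A minimal Python sketch of a band-rating function of the kind the paper investigates, mapping daily per-capita use to a benchmark band (A best); the thresholds are hypothetical, not the paper's calibrated values:

        def water_band(litres_per_person_per_day):
            """Map daily per-capita consumption to an illustrative benchmark band."""
            bands = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
            for limit, band in bands:
                if litres_per_person_per_day <= limit:
                    return band
            return "F"

        print(water_band(95))   # 'B'
        print(water_band(150))  # 'E'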

  7. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes. This makes it difficult to compare the resources used, since some programmes by their nature require more classroom time and equipment than others. It is also far from straightforward to compare college effects with respect to grades, since the various programmes apply very different forms of assessment…

  8. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much…

  9. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Alongside the annual statistics, results are published from a benchmarking model in which the number of awards in an individual municipality is compared with the number of awards that would be expected had the municipality followed the same decision practice as the "average municipality", correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a…

  10. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  11. Analytical Benchmarking, Precision Particle Tracking, Electric and Magnetic Storage Rings, Runge-Kutta, Predictor-Corrector

    CERN Document Server

    Metodiev, E M; Fandaros, M; Haciomeroglu, S; Huang, D; Huang, K L; Patil, A; Prodromou, R; Semertzidis, O A; Sharma, D; Stamatakis, A N; Orlov, Y F; Semertzidis, Y K

    2015-01-01

    A set of analytical benchmarks for tracking programs is required for precision storage ring experiments. To determine the accuracy of precision tracking programs in electric and magnetic rings, a variety of analytical estimates of particle and spin dynamics in the rings were developed and compared to the numerical results of tracking simulations. Initial discrepancies in the comparisons indicated the need for improvement of several of the analytical estimates. As an example, we find that the fourth-order Runge-Kutta/Predictor-Corrector method was accurate but slow, and that it passed all the benchmarks it was tested against, often to the sub-part-per-billion level. Thus high-precision analytical estimates and tracking programs based on fourth-order Runge-Kutta/Predictor-Corrector integration can be used to benchmark faster tracking programs for accuracy.

  12. A new methodology for building energy benchmarking: An approach based on clustering concept and statistical models

    Science.gov (United States)

    Gao, Xuefeng

    Though many building energy benchmarking programs have been developed during the past decades, they have certain limitations. The major concern is that they may produce misleading benchmarks by not fully considering the impact of multiple building features on energy performance. Existing methods classify buildings according to only one of many features -- the use type -- which may result in a comparison between two buildings that are tremendously different in other features and therefore not properly comparable. This research aims to tackle this challenge by proposing a new methodology based on the clustering concept and statistical analysis. The clustering concept, which draws on machine learning algorithms, classifies buildings based on a multi-dimensional domain of building features, rather than the single dimension of use type. Buildings with the greatest similarity of features that influence energy performance are classified into the same cluster and benchmarked according to the centroid reference of the cluster. Statistical analysis is applied to find the most influential features impacting building energy performance, as well as to provide prediction models for new-design energy consumption. The proposed methodology, applicable to both existing-building benchmarking and new-design benchmarking, is discussed in this dissertation. The former contains four steps: feature selection, clustering algorithm adaptation, results validation, and interpretation. The latter consists of three parts: data observation, inverse modeling, and forward modeling. Experimentation and validation were carried out for both perspectives. It was shown that the proposed methodology could account for total building energy performance and was able to provide a more comprehensive approach to benchmarking. In addition, the multi-dimensional clustering concept enables energy benchmarking among different types of buildings, and inspires a new
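
    A minimal sketch in Python (scikit-learn) of the clustering idea: buildings are grouped on several features at once and each is benchmarked against its cluster's peers. The features and values are hypothetical, and feature scaling is omitted for brevity (it would matter in practice):

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical features: [floor area (m2), occupancy, operating hours/week]
        X = np.array([
            [1200, 80, 60], [1350, 90, 55], [400, 20, 40],
            [450, 25, 45], [5000, 400, 80], [5200, 390, 84],
        ], dtype=float)
        eui = np.array([210, 230, 120, 110, 340, 355])  # energy use intensity, kWh/m2/a

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        for b, label in enumerate(km.labels_):
            peers = eui[km.labels_ == label]
            print(f"building {b}: EUI {eui[b]:.0f} vs cluster mean {peers.mean():.0f}")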

  13. Installation and Benchmark Tests of CINDER'90 Activation Code

    Energy Technology Data Exchange (ETDEWEB)

    Kum, Oyeon; Heo, Seung Uk; Oh, Sunju [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2014-10-15

    For the reactor community, CINDER'90 is now integrated directly into MCNPX so that transmutation processes can be considered together with particle transport by using the 'BURN' option card. However, the much smaller accelerator community cannot take advantage of these improvements, because the BURN option card of MCNPX is only accessible in eigenvalue calculations of radioactive systems. In this study, we introduce preliminary results of activation calculations with the CINDER'90 code using previously approved benchmark problems. The transmutation code CINDER'90, which is widely used in US and European national laboratories, was successfully installed and benchmarked. The basic theory of atomic transmutation is introduced and three typical benchmark test problems are solved. The code works with MCNPX and is easy to use with the Perl script. The results are well summarized in tabular format by using a post-processing code, TABCODE. Cross-benchmark tests with FLUKA are underway and the results will be presented in the near future. In addition, creating a new integrated activation code (MCNPX + CINDER) is now being seriously considered for more successful activation analysis.
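
    For orientation, a minimal sketch in Python (SciPy) of the kind of depletion problem transmutation codes solve, here a toy three-nuclide decay chain advanced with a matrix exponential; CINDER'90's actual linear-chain algorithm differs, and the decay constants below are arbitrary:

        import numpy as np
        from scipy.linalg import expm

        # Toy chain A -> B -> C with decay constants lam (1/s): dN/dt = A N
        lam_a, lam_b = 1e-3, 5e-4
        A = np.array([
            [-lam_a,    0.0, 0.0],
            [ lam_a, -lam_b, 0.0],
            [   0.0,  lam_b, 0.0],
        ])
        N0 = np.array([1.0e20, 0.0, 0.0])   # initial atom densities

        t = 3600.0                           # advance one hour
        N = expm(A * t) @ N0
        print(N)                             # populations; total is conserved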

  14. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research's (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly in order to perform parametric studies on the design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tube, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations, so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect the nuclear characteristics of the core is not a feature of SPOC, and as such these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.
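
    As background on why specific impulse estimates can disagree, a minimal Python sketch of one textbook ideal-nozzle formula for a hydrogen-exhaust NTP engine; the ratio of specific heats, pressure ratio, and chamber temperature are illustrative assumptions, and real estimates differ precisely because they use different correlations:

        import math

        def ideal_isp(T_chamber, gamma=1.4, M=2.016e-3, pe_over_pc=0.01, g0=9.80665):
            """Ideal-nozzle specific impulse (s) for H2 exhaust, from the
            isentropic exhaust-velocity relation; one formula among several."""
            R = 8.314462618  # J/(mol K)
            ve = math.sqrt(2 * gamma / (gamma - 1) * (R / M) * T_chamber
                           * (1 - pe_over_pc ** ((gamma - 1) / gamma)))
            return ve / g0

        print(f"Isp at 2700 K: {ideal_isp(2700):.0f} s")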

  15. Benchmarking health IT among OECD countries: better data for better policy.

    Science.gov (United States)

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  16. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in the management response time through daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performance. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the absolute energy consumption but the energy consumption per pollutant load. This approach enables the comparison of performance for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
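
    A minimal Python sketch of a load-normalized energy KPI carried as an interval when the pollutant load must be estimated between laboratory samples; the figures and the estimation bounds are hypothetical:

        def energy_kpi(kwh_per_day, cod_kg_per_day):
            """Energy KPI normalized by pollutant load (kWh per kg COD treated)."""
            return kwh_per_day / cod_kg_per_day

        # COD is sampled roughly fortnightly, so the daily value is estimated;
        # propagating the estimation range yields an interval-valued KPI.
        kwh = 5200.0
        cod_lo, cod_hi = 3400.0, 4100.0   # hypothetical load bounds (kg/day)
        print(f"KPI in [{energy_kpi(kwh, cod_hi):.2f}, "
              f"{energy_kpi(kwh, cod_lo):.2f}] kWh/kgCOD")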

  17. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Revised guidelines, supported by computational benchmarks, are needed for the regulation of the allowed neutron irradiation of reactor structures during a power plant's lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper a PCA benchmark was chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross-sections from the transport library v7_200n47g. The comparison of calculational results and benchmark data showed good agreement between the calculated and measured equivalent fission fluxes.

  18. The challenge of benchmarking health systems: is ICT innovation capacity more systemic than organizational dependent?

    Science.gov (United States)

    Lapão, Luís Velez

    2015-01-01

    The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison is very enlightening, it is also challenging. Benchmarking exercises present a set of challenges, such as the choice of methodologies and the assessment of the impact on organizational strategy. Precise benchmarking methodology is a valid tool for eliciting information about alternatives for improving health systems. However, many beneficial interventions, which benchmark as effective, fail to translate into meaningful healthcare outcomes across contexts. There is a relationship between results and the innovational and competitive environments. Differences in healthcare governance and financing models are well known; but little is known about their impact on Information and Communication Technology implementation. The article by Catan et al. provides interesting clues about this issue. Public systems (such as those of Portugal, UK, Sweden, Spain, etc.) present specific advantages and disadvantages concerning Information and Communication Technology development and implementation. Meanwhile, private systems based fundamentally on insurance packages, (such as Israel, Germany, Netherlands or USA) present a different set of advantages and disadvantages - especially a more open context for innovation. Challenging issues from both the Portuguese and Israeli cases will be addressed. Clearly, more research is needed on both benchmarking methodologies and on ICT implementation strategies.

  19. Characterization of a benchmark database for myoelectric movement classification.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Kuzborskij, Ilja; Elsig, Simone; Hager, Anne-Gabrielle Mittaz; Deriaz, Olivier; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2015-01-01

    In this paper, we characterize the Ninapro database and its use as a benchmark for hand prosthesis evaluation. The database is a publicly available resource that aims to support research on advanced myoelectric hand prostheses. The database is obtained by jointly recording surface electromyography signals from the forearm and kinematics of the hand and wrist while subjects perform a predefined set of actions and postures. Besides describing the acquisition protocol, the overall features of the datasets and the processing procedures in detail, we present benchmark classification results using a variety of feature representations and classifiers. Our comparison shows that simple feature representations such as mean absolute value and waveform length can achieve performance similar to that of the computationally more demanding marginal discrete wavelet transform. With respect to classification methods, the nonlinear support vector machine was found to be the only method consistently achieving high performance regardless of the type of feature representation. Furthermore, statistical analysis of these results shows that classification accuracy is negatively correlated with the subject's Body Mass Index. The analysis and the results described in this paper aim to be a strong baseline for the Ninapro database. Thanks to the Ninapro database (and the characterization described in this paper), the scientific community has the opportunity to converge to a common position on hand movement recognition by surface electromyography, a field that can strongly affect hand prosthesis capabilities.
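
    A minimal sketch in Python (NumPy) of the two simple feature representations the comparison highlights, mean absolute value and waveform length, computed over one sEMG window; in the benchmark such features feed a classifier such as an SVM:

        import numpy as np

        def mav(x):
            """Mean absolute value of one sEMG channel window."""
            return np.mean(np.abs(x))

        def waveform_length(x):
            """Waveform length: cumulative absolute first difference."""
            return np.sum(np.abs(np.diff(x)))

        rng = np.random.default_rng(0)
        window = rng.standard_normal(200)          # stand-in 200-sample window
        features = [mav(window), waveform_length(window)]
        print([round(f, 3) for f in features])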

  20. Community Benchmarking of Tsunami-Induced Nearshore Current Models

    Science.gov (United States)

    Lynett, P. J.; Wilson, R. I.; Gately, K.

    2015-12-01

    To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program (NTHMP) Strategic Plan includes a requirement to develop and run a benchmarking workshop to evaluate numerical tsunami modeling of currents. For this workshop, five different benchmarking datasets were organized. These datasets were selected based on characteristics such as geometric complexity, currents that are shear/separation driven (and thus de-coupled from the incident wave forcing), tidal coupling, and interaction with the built environment. While tsunami simulation models have generally been well validated against wave height and runup, comparisons with speed data are much less common. As model results are increasingly being used to estimate or indicate damage to coastal infrastructure, understanding the accuracy and precision of speed predictions becomes important. As a result of this 2-day workshop held in early 2015, modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts. In this presentation, the model results - from 14 different modelers - will be presented and summarized, with a focus on statistical and ensemble properties of the current predictions.

  1. Benchmarking an unstructured grid sediment model in an energetic estuary

    Science.gov (United States)

    Lopez, Jesse E.; Baptista, António M.

    2017-02-01

    A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
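
    As one example of a skill metric commonly used in such model-data benchmarks (not necessarily the authors' exact choice), a minimal sketch in Python (NumPy) of the Willmott index of agreement:

        import numpy as np

        def willmott_skill(model, obs):
            """Willmott (1981) index of agreement for modeled vs. observed
            series; 1 indicates perfect agreement."""
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            obar = obs.mean()
            num = np.sum((model - obs) ** 2)
            den = np.sum((np.abs(model - obar) + np.abs(obs - obar)) ** 2)
            return 1.0 - num / den

        print(willmott_skill([1.0, 2.1, 2.9], [1.0, 2.0, 3.0]))  # ~0.997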

  2. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  3. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. The main challenge in

  4. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark the reading abilities of Year Five students in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension tests and a set of indicators to inform…

  5. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  6. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx).

  7. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  8. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  9. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  10. Seven Benchmarks for Information Technology Investment.

    Science.gov (United States)

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  11. Benchmarking Peer Production Mechanisms, Processes & Practices

    Science.gov (United States)

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  12. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested upon seven cases from Japan and Denmark. Japanese...

  13. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
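
    The nonparametric optimal benchmark can be simulated directly: under complete information, each virtual patient carries a latent tolerance that determines their toxicity outcome at every dose, and the benchmark selects the dose whose complete-data toxicity estimate is closest to the target. A minimal sketch in the spirit of the O'Quigley et al. construction (the dose-toxicity scenario and sample size are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(0)

      def benchmark_accuracy(true_tox, target=0.25, n_patients=30, n_sims=10_000):
          """Probability that the complete-information benchmark picks the true MTD."""
          true_tox = np.asarray(true_tox, float)
          mtd = int(np.argmin(np.abs(true_tox - target)))  # true maximum tolerated dose
          hits = 0
          for _ in range(n_sims):
              u = rng.uniform(size=n_patients)             # latent tolerances
              # toxicity at dose d iff u <= true_tox[d]; estimate rate at every dose
              p_hat = (u[:, None] <= true_tox).mean(axis=0)
              hits += int(np.argmin(np.abs(p_hat - target)) == mtd)
          return hits / n_sims

      # Toy five-dose scenario with a 25% target toxicity rate
      print(benchmark_accuracy([0.05, 0.12, 0.25, 0.40, 0.55]))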

  14. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  15. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

    In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm × 10 cm × 11 cm) were measured at 61° and 121° using time-of-

  16. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  17. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
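
    Two of the system-level metrics named above reduce to simple ratios; a small Python sketch with illustrative numbers (not drawn from the LBNL or Fabs21 datasets):

      def air_change_rate(airflow_cfm, volume_ft3):
          """Air changes per hour (ACH) from recirculation airflow and room volume."""
          return airflow_cfm * 60.0 / volume_ft3

      def air_handling_w_per_cfm(fan_power_kw, airflow_cfm):
          """System-level air-handling efficiency in W/cfm (lower is better)."""
          return fan_power_kw * 1000.0 / airflow_cfm

      # Illustrative cleanroom bay: 120,000 cfm recirculated through 200,000 ft3
      print(air_change_rate(120_000, 200_000))      # -> 36.0 ACH
      print(air_handling_w_per_cfm(90.0, 120_000))  # -> 0.75 W/cfm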

  18. Issues in Benchmarking and Assessing Institutional Engagement

    Science.gov (United States)

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  19. Benchmarks of support in internal medicine residency training programs.

    Science.gov (United States)

    Wolfsthal, Susan D; Beasley, Brent W; Kopelman, Richard; Stickley, William; Gabryel, Timothy; Kahn, Marc J

    2002-01-01

    To identify benchmarks of financial and staff support in internal medicine residency training programs and their correlation with indicators of quality. A survey instrument to determine characteristics of support of residency training programs was mailed to each member program of the Association of Program Directors of Internal Medicine. Results were correlated with the three-year running average of the pass rates on the American Board of Internal Medicine certifying examination using bivariate and multivariate analyses. Of 394 surveys, 287 (73%) were completed: 74% of respondents were program directors and 20% were both chair and program director. The mean duration as program director was 7.5 years (median = 5), but it was significantly lower for women than for men (4.9 versus 8.1; p = .001). Respondents spent 62% of their time in educational and administrative duties, 30% in clinical activities, 5% in research, and 2% in other activities. Most chief residents were PGY4s, with 72% receiving compensation additional to base salary. On average, there was one associate program director for every 33 residents, one chief resident for every 27 residents, and one staff person for every 21 residents. Most programs provided trainees with incremental educational stipends, meals while on call, travel and meeting expenses, and parking. Support from pharmaceutical companies was used for meals, books, and meeting expenses. Almost all programs provided meals for applicants, with 15% providing travel allowances and 37% providing lodging. The programs' board pass rates significantly correlated with the numbers of faculty full-time equivalents (FTEs), the numbers of resident FTEs per office staff FTEs, and the numbers of categorical and preliminary applications received and ranked by the programs in 1998 and 1999. Regression analyses demonstrated three independent predictors of the programs' board pass rates: number of faculty (a positive predictor), percentage of clinical work

  1. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. In each country, we analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach’s alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
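
    As an illustration of the internal-consistency step mentioned above, Cronbach's alpha for a multi-item scale is straightforward to compute; the sketch below uses toy Likert data, not the HPM database:

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
          items = np.asarray(items, float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      # Toy data: 6 plants answering a 4-item supplier-integration scale (1-7 Likert)
      scores = np.array([[5, 6, 5, 6],
                         [3, 3, 4, 3],
                         [6, 7, 6, 7],
                         [2, 3, 2, 2],
                         [4, 4, 5, 4],
                         [7, 6, 7, 7]])
      print(round(cronbach_alpha(scores), 3))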

  2. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other.

  3. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  4. Benchmarking transaction and analytical processing systems: the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  5. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)
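
    The benchmark idea, normalizing consumption by school size and placing each school within a peer distribution, can be sketched in a few lines (all figures are invented for illustration; the actual Danish benchmarks also adjust for school age and after-school activities):

      import numpy as np

      # Illustrative peer group: annual electricity use per pupil (kWh/pupil)
      peers = np.array([180, 210, 240, 260, 300, 330, 390, 450])

      school = 95_000 / 350                       # this school: ~271 kWh/pupil
      pct_better = (peers < school).mean() * 100  # share of peers consuming less
      print(f"{school:.0f} kWh/pupil; {pct_better:.0f}% of peer schools use less")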

  6. How to Set the Allowance Benchmarking for Cement Industry in China’s Carbon Market: Marginal Analysis and the Case of the Hubei Emission Trading Pilot

    Directory of Open Access Journals (Sweden)

    Fan Dai

    2017-02-01

    Greenhouse gas (GHG) benchmarking for allowance allocation rewards early action in mitigating GHG emissions through the use of more advanced technologies. Hubei, China, launched its carbon emissions trading pilot in 2014, with the cement industry a major contributor to GHG emissions in the province. This article establishes a general benchmarking framework by describing and calculating the marginal abatement cost curve (MACC) and marginal revenue, and then comparing different GHG benchmarking approaches for the cement industry in the Hubei Emission Trading Pilot (Hubei ETS). Based on the comparison of three GHG benchmarking approaches, the Waxman-Markey standard, the European Union Emission Trading Scheme (EU ETS) cement benchmarking, and the benchmarking approach applied in the California Cap-and-Trade Program, it is found that: (1) the Waxman-Markey benchmark is too loose to apply in Hubei, as it provides little incentive for companies to mitigate; (2) the EU ETS benchmark approach fits the current cement industry in the Hubei ETS; and (3) the GHG benchmarking standard in the California Cap-and-Trade Program is the most stringent standard and drives the direction of future development for the Hubei ETS.
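
    A marginal abatement cost curve of the kind referred to above is built by sorting abatement options by unit cost and accumulating their potentials; a schematic Python sketch (the options, costs, and potentials are invented, not Hubei data):

      # Each option: (name, cost in yuan per tonne CO2, abatement potential in tonnes)
      options = [
          ("waste heat recovery",  -20.0,  50_000),
          ("clinker substitution",  15.0, 120_000),
          ("alternative fuels",     40.0,  80_000),
          ("carbon capture",       320.0, 200_000),
      ]

      def macc(options):
          """MACC: options sorted by unit cost, with cumulative abatement
          achievable at or below each marginal cost."""
          cumulative, curve = 0.0, []
          for name, cost, potential in sorted(options, key=lambda o: o[1]):
              cumulative += potential
              curve.append((name, cost, cumulative))
          return curve

      for name, cost, cum in macc(options):
          print(f"{name:22s} {cost:7.1f} yuan/t  cumulative {cum:>9,.0f} t")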

  7. Dependable Benchmarking for Storage Systems in High-Energy Physics

    CERN Document Server

    Fleri Soler, Edward

    2017-01-01

    In high-energy physics, storage systems play a crucial role to store and secure very valuable data produced by complex experiments. The effectiveness and efficiency of data acquisition systems of such experiments depends directly on those of these storage systems. Coping with present day rates and reliability requirements of such experiments implies operating high-performance hardware under the best possible conditions, with a broad set of hardware and software parameters existing along the hierarchical levels, from networks down to drives. An extensive number of tests are required for the tuning of parameters to achieve optimised I/O operations. Current approaches to I/O optimisation generally consist of manual test execution and result taking. This approach lacks appropriate modularity, durability and reproducibility, attainable through dedicated testing facilities. The aim of this project is to conceive a user-friendly, dedicated storage benchmarking tool for the improved comparison of I/O parameters in re...

  8. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    Science.gov (United States)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  9. Techno-economic assessment and comparison of CO2 capture technologies for industrial processes: Preliminary results for the iron and steel sector

    NARCIS (Netherlands)

    Kuramochi, T.; Ramírez Ramírez, C.A.; Turkenburg, W.C.; Faaij, A.P.C.

    2011-01-01

    This paper presents the methodology and the preliminary results of a techno-economic assessment of CCS implementation in the iron and steel sector. The results show that for the short-mid term, a CO2 avoidance cost of less than 50 €/tonne at a CO2 avoidance rate of around 50% is possible by convert

  10. The ACRV Picking Benchmark (APB): A Robotic Shelf Picking Benchmark to Foster Reproducible Research

    OpenAIRE

    Leitner, Jürgen; Tow, Adam W.; Dean, Jake E.; Suenderhauf, Niko; Durham, Joseph W.; Cooper, Matthew; Eich, Markus; Lehnert, Christopher; Mangels, Ruben; McCool, Christopher; Kujala, Peter; Nicholson, Lachlan; Van Pham, Trung; Sergeant, James; Wu, Liao

    2016-01-01

    Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic pi...

  11. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  12. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University

    2009-01-01

    Improved TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes are presented in this study. Preliminary TORT solutions to this benchmark indicated that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratios closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.
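
    For reference, the GRTUNCL3D/TORT splitting mentioned above follows the standard first-collision-source decomposition of the angular flux (the notation below is ours, not the paper's): the uncollided component is computed semi-analytically by ray tracing from the external source, and its single-scatter term then drives the discrete-ordinates solve for the collided component, where ray effects are milder because the source is spatially distributed.

      \psi = \psi_u + \psi_c, \qquad
      \boldsymbol{\Omega}\cdot\nabla\psi_u + \sigma_t\,\psi_u = q_{\mathrm{ext}},

      \boldsymbol{\Omega}\cdot\nabla\psi_c + \sigma_t\,\psi_c
        = \int_{4\pi}\sigma_s(\boldsymbol{\Omega}'\!\to\!\boldsymbol{\Omega})
          \bigl[\psi_u(\boldsymbol{\Omega}') + \psi_c(\boldsymbol{\Omega}')\bigr]\,d\Omega'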

  13. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually abstaining researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the predominant perspective; doing so develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary, it is highly influenced by organizational ambitions and strategies, with the potential to transform organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking.

  14. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose.

  15. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possibilities of applying benchmarking in the telecommunications sphere. It examines the essence of benchmarking by generalising how different researchers define the notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in the modern market economy, the mechanism of benchmarking, and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article classifies the main types of benchmarking of telecommunication operators by the following features: by the level at which it is conducted (branch, inter-branch and international benchmarking); by participation in the conduct (competitive and joint); and by relation to the enterprise environment (internal and external).

  16. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  17. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  1. A benchmark for comparison of cell tracking algorithms

    NARCIS (Netherlands)

    M. Maška (Martin); V. Ulman (Vladimír); K. Svoboda; P. Matula (Pavel); P. Matula (Petr); C. Ederra (Cristina); A. Urbiola (Ainhoa); T. España (Tomás); R. Venkatesan (Rajkumar); D.M.W. Balak (Deepak); P. Karas (Pavel); T. Bolcková (Tereza); M. Štreitová (Markéta); C. Carthel (Craig); S. Coraluppi (Stefano); N. Harder (Nathalie); K. Rohr (Karl); K.E.G. Magnusson (Klas E.); J. Jaldén (Joakim); H.M. Blau (Helen); O.M. Dzyubachyk (Oleh); P. Křížek (Pavel); G.M. Hagen (Guy); D. Pastor-Escuredo (David); D. Jimenez-Carretero (Daniel); M.J. Ledesma-Carbayo (Maria); A. Muñoz-Barrutia (Arrate); E. Meijering (Erik); M. Kozubek (Michal); C. Ortiz-De-Solorzano (Carlos)

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International

  2. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make it work more efficiently in terms of quality and productivity.

  3. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    Science.gov (United States)

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  4. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    … with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company… benchmark ratio of 9.6 to 1 or higher. (c) If a telephone company's initial transport rates are based on…

  5. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  6. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  7. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  8. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  9. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  10. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  11. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  12. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  13. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  14. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  15. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  16. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were…

  17. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to…

  18. Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.

    Science.gov (United States)

    Belloir, C; Stanford, C; Soares, A

    2015-01-01

    Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical and mechanical consumptions, all of which can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes, and Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 for Site 2. Mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3), and chemical input was significant for Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
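
    The kWh/m3 benchmark reduces to dividing each consumer's annual energy, expressed on a common kWh basis, by the treated volume; a minimal sketch with invented figures (not the paper's site data):

      # Illustrative site inventory: annual energy per consumer in kWh,
      # mixing electrical, mechanical and chemical inputs on a kWh basis.
      annual_kwh = {
          "aeration":            1_050_000,
          "pumping":               140_000,
          "mixing/mechanical":      95_000,
          "chemicals":              30_000,
      }
      annual_flow_m3 = 1_200_000   # treated volume per year

      total = sum(annual_kwh.values())
      print(f"total: {total / annual_flow_m3:.2f} kWh/m3")
      for asset, kwh in sorted(annual_kwh.items(), key=lambda kv: -kv[1]):
          print(f"{asset:18s} {kwh / annual_flow_m3:5.2f} kWh/m3 ({kwh / total:5.1%})")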

  19. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking of each qubit individually and then simultaneously; the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
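
    The comparison described above amounts to fitting the standard randomized-benchmarking decay to each experiment and differencing the extracted average gate fidelities. A minimal sketch (the survival-probability data and decay parameters are invented for illustration, not measurements from the paper):

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          """Standard randomized-benchmarking model: survival = A * p**m + B."""
          return A * p**m + B

      def avg_gate_fidelity(lengths, survival, d=2):
          (A, B, p), _ = curve_fit(rb_decay, lengths, survival,
                                   p0=(0.5, 0.5, 0.99), maxfev=10_000)
          return 1 - (d - 1) / d * (1 - p)   # 1 - average error per Clifford

      lengths = np.array([1, 5, 10, 20, 50, 100, 150])
      alone    = 0.5 * 0.995**lengths + 0.5  # qubit benchmarked individually
      together = 0.5 * 0.990**lengths + 0.5  # same qubit, simultaneous RB

      gap = avg_gate_fidelity(lengths, alone) - avg_gate_fidelity(lengths, together)
      print(f"addressability gap: {gap:.4f}")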

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master's thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  1. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  2. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
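
    The triad kernel at the heart of STREAM is small enough to sketch; the NumPy version below measures best-case bandwidth only (it cannot itself pin threads or memory to a NUMA node; on Linux one would typically launch it under `numactl --cpunodebind=... --membind=...` to compare local against remote placements, which is the effect the paper's modifications expose):

      import time
      import numpy as np

      def triad_bandwidth(n=20_000_000, scalar=3.0, repeats=5):
          """STREAM-style triad a = b + scalar*c; returns best-case GB/s."""
          b, c = np.random.rand(n), np.random.rand(n)
          a = np.empty_like(b)
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              np.multiply(c, scalar, out=a)   # a = scalar * c
              np.add(a, b, out=a)             # a = b + scalar * c
              best = min(best, time.perf_counter() - t0)
          return 3 * n * 8 / best / 1e9       # 2 reads + 1 write of 8-byte doubles

      print(f"triad: {triad_bandwidth():.1f} GB/s")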

  3. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
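
    The coupling described above can be illustrated schematically: smooth each simulator's metric series with a Gaussian process (so the comparison carries its own uncertainty and needs no common time grid), then score the smoothed series with dynamic time warping. The sketch below shows the mechanics only, with toy data; it is not the paper's figure-of-merit construction:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def dtw_distance(x, y):
          """Classic O(len(x)*len(y)) dynamic time warping distance."""
          D = np.full((len(x) + 1, len(y) + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, len(x) + 1):
              for j in range(1, len(y) + 1):
                  cost = abs(x[i - 1] - y[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[-1, -1]

      def gp_mean(t, m, t_eval):
          """Smooth a metric time series with a GP and resample it."""
          gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(0.01))
          gp.fit(t.reshape(-1, 1), m)
          return gp.predict(t_eval.reshape(-1, 1))

      rng = np.random.default_rng(1)
      t1 = np.linspace(0, 100, 21)             # simulator A's time grid
      t2 = np.linspace(0, 100, 35)             # simulator B's (different) grid
      m1 = 5.0 + 0.040 * t1 + rng.normal(0, 0.1, t1.size)
      m2 = 5.2 + 0.038 * t2 + rng.normal(0, 0.1, t2.size)

      t_eval = np.linspace(0, 100, 50)
      print(dtw_distance(gp_mean(t1, m1, t_eval), gp_mean(t2, m2, t_eval)))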

  4. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  5. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways described in the literature to distinguish between microscopes or to benchmark existing microscopes in terms of the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  6. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  7. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are unsuitable for nonlinear benchmark buildings because their vibration equations form a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control method suited to nonlinear buildings is proposed. The proposed method is based on model identification and structural model linearization, exerting the control force on the identified model according to the force action principle. The method is practical because the identified model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.
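
    The balanced reduction step can be sketched with standard linear-algebra tools. The abstract's empirical Gramian matrix is estimated from simulation or experiment data; for a compact, self-contained illustration the sketch below instead obtains the Gramians from Lyapunov equations via the square-root method (a substitution, not the authors' procedure, and the toy system is invented):

      import numpy as np
      from scipy.linalg import cholesky, solve_lyapunov, svd

      def balanced_truncation(A, B, C, r):
          """Square-root balanced truncation of dx/dt = Ax + Bu, y = Cx."""
          P = solve_lyapunov(A, -B @ B.T)      # controllability Gramian
          Q = solve_lyapunov(A.T, -C.T @ C)    # observability Gramian
          Lc = cholesky(P, lower=True)
          Lo = cholesky(Q, lower=True)
          U, s, Vt = svd(Lo.T @ Lc)
          S = np.diag(s[:r] ** -0.5)
          T = Lc @ Vt[:r].T @ S                # reduction basis
          Ti = S @ U[:, :r].T @ Lo.T           # left inverse: Ti @ T = I_r
          return Ti @ A @ T, Ti @ B, C @ T

      # Toy stable 4-state system reduced to 2 states
      rng = np.random.default_rng(0)
      A = rng.normal(size=(4, 4))
      A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(4)  # shift to stability
      B = rng.normal(size=(4, 1))
      C = rng.normal(size=(1, 4))
      Ar, Br, Cr = balanced_truncation(A, B, C, 2)
      print(np.linalg.eigvals(Ar).real)        # reduced model remains stable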

  8. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  9. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  10. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

    Autonomic computing promises improvements of systems' quality of service in terms of availability, reliability, performance, security, etc. However, little research and few experimental results have so far demonstrated this assertion or provided proof of the return on investment stemming from the effort that introducing autonomic features requires. Existing work in the area of benchmarking of autonomic systems can be characterized by its qualitative and fragmented approaches. A crucial need remains to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  11. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  12. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HB2006 Workshop on computer codes benchmarking.

  13. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the park's competitiveness. The topic of this thesis is both to identify, via benchmarking, good marketing strategies used by the Zhangjiajie national park and to provide some suggestions to Repovesi national park. The method used in t...

  14. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data-providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user-specified SQL queries into functions and automatical...

  15. Felix Stub Generator and Benchmarks Generator

    CERN Document Server

    Valenciano, Jose Jaime

    2014-01-01

    This report discusses two projects I have been working on during my summer studentship period in the context of the FELIX upgrade for ATLAS. The first project concerns the automated code generation needed to support and speed-up the FELIX firmware and software development cycle. The second project required the execution and analysis of benchmarks of the FELIX data-decoding software as a function of data sizes, number of threads and number of data blocks.

  16. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their section. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  17. BENCHMARK AS INSTRUMENT OF CRISIS MANAGEMENT

    OpenAIRE

    Haievskyi, Vladyslav

    2017-01-01

    The article determines the essence of benchmarking through a synthesis of the concepts "benchmark" and "crisis management". Benchmarking is considered an instrument of crisis management: a powerful tool with which an entity carries out comparative analysis of processes and activities, allowing it to reduce production costs under resource constraints, to raise profit, and to succeed in optimizing the strategy of its activities.

  18. Self-interacting Dark Matter Benchmarks

    OpenAIRE

    Kaplinghat, M.; Tulin, S.; Yu, H-B

    2017-01-01

    Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. We present benchmark models that illustrate characteristic features of dark matter that is self-interacting through a new light mediator. These models have self-interactions large enough to change dark matter densities in the centers of galaxies in accord with observations, while remaining compatible with large-scale structur...

  19. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  20. Preliminary validation of RELAP5/Mod4.0 code for LBE cooled NACIE facility

    Energy Technology Data Exchange (ETDEWEB)

    Kumari, Indu; Khanna, Ashok, E-mail: akhanna@iitk.ac.in

    2017-04-01

    Highlights: • Detailed discussion of the thermo-physical properties of Lead Bismuth Eutectic incorporated in the code RELAP5/Mod4.0. • Benchmarking of LBE properties in RELAP5/Mod4.0 against literature. • NACIE facility for three different power levels (10.8, 21.7 and 32.5 kW) under natural circulation considered for benchmarking. • Preliminary validation of the LBE properties against experimental data. • NACIE facility for power level 22.5 kW considered for validation. - Abstract: The one-dimensional thermal hydraulic computer code RELAP5 was developed for thermal hydraulic studies of light water reactors as well as of nuclear research reactors. The purpose of this work is to evaluate the code RELAP5/Mod4.0 for analysis of research reactors. This paper consists of three major sections. The first section presents detailed discussions of the thermo-physical properties of Lead Bismuth Eutectic (LBE) incorporated in the RELAP5/Mod4.0 code. In the second section, RELAP5/Mod4.0 has been benchmarked against the Natural Circulation Experimental (NACIE) facility in comparison with Barone's simulations using RELAP5/Mod3.3. Three different power levels (10.8 kW, 21.7 kW and 32.5 kW) under natural circulation conditions are considered. Results obtained for LBE temperatures, the temperature difference across the heat section, pin surface temperatures, mass flow rates and heat transfer coefficients in the heat section and heat exchanger agree with Barone's simulation results within 7% average relative error. The third section presents validation of RELAP5/Mod4.0 against the experimental data of the NACIE facility reported by Tarantino et al. (test number 21, at a power of 22.5 kW), comparing the profiles of temperature, mass flow rate and velocity of LBE. Simulation and experimental results agree within 7% average relative error.
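
    For readers who want to reproduce this kind of acceptance check, the comparison reduces each quantity to an average relative error between code and experiment. Below is a minimal sketch assuming the plain definition of that metric (the paper's exact averaging is not specified here); the temperature values are invented placeholders, not NACIE data.

```python
# A minimal sketch (not the authors' script) of the figure of merit quoted
# above: the average relative error between simulated and measured values.
import numpy as np

def average_relative_error(simulated, measured):
    """Mean of |sim - meas| / |meas|, expressed in percent."""
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * np.mean(np.abs(simulated - measured) / np.abs(measured))

# Hypothetical LBE temperatures (K) at a few axial positions.
t_simulated = [673.0, 688.5, 701.2, 712.8]
t_measured = [670.1, 684.0, 706.5, 718.3]
print(f"average relative error: "
      f"{average_relative_error(t_simulated, t_measured):.2f}%")  # accept if <= 7%
```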

  1. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.

  2. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  3. The Application of the PEBBED Code Suite to the PBMR-400 Coupled Code Benchmark - FY 2006 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    2006-09-01

    This document describes the recent developments of the PEBBED code suite and its application to the PBMR-400 Coupled Code Benchmark. This report addresses an FY2006 Level 2 milestone under the NGNP Design and Evaluation Methods Work Package. The milestone states "Complete a report describing the results of the application of the integrated PEBBED code package to the PBMR-400 coupled code benchmark". The report describes the current state of the PEBBED code suite, provides an overview of the Benchmark problems to which it was applied, discusses the code developments achieved in the past year, and states some of the results attained. Results of the steady state problems generated by the PEBBED fuel management code compare favorably to the preliminary results generated by codes from other participating institutions and to similar non-Benchmark analyses. Partial transient analysis capability has been achieved through the acquisition of the NEM-THERMIX code from Penn State University. Phase I of the task has been achieved through the development of a self-consistent set of tools for generating cross sections for design and transient analysis and in the successful execution of the steady state benchmark exercises.

  4. RADSAT Benchmarks for Prompt Gamma Neutron Activation Analysis Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A.; Gesh, Christopher J.

    2011-07-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of the elemental composition of unknown samples. High-resolution gamma-ray spectrometers are used in these applications to measure the spectrum of the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used simulation tool for this type of problem, but computational times can be prohibitively long. This work explores the use of multi-group deterministic methods for the simulation of coupled neutron-photon problems. The main purpose of this work is to benchmark several problems modeled with RADSAT and MCNP against experimental data. Additionally, the cross section libraries for RADSAT are updated to include ENDF/B-VII cross sections. Preliminary findings show promising results when compared to MCNP and experimental data, but also areas where additional inquiry and testing are needed. The potential benefits and shortcomings of the multi-group-based approach are discussed in terms of accuracy and computational efficiency.

  5. An argument for abandoning the travelling salesman problem as a neural-network benchmark.

    Science.gov (United States)

    Smith, K

    1996-01-01

    In this paper, a distinction is drawn between research which assesses the suitability of the Hopfield network for solving the travelling salesman problem (TSP) and research which attempts to determine the effectiveness of the Hopfield network as an optimization technique. It is argued that the TSP is generally misused as a benchmark for the latter goal, with the existence of an alternative linear formulation giving rise to unreasonable comparisons.

  6. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    Fusion neutronics benchmark experiments have been performed for vanadium and a vanadium alloy using a slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations performed using evaluated nuclear data from JENDL-3.2, JENDL-Fusion File and FENDL/E-1.0. (author)

  7. Continuum discretization methods in a composite-particle scattering off a nucleus: the benchmark calculations

    CERN Document Server

    Rubtsova, O A; Moro, A M

    2008-01-01

    The direct comparison of two different continuum discretization methods for the solution of a composite particle scattering off a nucleus is presented. The first approach -- the Continuum-Discretized Coupled Channel method -- is based on the differential equation formalism, while the second one -- the Wave-Packet Continuum Discretization method -- uses the integral equation formulation for the composite-particle scattering problem. As benchmark calculations we have chosen the deuteron off ...

  8. Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)

    2016-11-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.

  9. A preliminary study of the OECD/NEA 3D transport problem using the lattice code DRAGON

    Energy Technology Data Exchange (ETDEWEB)

    Martin, N.; Marleau, G.; Hebert, A. [Inst. de genie nucleaire, Ecole Polytechnique de Montreal, Montreal, Quebec (Canada)

    2008-07-01

    In this paper we present a preliminary analysis of the NEA3D-TAB-2007 transport problem proposed by the OECD/NEA expert group on radiative transfer. This computational benchmark was originally proposed by Y. Azmy in 2007 to test the performance of 3D transport methods and codes over a suite of problems defined by large variations in space parameters. Two deterministic methods were applied to generate the numerical solutions: the discrete ordinates method (S_N), and the method of open characteristics of I.R. Suslov (MCCG). We provide comparisons between MCNP reference solutions and MCCG and DRAGON-S_N results in order to reveal the advantages and limitations of both methods. (author)

  10. [Benchmarking in the clinical arena. A potential answer to the dynamic changes in the health care system].

    Science.gov (United States)

    Bredl, K; Hüsig, S; Angele, M K; Lüring, C

    2010-08-01

    Current changes in the health system due to economic restrictions leading to increased competition require the introduction of intelligent management tools in the clinical arena. In a world where change and development are the only constants, flexibility and critical judgment of one's own achievements are requirements for success in all parts of society. Benchmarking, a management tool widely used in industry, represents a potential answer to the dynamic changes in the health system. This article deals with the theoretical basis and the clinical implications of benchmarking. The strategic background of benchmarking is the systematic process of comparison and identification with the best (best practice), leading to improved processes and results in one's own department and hospital. The aim of benchmarking in the clinical arena is to achieve higher quality and patient-directed innovation with fewer financial resources. This might result in better patient care. In summary, the management tool of benchmarking will be introduced into the clinical arena to keep hospitals competitive. Successful benchmarking will result in a leading position of a certain department in a special field.

  11. Twelve metropolitan carbon footprints: A preliminary comparative global assessment

    Energy Technology Data Exchange (ETDEWEB)

    Sovacool, Benjamin K., E-mail: bsovacool@nus.edu.s [Lee Kuan Yew School of Public Policy, National University of Singapore (Singapore); Brown, Marilyn A., E-mail: Marilyn.Brown@pubpolicy.gatech.ed [School of Public Policy, Georgia Institute of Technology, Atlanta, Georgia (United States)

    2010-09-15

    A dearth of available data on carbon emissions and comparative analysis between metropolitan areas makes it difficult to confirm or refute best practices and policies. To help provide benchmarks and expand our understanding of urban centers and climate change, this article offers a preliminary comparison of the carbon footprints of 12 metropolitan areas. It does this by examining emissions related to vehicles, energy used in buildings, industry, agriculture, and waste. The carbon emissions from these sources - discussed here as the metro area's partial carbon footprint - provide a foundation for identifying the pricing, land use, and other policies that can help metropolitan areas throughout the world respond to climate change. The article begins by exploring a sample of the existing literature on urban morphology and climate change and explaining the methodology used to calculate each area's carbon footprint. The article then depicts the specific carbon footprints for Beijing, Jakarta, London, Los Angeles, Manila, Mexico City, New Delhi, New York, Sao Paulo, Seoul, Singapore, and Tokyo and compares these to respective national averages. It concludes by offering suggestions for how city planners and policymakers can reduce the carbon footprint of these and possibly other large urban areas.
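
    The footprint arithmetic itself is a simple sectoral aggregation. The sketch below illustrates it with invented numbers; the sector totals, population and national per-capita average are hypothetical, not the article's data.

```python
# Toy illustration of the partial-footprint bookkeeping described above:
# sum the five tracked sectors for a metro area, then compare per-capita
# emissions with a national average. All numbers are invented.
SECTORS = ("vehicles", "buildings", "industry", "agriculture", "waste")

def partial_footprint(sector_emissions):
    """Total of the five tracked sectors, in MtCO2e."""
    return sum(sector_emissions[s] for s in SECTORS)

metro = {"vehicles": 18.0, "buildings": 22.5, "industry": 9.3,
         "agriculture": 0.4, "waste": 2.1}   # MtCO2e, hypothetical
population = 8.5e6                           # persons, hypothetical
national_per_capita = 7.2                    # tCO2e/person, hypothetical

per_capita = partial_footprint(metro) * 1e6 / population
print(f"metro: {per_capita:.1f} tCO2e/person "
      f"vs national average: {national_per_capita} tCO2e/person")
```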

  13. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical
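
    The screening decision rule described above is mechanical enough to state in a few lines. The following sketch encodes it with hypothetical benchmark values; the real benchmarks are the NAWQC, SAV and SCV concentrations tabulated in the report.

```python
# Minimal sketch of the two-benchmark screening decision described above.
# Benchmark values and the example concentration are hypothetical.
def screen_contaminant(ambient, lower_benchmark, upper_benchmark):
    """Classify one chemical by comparing an ambient concentration
    to its lower and upper screening benchmarks (same units)."""
    if ambient >= upper_benchmark:
        return "clearly of concern; remedial action likely needed"
    if ambient >= lower_benchmark:
        return "of concern unless the data or comparison are judged inadequate"
    return "not of concern, provided the ambient data are adequate"

# Example: 12 ug/L measured, vs. a lower benchmark of 5 ug/L (e.g. a
# chronic value) and an upper benchmark of 50 ug/L (e.g. an acute NAWQC).
print(screen_contaminant(12.0, 5.0, 50.0))
```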

  14. Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.

    Science.gov (United States)

    Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven

    2017-01-01

    Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. We make our evaluation system accessible to all
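
    As a rough illustration of the three statistics used (time series correlation, spatial correlation and mean bias), the sketch below computes them on toy simulated-versus-reference yield arrays. The data and the per-cell treatment are assumptions for illustration, not GGCMI procedure.

```python
# Toy evaluation of a "model" against "reference" yields on a
# (years x grid cells) array: time-series r per cell, spatial r of the
# time-mean pattern, and overall mean bias. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(5.0, 1.0, size=(30, 100))        # reference yields, t/ha
sim = ref + rng.normal(0.3, 0.8, size=ref.shape)  # a noisy, biased model

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# Time-series r per grid cell (detrended data would be used in practice).
r_time = np.array([pearson(sim[:, j], ref[:, j]) for j in range(ref.shape[1])])
r_space = pearson(sim.mean(axis=0), ref.mean(axis=0))  # spatial pattern r
mean_bias = (sim - ref).mean()

print(f"median time-series r: {np.median(r_time):.2f}")
print(f"spatial r: {r_space:.2f}, mean bias: {mean_bias:.2f} t/ha")
```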

  15. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    Science.gov (United States)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  16. A novel and well-defined benchmarking method for second generation read mapping

    Directory of Open Access Journals (Sweden)

    Weese David

    2011-05-01

    Background: Second generation sequencing technologies yield DNA sequence data at ultra high-throughput. Common to most biological applications is a mapping of the reads to an almost identical or highly similar reference genome. The assessment of the quality of read mapping results is not straightforward and has not been formalized so far. Hence, it has not been easy to compare different read mapping approaches in a unified way and to determine which program is the best for what task. Results: We present a new benchmark method, called Rabema (Read Alignment BEnchMArk), for read mappers. It consists of a strict definition of the read mapping problem and of tools to evaluate the result of arbitrary read mappers supporting the SAM output format. Conclusions: We show the usefulness of the benchmark program by performing a comparison of popular read mappers. The tools supporting the benchmark are licensed under the GPL and available from http://www.seqan.de/projects/rabema.html.

  17. Benchmarking study and its application for shielding analysis of large accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee-Seock; Kim, Dong-hyun; Oranj, Leila Mokhtari; Oh, Joo-Hee; Lee, Arim; Jung, Nam-Suk [POSTECH, Pohang (Korea, Republic of)

    2015-10-15

    Shielding analysis is one of the subjects that are indispensable to the construction of a large accelerator facility. Several methods, such as Monte Carlo, discrete ordinates, and simplified calculations, have been used for this purpose. The calculation precision can be improved by increasing the number of trials (histories), but accuracy is still a big issue in shielding analysis. To secure accuracy in Monte Carlo calculations, benchmarking studies using experimental data and code comparisons are fundamental. In this paper, benchmarking results for electrons, protons, and heavy ions are presented, and the proper application of the results is discussed. The benchmarking calculations, which are indispensable in shielding analysis, were performed for different particles: proton, heavy ion and electron. Four different multi-particle Monte Carlo codes, MCNPX, FLUKA, PHITS, and MARS, were examined for the higher energy range relevant to large accelerator facilities. The degree of agreement between the experimental data, including the SINBAD database, and the calculated results was estimated in terms of secondary neutron production and attenuation through concrete and iron shields. The degree of discrepancy and the features of the Monte Carlo codes were investigated, and ways of applying the benchmarking results are discussed from the viewpoint of safety margins and of selecting the code for a shielding analysis. In most cases, the tested Monte Carlo codes give credible results, except for a few limitations of each code.

  18. A benchmark of excitonic couplings derived from atomic transition charges.

    Science.gov (United States)

    Kistler, Kurt A; Spano, Francis C; Matsika, Spiridoula

    2013-02-21

    In this report we benchmark Coulombic excitonic couplings between various pairs of chromophores calculated using transition charges localized on the atoms of each monomer chromophore, as derived from a Mulliken population analysis of the monomeric transition densities. The systems studied are dimers of 1-methylthymine, 1-methylcytosine, 2-amino-9-methylpurine, all-trans-1,3,5-hexatriene, all-trans-1,3,5,7-octatetraene, trans-stilbene, naphthalene, perylenediimide, and dithia-anthracenophane. Transition densities are taken from different single-reference electronic structure excited state methods: time-dependent density functional theory (TDDFT), configuration-interaction singles (CIS), and semiempirical methods based on intermediate neglect of differential overlap. Comparisons of these results with full ab initio calculations of the electronic couplings using a supersystem are made, as are comparisons with experimental data. Results show that the transition charges do a good job of reproducing the supersystem couplings for dimers with moderate to long-range interchromophore separation. It is also found that CIS supermolecular couplings tend to overestimate the couplings, and the transition-charge approach may often be better, due to fortuitous cancellation of errors.
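
    For orientation, the transition-charge approximation evaluates the coupling as a double sum of point-charge Coulomb interactions over the atoms of the two chromophores, J = Σ_ab q_a q_b / r_ab in atomic units. The sketch below implements that sum for two invented three-atom chromophores; the charges and geometries are illustrative placeholders, not values from the paper.

```python
# Transition-charge estimate of the Coulombic excitonic coupling:
# J = sum over atom pairs of q_a * q_b / r_ab (atomic units), with the
# charges q taken from a Mulliken analysis of each monomer's transition
# density. Charges and coordinates below are made up for illustration.
import numpy as np

HARTREE_TO_EV = 27.211386

def coulomb_coupling_ev(charges_a, coords_a, charges_b, coords_b):
    """Coupling in eV; charges in e, coordinates in bohr."""
    j = 0.0
    for qa, ra in zip(charges_a, coords_a):
        for qb, rb in zip(charges_b, coords_b):
            j += qa * qb / np.linalg.norm(np.asarray(ra) - np.asarray(rb))
    return j * HARTREE_TO_EV

# Two hypothetical three-atom chromophores about 10 bohr apart.
q_a = [0.10, -0.18, 0.08]
r_a = [(0.0, 0.0, 0.0), (2.3, 0.0, 0.0), (4.6, 0.0, 0.0)]
q_b = [0.09, -0.17, 0.08]
r_b = [(0.0, 10.0, 0.0), (2.3, 10.0, 0.0), (4.6, 10.0, 0.0)]
print(f"J = {coulomb_coupling_ev(q_a, r_a, q_b, r_b) * 1000:.2f} meV")
```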

  19. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
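
    To make the double-integration stage concrete: once an orientation filter has rotated the accelerometer signal into a global frame and gravity has been removed, the stride trajectory follows from integrating twice between two zero-velocity instants, usually with dedrifting. The sketch below shows one common variant (linear velocity dedrifting) under these assumptions; it is a generic illustration, not one of the specific schemes compared in the paper.

```python
# Simplified double integration of gravity-free, globally-framed foot
# acceleration over one stride, with linear velocity dedrifting that
# enforces zero velocity at both stride boundaries. Data are synthetic.
import numpy as np

def integrate_stride(acc_global, dt):
    """acc_global: (N, 3) gravity-free acceleration in m/s^2."""
    vel = np.cumsum(acc_global, axis=0) * dt
    # Linear dedrift: subtract the ramp that makes the final velocity zero.
    drift = np.outer(np.linspace(0.0, 1.0, len(vel)), vel[-1])
    vel -= drift
    pos = np.cumsum(vel, axis=0) * dt
    return pos

dt = 1.0 / 200.0                        # 200 Hz IMU, hypothetical
n = 200                                 # one ~1 s stride
acc = np.zeros((n, 3))
acc[:100, 0], acc[100:, 0] = 2.0, -2.0  # accelerate, then decelerate
print(integrate_stride(acc, dt)[-1])    # net displacement of the stride
```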

  20. Benchmarking procedures for high-throughput context specific reconstruction algorithms

    Directory of Open Access Journals (Sweden)

    Maria ePires Pacheco

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX (Duarte et al., 2007; Thiele et al., 2013) or HMR (Agren et al., 2013) has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last ten years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The former includes methods like cross validation or testing with artificial networks. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.