WorldWideScience

Sample records for assembly computational benchmark

  1. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to obtain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, which is based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as are several smaller efforts supported by this grant.
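
    The execution-time estimation described above amounts to combining a program characterization (operation counts) with a machine characterization (per-operation times). A minimal sketch of that idea, with made-up operation classes and timings (the report's actual abstract-machine parameters are not reproduced here):

    ```python
    # Sketch: estimate run time as the dot product of a program's operation
    # counts (program characterization) and a machine's per-operation times
    # (machine characterization). All names and numbers are illustrative.
    program_profile = {"fp_add": 2.0e9, "fp_mul": 1.5e9, "mem_ref": 4.0e9}   # op counts
    machine_times = {"fp_add": 1.2e-9, "fp_mul": 1.4e-9, "mem_ref": 2.5e-9}  # sec per op

    estimated_time = sum(n * machine_times[op] for op, n in program_profile.items())
    print(f"estimated execution time: {estimated_time:.2f} s")
    ```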

  3. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  4. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks for single pin cells and multipin assemblies, as defined by EPRI, are presented and evaluated. First, a short update on the methods used by the various institutes involved is given, as well as an update on the status with respect to previously performed pin-cell calculations. Problems detected in the previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed rather good overall agreement between the four participants, although BWR pins with a high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  5. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential: the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
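
    The abstract does not spell out the linear program; a standard LP formulation for this kind of efficiency benchmarking is Data Envelopment Analysis (DEA), sketched here in the clear with invented data (an assumption for illustration; the actual system evaluates its LP model under the SPDZ multiparty-computation protocol so that no party sees the others' inputs):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Inputs X (m x n) and outputs Y (s x n) for n units (e.g. bank customers).
    X = np.array([[4.0, 2.0, 5.0], [3.0, 1.0, 4.0]])   # m=2 inputs, n=3 units
    Y = np.array([[2.0, 1.0, 3.0]])                     # s=1 output

    def dea_efficiency(j0):
        """Input-oriented CCR efficiency of unit j0: minimize theta subject to
        X @ lam <= theta * X[:, j0], Y @ lam >= Y[:, j0], lam >= 0."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]              # decision variables: [theta, lam]
        A_in = np.c_[-X[:, [j0]], X]             # X lam - theta * x0 <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y]      # -Y lam <= -y0
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun                           # efficiency score in (0, 1]

    for j in range(X.shape[1]):
        print(f"unit {j}: efficiency {dea_efficiency(j):.3f}")
    ```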

  6. GABenchToB: a genome assembly benchmark tuned on bacteria and benchtop sequencers.

    Directory of Open Access Journals (Sweden)

    Sebastian Jünemann

    De novo genome assembly is the process of reconstructing a complete genomic sequence from countless small sequencing reads. Due to the complexity of this task, numerous genome assemblers have been developed to cope with different requirements and with the different kinds of data provided by sequencers within the fast-evolving field of next-generation sequencing technologies. In particular, the recently introduced generation of benchtop sequencers, like Illumina's MiSeq and Ion Torrent's Personal Genome Machine (PGM), popularized the easy, fast, and cheap sequencing of bacterial organisms to a broad range of academic and clinical institutions. With a strong pragmatic focus, we give here a novel insight into the line of assembly evaluation surveys as we benchmark popular de novo genome assemblers on bacterial data generated by benchtop sequencers. To this end, single-library assemblies were generated and compared to each other by metrics describing assembly contiguity and accuracy, and also by practice-oriented criteria such as computing time. In addition, we extensively analyzed the effect of the depth of coverage on the genome assemblies within reasonable ranges, and the k-mer optimization problem of de Bruijn graph assemblers. Our results show that, although both MiSeq and PGM allow for good genome assemblies, they require different approaches. They not only pair with different assembler types, but also affect assemblies differently regarding the depth of coverage, where oversampling can become problematic. Assemblies vary greatly with respect to contiguity and accuracy, but also in their requirements on computing power. Consequently, no assembler can be rated best for all preconditions. Instead, the given kind of data, the demands on assembly quality, and the available computing infrastructure determine which assembler suits best. The data sets, scripts and all additional information needed to replicate our results are freely available.

  7. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  8. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  9. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well-designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is intended as a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations are introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, which must therefore be designed with I/O in mind from the start. It is easily observed in the timings that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
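
    For reference, a minimal in-memory sketch of the right-looking LU pattern discussed above (unblocked, no pivoting, illustrative only). The rank-1 update touches the entire trailing submatrix at every step, which is exactly why an out-of-core variant must re-read that submatrix panel by panel at each stage:

    ```python
    import numpy as np

    def right_looking_lu(A):
        """Unblocked right-looking LU factorization in place (no pivoting).
        On return, the strict lower triangle holds L and the upper triangle U."""
        n = A.shape[0]
        for k in range(n - 1):
            A[k+1:, k] /= A[k, k]                 # k-th column of L
            # Rank-1 update of the ENTIRE trailing submatrix -- the step that
            # dominates I/O when the matrix does not fit in main memory.
            A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
        return A

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    LU = right_looking_lu(A.copy())
    L = np.tril(LU, -1) + np.eye(2)
    U = np.triu(LU)
    assert np.allclose(L @ U, A)
    ```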

  10. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.
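
    Whatever the gate-implementation model, randomized benchmarking ultimately reduces to fitting an exponential decay of average sequence fidelity against sequence length. A generic sketch of that fitting step on synthetic data (the standard single-qubit RB model, not the paper's cluster-state protocol itself):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, B, p):
        """Standard RB model: average survival probability after m random gates."""
        return A * p**m + B

    lengths = np.array([1, 5, 10, 20, 50, 100])
    # Synthetic survival probabilities for a depolarizing parameter p = 0.98.
    fidelity = (0.5 * 0.98**lengths + 0.5
                + np.random.default_rng(0).normal(0, 0.005, lengths.size))

    (A, B, p), _ = curve_fit(rb_decay, lengths, fidelity, p0=[0.5, 0.5, 0.9])
    r = (1 - p) / 2   # average error rate per gate for a single qubit (d = 2)
    print(f"p = {p:.4f}, error per gate r = {r:.5f}")
    ```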

  11. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  12. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  13. Computer organization and assembly language programming

    CERN Document Server

    Peterson, James L

    1978-01-01

    Computer Organization and Assembly Language Programming deals with lower-level computer programming (machine or assembly language) and how it is used in the typical computer system. The book explains the operations of the computer at the machine language level. The text reviews basic computer operations and organization, and deals primarily with the MIX computer system. The book describes assembly language programming techniques, such as defining appropriate data structures, determining the information for input or output, and the flow of control within the program. The text explains basic I/O

  14. Uncertainty Analysis for OECD-NEA-UAM Benchmark Problem of TMI-1 PWR Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K.W.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The quantification of code uncertainty is one of the main questions continuously asked by regulatory bodies such as KINS. Utilities and code developers address the issue case by case, because a general answer to the question is still open. Under these circumstances, OECD-NEA has built global consensus on uncertainty quantification through the UAM benchmark program. OECD-NEA benchmark II-2 concerns the uncertainty quantification of subchannel codes: the uncertainties of the fuel temperature and the ONB location in the TMI-1 fuel assembly are to be estimated under steady-state and transient conditions. In this study, the uncertainty quantification of the MATRA code was performed for this problem. A workbench platform was developed to produce the large set of inputs needed for the uncertainty quantification, to generate the input files, and to post-process the calculation results. Direct Monte Carlo sampling was used to extract 2000 random parameter sets from the sampled PDFs. The uncertainty contributed by the input parameters was estimated for the DNBR and for the cladding and coolant temperatures.
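
    A minimal sketch of the direct Monte Carlo sampling loop described above, with a cheap stand-in function in place of a MATRA subchannel run (the parameter names, distributions, and model are illustrative, not the benchmark's actual specification):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 2000  # sample size used in the study

    # Illustrative input PDFs (normal; means and standard deviations invented).
    power = rng.normal(1.00, 0.02, N)     # relative rod power
    inlet_temp = rng.normal(290.0, 1.5, N)  # inlet temperature [C]
    flow = rng.normal(1.00, 0.03, N)      # relative mass flow

    def surrogate_dnbr(power, inlet_temp, flow):
        """Cheap stand-in for a subchannel calculation such as MATRA."""
        return 2.0 * flow / power - 0.004 * (inlet_temp - 290.0)

    dnbr = surrogate_dnbr(power, inlet_temp, flow)
    print(f"DNBR mean = {dnbr.mean():.3f}, std = {dnbr.std():.3f}, "
          f"2.5/97.5 percentiles = {np.percentile(dnbr, [2.5, 97.5])}")
    ```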

  15. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    The fusion neutronics benchmark experiments have been done for vanadium and vanadium alloy by using the slab assembly and time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV and comparison were done with MCNP-4A calculations which was made by using evaluated nuclear data of JENDL-3.2, JENDL-Fusion File and FENDL/E-1.0. (author)

  16. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    Science.gov (United States)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  17. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.

  18. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of the Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (k_eff) of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of k_eff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and of the axial burnup profile, and inclusion of the axial void fraction profile or of constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the k_eff values calculated by the participants about the mean value is almost within the band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)

  19. Benchmarking Computational Fluid Dynamics Models for Application to Lava Flow Simulations and Hazard Assessment

    Science.gov (United States)

    Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.

    2015-12-01

    Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.
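
    The Herschel-Bulkley rheology mentioned above relates shear stress to strain rate through a yield stress tau0, a consistency K, and a flow index n: tau = tau0 + K * rate**n for stresses above the yield stress. A small sketch with illustrative, lava-like parameter values (not those calibrated in the study):

    ```python
    def herschel_bulkley_stress(strain_rate, tau0, K, n):
        """Shear stress [Pa] for a Herschel-Bulkley fluid (flows only above tau0)."""
        return tau0 + K * strain_rate**n

    # Illustrative parameters of roughly the right order for basaltic lava.
    tau0, K, n = 100.0, 1.0e3, 0.8   # Pa, Pa*s^n, dimensionless
    for rate in [0.01, 0.1, 1.0]:    # strain rates, 1/s
        stress = herschel_bulkley_stress(rate, tau0, K, n)
        print(f"strain rate {rate:5.2f} 1/s -> stress {stress:9.1f} Pa")
    ```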

  20. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Jordan, Kirk; Lucini, Biagio; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  1. JACOB: a dynamic database for computational chemistry benchmarking.

    Science.gov (United States)

    Yang, Jack; Waller, Mark P

    2012-12-21

    JACOB (just a collection of benchmarks) is a database that contains four diverse benchmark studies, which in turn include 72 data sets, with a total of 122,356 individual results. The database is constructed upon a dynamic web framework that allows users to retrieve data from the database via predefined categories. Additional flexibility is made available via user-defined text-based queries. Requested sets of results are then automatically presented as bar graphs, with the parameters of the graphs being controllable via the URL. JACOB is currently available at www.wallerlab.org/jacob.

  2. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. The main causes found to suppress parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
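
    The Level 2 finding above is consistent with the standard alpha-beta model of point-to-point communication, in which sending an n-byte message costs t(n) = alpha + n/beta for start-up latency alpha and bandwidth beta. A sketch showing how per-particle messaging makes the start-up term scale with the number of particles (all numbers illustrative):

    ```python
    # Simple alpha-beta model of internode communication time.
    alpha = 40e-6   # start-up (latency) per message, seconds -- illustrative
    beta = 100e6    # sustained bandwidth, bytes/second -- illustrative

    def message_time(nbytes):
        return alpha + nbytes / beta

    # Many small messages (one per particle) vs. one aggregated message:
    particles, bytes_per_particle = 10_000, 24
    many_small = particles * message_time(bytes_per_particle)
    one_large = message_time(particles * bytes_per_particle)
    print(f"per-particle messages: {many_small * 1e3:.1f} ms, "
          f"aggregated: {one_large * 1e3:.2f} ms")
    ```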

  3. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, and some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  4. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    The internal representation of numerical data and the speed of its manipulation to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links are essential aspects of all high-performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation, required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations and serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache- and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. From the measurement of intrinsic parameters and from an application benchmark using a multi-block Euler code test run on the Linux clusters ANUP16, HPC22 and HPC64, it has been found that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers with the added advantage of speed and a high degree of parallelism.
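
    A minimal sketch of the cache-blocking idea demonstrated in the article: operate on tiles small enough to stay resident in cache, so each element is reused before eviction (block size and matrix size are arbitrary here):

    ```python
    import numpy as np

    def blocked_matmul(A, B, nb=64):
        """Cache-blocked matrix multiply; each (nb x nb) tile is reused while it
        is hot in cache instead of streaming whole rows and columns repeatedly."""
        n = A.shape[0]
        C = np.zeros_like(A)
        for i in range(0, n, nb):
            for j in range(0, n, nb):
                for k in range(0, n, nb):
                    C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(blocked_matmul(A, B), A @ B)
    ```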

  5. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  6. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  7. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduces an approach to evaluating the performance of computer communication systems based on simulation and measurement technology, and discusses its evaluation models. The results of our experiments show that the outcome of practical measurements on an Ethernet LAN fits well with the theoretical analysis. The approach we present can be used to define various kinds of artificially simulated load models conveniently, to build many kinds of network application environments in a flexible way, and to fully exploit the generality and high precision of traditional simulation technology together with the realism, reliability, and adaptability of measurement technology.

  8. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e. the JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  9. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  10. Benchmarking of Computational Models for NDE and SHM of Composites

    Science.gov (United States)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  11. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
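
    A toy, single-threaded analogue of such a microbenchmark, measuring sustained copy bandwidth as the working set grows past the cache sizes (a real suite like the one reported would use multiple threads and hardware performance counters; sizes here are illustrative):

    ```python
    import time
    import numpy as np

    def copy_bandwidth(nbytes, repeats=20):
        """Measure sustained copy bandwidth (GB/s) for a given working-set size."""
        src = np.ones(nbytes // 8, dtype=np.float64)
        dst = np.empty_like(src)
        t0 = time.perf_counter()
        for _ in range(repeats):
            np.copyto(dst, src)
        elapsed = time.perf_counter() - t0
        return 2 * nbytes * repeats / elapsed / 1e9   # read + write traffic

    for size in [32 * 1024, 1024 * 1024, 64 * 1024 * 1024]:  # ~L1, ~L2, RAM
        print(f"{size / 1024:9.0f} KiB working set: {copy_bandwidth(size):6.1f} GB/s")
    ```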

  12. Soft Computing in Optimizing Assembly Lines Balancing

    Directory of Open Access Journals (Sweden)

    Muhammad Z. Matondang

    2010-01-01

    As part of manufacturing systems, the assembly line has become one of the most studied subjects in efforts to solve the real-world problems related to it. Many efforts have been made to seek the best techniques for optimizing assembly lines. Problem statement: Since the problem was first published by Salveson in 1955, methods and techniques based on mathematical modeling have been developed. In recent years, some research in Assembly Line Balancing (ALB) has been conducted using Soft Computing (SC) approaches. However, no comprehensive survey has been conducted regarding the use of SC in ALB problems, and providing one became the aim of this study. Approach: This study reviewed published literature and previous related works that applied SC to solving ALB problems. Main outcomes: This study looks into the suitability of SC approaches for several types of ALB problems. Furthermore, it provides a classification of ALB problems that can facilitate distinguishing those problems as fields of research. Result: This study found that Genetic Algorithms (GAs) are predominantly applied to solve ALB problems compared to other SC approaches. This high suitability for ALB stems from GAs' main characteristics, which include robustness and flexibility. These SC approaches have mostly been applied to simple ALB problems, which do not cover the complexity of a real manufacturing environment. Conclusion/Recommendations: This study recommends that future research in ALB be conducted with regard to other issues, beyond simple ALB problems and more practical for industry. Besides the advantages of GAs, there are still opportunities to use other SC approaches, and hybrid systems among them, that could increase the suitability of these approaches, especially for multi-objective ALB problems. This study also recommends that human involvement be considered as a problem factor in ALB.
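
    To make the GA-for-ALB connection concrete, here is a minimal sketch of a genetic algorithm for the simple assembly line balancing problem (SALBP-1): chromosomes are task priority vectors, decoding packs tasks into stations under precedence and cycle-time constraints, and fitness is the number of stations opened. The instance and GA settings are invented for illustration:

    ```python
    import random

    # Tiny SALBP-1 instance: task times, precedence constraints, cycle time.
    times = {1: 4, 2: 5, 3: 3, 4: 6, 5: 2, 6: 4}
    preds = {1: [], 2: [1], 3: [1], 4: [2], 5: [3], 6: [4, 5]}
    CYCLE = 10

    def decode(priority):
        """Assign tasks to stations greedily by priority, respecting precedence
        and cycle time; fitness = number of stations (fewer is better)."""
        done, remaining = set(), set(times)
        stations, load = 1, 0
        while remaining:
            ready = [t for t in remaining
                     if all(p in done for p in preds[t]) and times[t] <= CYCLE - load]
            if not ready:                       # nothing fits: open a new station
                stations, load = stations + 1, 0
                continue
            t = max(ready, key=lambda x: priority[x])
            remaining.discard(t); done.add(t); load += times[t]
        return stations

    def ga(pop_size=30, gens=60, seed=1):
        rng = random.Random(seed)
        pop = [{t: rng.random() for t in times} for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=decode)                # elitist selection
            pop = pop[:pop_size // 2]
            while len(pop) < pop_size:
                a, b = rng.sample(pop[:10], 2)
                child = {t: rng.choice((a[t], b[t])) for t in times}  # uniform crossover
                if rng.random() < 0.3:                                # mutation
                    child[rng.choice(list(times))] = rng.random()
                pop.append(child)
        return min(decode(ind) for ind in pop)

    print("best number of stations:", ga())   # the lower bound here is 3
    ```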

  13. Models of natural computation : gene assembly and membrane systems

    NARCIS (Netherlands)

    Brijder, Robert

    2008-01-01

    This thesis is concerned with two research areas in natural computing: the computational nature of gene assembly and membrane computing. Gene assembly is a process occurring in unicellular organisms called ciliates. During this process genes are transformed through cut-and-paste operations. We study...

  14. A computer scientist’s evaluation of publically available hardware Trojan benchmarks

    OpenAIRE

    Slayback, Scott M.

    2015-01-01

    Approved for public release; distribution is unlimited Dr. Hassan Salmani and Dr. Mohammed Tehranipoor have developed a collection of publically available hardware Trojans, meant to be used as common benchmarks for the analysis of detection and mitigation techniques. In this thesis, we evaluate a selection of these Trojans from the perspective of a computer scientist with limited electrical engineering background. Note that this thesis is also intended to serve as a supplement to the exist...

  15. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  16. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical
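
    Data testing of this kind typically reduces to calculated-over-expected (C/E) statistics for k_eff across the benchmark suite. A minimal sketch of that bookkeeping (the benchmark names are real ICSBEP cases, but all k_eff values are made up):

    ```python
    import numpy as np

    # Made-up (benchmark, calculated k_eff, experimental k_eff) triples.
    results = [
        ("HEU-MET-FAST-001 (Godiva)", 0.99980, 1.00000),
        ("PU-MET-FAST-001 (Jezebel)", 1.00020, 1.00000),
        ("LEU-COMP-THERM-008",        0.99890, 1.00000),
    ]

    ce = np.array([c / e for _, c, e in results])
    print(f"mean C/E = {ce.mean():.5f}, std = {ce.std(ddof=1):.5f}")
    for (name, c, e), r in zip(results, ce):
        print(f"{name:28s} C/E = {r:.5f} ({(r - 1) * 1e5:+.0f} pcm)")
    ```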

  17. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also

  18. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Science.gov (United States)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected

  19. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster, and the performance of the cluster is tested, too.

  20. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  1. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.
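
    The core of such benchmarking is comparing estimated abundances against a ground truth such as the known ERCC spike-in concentrations. A minimal sketch of that comparison step (all values invented; the resource itself uses far richer criteria):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Known spike-in concentrations vs. estimated expression (illustrative values).
    known = np.array([0.5, 2.0, 8.0, 32.0, 128.0])      # e.g. attomoles/ul
    estimated = np.array([0.8, 1.7, 9.5, 30.0, 140.0])  # e.g. TPM from a quantifier

    rho, pval = spearmanr(known, estimated)
    # Log-scale accuracy, since fold-changes are what matter for relative use.
    log_error = np.abs(np.log2(estimated / known))
    print(f"Spearman rho = {rho:.3f}, median |log2 error| = {np.median(log_error):.2f}")
    ```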

  2. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zoë Handley

    2005-09-01

    Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet, very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources. This is an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria that determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests which will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  3. On computational properties of gene assembly in ciliates

    Directory of Open Access Journals (Sweden)

    Vladimir Rogojin

    2010-11-01

    Gene assembly in stichotrichous ciliates, which takes place during sexual reproduction, is one of the most involved DNA manipulation processes occurring in biology. This biological process is of high interest from the computational and mathematical points of view due to its close analogy with such concepts in theoretical computer science as permutation sorting, linked-list sorting, and string rewriting. Studies on the computational properties of gene assembly in ciliates are a good example of interdisciplinary research contributing to both computer science and biology. We review here a number of general results related both to the development of different computational methods enhancing our understanding of the nature of gene assembly, and to the development of new biologically motivated computational and mathematical models and paradigms. Those paradigms contribute in particular to combinatorics, formal languages, and computability theories.

  4. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will certainly play a significant role in the success of enterprise informatization, and three-dimensional design will promote three-dimensional process planning. This article analyzes the current situation and problems of assembly process planning, presents a three-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, and interactive craft filling and craft knowledge management. It also gives a multi-layer collision detection and multi-perspective automatic camera-switching algorithm. Experiments were done to validate the feasibility of the technology and algorithms, which established the foundation of three-dimensional computer-aided process planning.

  5. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto

    2015-01-01

    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 and up to 10^5 cores. The procedure of GEP is decomposed into the two subprocedures of the reducer to the standard eigenvalue problem (SEP) and the solver of SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA, and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. A detailed analysis of the results implies that the reducer can be a bottleneck in next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
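
    A toy illustration of the reducer/solver decomposition described above, using SciPy on a small dense problem instead of ScaLAPACK/ELPA/EigenExa at scale: the GEP A v = λ B v is reduced to a SEP via a Cholesky factorization of B, the SEP is solved, and the eigenvectors are back-transformed.

      # Sketch of the reducer + SEP-solver decomposition on a toy problem.
      import numpy as np
      from scipy.linalg import cholesky, eigh, solve_triangular

      rng = np.random.default_rng(0)
      n = 200
      A = rng.standard_normal((n, n)); A = (A + A.T) / 2             # symmetric
      B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # SPD

      # Reducer: with B = L L^T, A v = lam B v becomes (L^-1 A L^-T) y = lam y.
      L = cholesky(B, lower=True)
      T = solve_triangular(L, A, lower=True)                # L^-1 A
      C = solve_triangular(L, T.T, lower=True).T            # L^-1 A L^-T

      # SEP solver, then back-transform the eigenvectors: v = L^-T y.
      w, Y = eigh(C)
      V = solve_triangular(L.T, Y, lower=False)

      # Cross-check against SciPy's direct generalized solver.
      w_ref = eigh(A, B, eigvals_only=True)
      print(np.allclose(w, w_ref))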

  6. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high-radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computation-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.
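
    A minimal mpi4py sketch of the embarrassingly parallel pattern CRBLASTER is built around (illustrative only; CRBLASTER itself is a C/MPI application): rank 0 scatters image strips, each rank filters its strip, and the strips are gathered back.

      # Sketch of the embarrassingly-parallel MPI pattern described above.
      # Run with e.g.: mpiexec -n 4 python strip_filter.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          image = np.random.default_rng(1).poisson(100.0, (512, 512)).astype(float)
          strips = np.array_split(image, size, axis=0)   # one strip per rank
      else:
          strips = None

      strip = comm.scatter(strips, root=0)

      # Stand-in for the per-pixel rejection step: clip outliers in the strip.
      med = np.median(strip)
      strip = np.where(strip > med + 5 * np.sqrt(med), med, strip)

      cleaned = comm.gather(strip, root=0)
      if rank == 0:
          result = np.vstack(cleaned)
          print("done:", result.shape)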

  7. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that, if available, they would be able to perform full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark, since its two lowest-lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.

  8. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas;

    2015-01-01

    When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while...... computing complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control....... the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult, six degrees of freedom, manipulation task with a robot where assemblies need to learn...

  9. A critical assembly designed to measure neutronic benchmarks in support of the space nuclear thermal propulsion program

    Science.gov (United States)

    Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.; Selcow, Elizabeth C.; Cerbone, Ralph J.

    1993-01-01

    A reactor designed to perform criticality experiments in support of the Space Nuclear Thermal Propulsion program is currently in operation at the Sandia National Laboratories' reactor facility. The reactor is a small, water-moderated system that uses highly enriched uranium particle fuel in a 19-element configuration. Its purpose is to obtain neutronic measurements under a variety of experimental conditions that are subsequently used to benchmark reactor-design computer codes. Brookhaven National Laboratory, Babcock & Wilcox, and Sandia National Laboratories participated in determining the reactor's performance requirements, design, follow-on experimentation, and in obtaining the licensing approvals. Brookhaven National Laboratory is primarily responsible for the analytical support, Babcock & Wilcox the hardware design, and Sandia National Laboratories the operational safety. All of the team members participate in determining the experimentation requirements, performance, and data reduction. Initial criticality was achieved in October 1989. An overall description of the reactor is presented along with key design features and safety-related aspects.

  10. Thermal Hydraulic Computational Fluid Dynamics Simulations and Experimental Investigation of Deformed Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Mays, Brian [AREVA Federal Services, Lynchburg, VA (United States); Jackson, R. Brian [TerraPower, Bellevue, WA (United States)

    2017-03-08

    The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory, and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. This first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experimental data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Mass Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.

  11. Computational fluid dynamics (CFD) round robin benchmark for a pressurized water reactor (PWR) rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Shin K., E-mail: paengki1@tamu.edu; Hassan, Yassin A.

    2016-05-15

    Highlights: • The capabilities of steady RANS models were directly assessed for full axial scale experiment. • The importance of mesh and conjugate heat transfer was reaffirmed. • The rod inner-surface temperature was directly compared. • The steady RANS calculations showed a limitation in the prediction of circumferential distribution of the rod surface temperature. - Abstract: This study examined the capabilities and limitations of steady Reynolds-Averaged Navier–Stokes (RANS) approach for pressurized water reactor (PWR) rod bundle problems, based on the round robin benchmark of computational fluid dynamics (CFD) codes against the NESTOR experiment for a 5 × 5 rod bundle with typical split-type mixing vane grids (MVGs). The round robin exercise against the high-fidelity, broad-range (covering multi-spans and entire lateral domain) NESTOR experimental data for both the flow field and the rod temperatures enabled us to obtain important insights into CFD prediction and validation for the split-type MVG PWR rod bundle problem. It was found that the steady RANS turbulence models with wall function could reasonably predict two key variables for a rod bundle problem – grid span pressure loss and the rod surface temperature – once mesh (type, resolution, and configuration) was suitable and conjugate heat transfer was properly considered. However, they over-predicted the magnitude of the circumferential variation of the rod surface temperature and could not capture its peak azimuthal locations for a central rod in the wake of the MVG. These discrepancies in the rod surface temperature were probably because the steady RANS approach could not capture unsteady, large-scale cross-flow fluctuations and qualitative cross-flow pattern change due to the laterally confined test section. Based on this benchmarking study, lessons and recommendations about experimental methods as well as CFD methods were also provided for the future research.

  12. DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in the thesis on a DSP processor for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking on cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and provide suggestions for further improvement in the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based on assembly coding only. The...

  13. Dynamic self-assembly in living systems as computation.

    Energy Technology Data Exchange (ETDEWEB)

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    2004-06-01

    Biochemical reactions taking place in living systems that map different inputs to specific outputs are intuitively recognized as performing information processing. Conventional wisdom distinguishes such proteins, whose primary function is to transfer and process information, from proteins that perform the vast majority of the construction, maintenance, and actuation tasks of the cell (assembling and disassembling macromolecular structures, producing movement, and synthesizing and degrading molecules). In this paper, we examine the computing capabilities of biological processes in the context of the formal model of computing known as the random access machine (RAM) [Dewdney AK (1993) The New Turing Omnibus. Computer Science Press, New York], which is equivalent to a Turing machine [Minsky ML (1967) Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ]. When viewed from the RAM perspective, we observe that many of these dynamic self-assembly processes - synthesis, degradation, assembly, movement - do carry out computational operations. We also show that the same computing model is applicable at other hierarchical levels of biological systems (e.g., cellular or organism networks as well as molecular networks). We present stochastic simulations of idealized protein networks designed explicitly to carry out a numeric calculation. We explore the reliability of such computations and discuss error-correction strategies (algorithms) employed by living systems. Finally, we discuss some real examples of dynamic self-assembly processes that occur in living systems, and describe the RAM computer programs they implement. Thus, by viewing the processes of living systems from the RAM perspective, a far greater fraction of these processes can be understood as computing than has been previously recognized.
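
    For reference, the RAM model invoked here reduces to a program counter stepping over instructions that read and write numbered registers; a minimal interpreter (instruction set and program are illustrative, not from the paper) shows the mapping the authors draw, with increment/decrement standing in for synthesis/degradation.

      # Minimal random access machine (RAM) interpreter, the formal model
      # the paper maps dynamic self-assembly onto. Instruction set is a toy.
      def run(program, registers):
          pc = 0
          while pc < len(program):
              op, *args = program[pc]
              if op == "INC":            # e.g. synthesis of a molecule
                  registers[args[0]] += 1
              elif op == "DEC":          # e.g. degradation
                  registers[args[0]] -= 1
              elif op == "JZ":           # branch if a register reads zero
                  if registers[args[0]] == 0:
                      pc = args[1]
                      continue
              elif op == "HALT":
                  break
              pc += 1
          return registers

      # Add r1 into r0 by repeated decrement/increment: r0 <- r0 + r1.
      prog = [("JZ", 1, 4), ("DEC", 1), ("INC", 0), ("JZ", 2, 0), ("HALT",)]
      print(run(prog, {0: 3, 1: 4, 2: 0}))   # {0: 7, 1: 0, 2: 0}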

  14. Tolerance Verification of an Industrial Assembly using Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo; Regi, Francesco

    2016-01-01

    This paper reports on results of tolerance verification of a multi-material assembly by using Computed Tomography (CT). The workpiece comprises three parts which are made out of different materials. Five different measurands were inspected. The calculation of measurement uncertainties was attempt...

  15. Benchmarking a multiresolution discontinuous Galerkin shallow water model: Implications for computational hydraulics

    Science.gov (United States)

    Caviedes-Voullième, Daniel; Kesserwani, Georges

    2015-12-01

    Numerical modelling of the wide range of physical scales involved in Shallow Water (SW) problems has been a key challenge in computational hydraulics. Adaptive meshing techniques have been commonly coupled with numerical methods in an attempt to address this challenge. The combination of MultiWavelets (MW) with the Runge-Kutta Discontinuous Galerkin (RKDG) method offers a new philosophy to readily achieve mesh adaptivity driven by the local variability of the numerical solution, without requiring more than one threshold value set by the user. However, the practical merits and implications of the MWRKDG, in terms of how far it contributes to addressing the key challenge above, are yet to be explored. This work systematically explores this through the verification and validation of the MWRKDG for selected steady and transient benchmark tests, which involve the features of real SW problems. Our findings reveal a practical promise of the SW-MWRKDG solver, in terms of efficient and accurate mesh adaptivity, but also suggest further improvement in the SW-RKDG reference scheme to better intertwine with, and harness the prowess of, the MW-based adaptivity.

  16. Computational design of co-assembling protein-DNA nanowires

    Science.gov (United States)

    Mou, Yun; Yu, Jiun-Yann; Wannier, Timothy M.; Guo, Chin-Lin; Mayo, Stephen L.

    2015-09-01

    Biomolecular self-assemblies are of great interest to nanotechnologists because of their functional versatility and their biocompatibility. Over the past decade, sophisticated single-component nanostructures composed exclusively of nucleic acids, peptides and proteins have been reported, and these nanostructures have been used in a wide range of applications, from drug delivery to molecular computing. Despite these successes, the development of hybrid co-assemblies of nucleic acids and proteins has remained elusive. Here we use computational protein design to create a protein-DNA co-assembling nanomaterial whose assembly is driven via non-covalent interactions. To achieve this, a homodimerization interface is engineered onto the Drosophila Engrailed homeodomain (ENH), allowing the dimerized protein complex to bind to two double-stranded DNA (dsDNA) molecules. By varying the arrangement of protein-binding sites on the dsDNA, an irregular bulk nanoparticle or a nanowire with single-molecule width can be spontaneously formed by mixing the protein and dsDNA building blocks. We characterize the protein-DNA nanowire using fluorescence microscopy, atomic force microscopy and X-ray crystallography, confirming that the nanowire is formed via the proposed mechanism. This work lays the foundation for the development of new classes of protein-DNA hybrid materials. Further applications can be explored by incorporating DNA origami, DNA aptamers and/or peptide epitopes into the protein-DNA framework presented here.

  17. Protein-DNA computation by stochastic assembly cascade

    CERN Document Server

    Bar-Ziv, Roy; Libchaber, Albert; 10.1073/pnas.162369099

    2010-01-01

    The assembly of RecA on single-stranded DNA is measured and interpreted as a stochastic finite-state machine that is able to discriminate fine differences between sequences, a basic computational operation. RecA filaments efficiently scan DNA sequence through a cascade of random nucleation and disassembly events that is mechanistically similar to the dynamic instability of microtubules. This iterative cascade is a multistage kinetic proofreading process that amplifies minute differences, even a single base change. Our measurements suggest that this stochastic Turing-like machine can compute certain integral transforms.

  18. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, covering four dairy processes: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of process or plant detail: (1) plant level; (2) process-group level; and (3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established by reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and…

  19. COMPUTER-AIDED BLOCK ASSEMBLY PROCESS PLANNING IN SHIPBUILD-ING BASED ON RULE-REASONING

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhiying; LI Zhen; JIANG Zhibin

    2008-01-01

    Computer-aided block assembly process planning based on rule reasoning is developed in order to improve assembly efficiency and implement automated generation of block assembly process plans in shipbuilding. First, a weighted directed liaison graph (WDLG) is proposed to represent the model of the block assembly process according to the characteristics of assembly relations, and an edge list (EL) is used to describe assembly sequences. Shapes and assembly attributes of block parts are analyzed to determine the assembly position and matched parts of frequently used parts. Then, a series of assembly rules are generalized, and assembly sequences for a block are obtained by means of rule reasoning; a toy rendering of the two data structures is sketched below. Finally, a prototype system for computer-aided block assembly process planning is built. The system has been tested on an actual block, and the results were found to be quite efficient. Meanwhile, the foundation for the automation of block assembly process generation and integration with other systems is established.
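
    A toy rendering of the WDLG and the edge list derived from it (part names and weights invented; the paper's rule base is far richer):

      # Toy weighted directed liaison graph (WDLG) for block assembly:
      # an edge (a, b) with weight w says "b is assembled onto a", with w
      # encoding the priority/strength of the assembly relation.
      wdlg = {
          ("deck_plate", "web_frame"): 3,
          ("web_frame", "stiffener"): 2,
          ("deck_plate", "bracket"): 1,
      }

      # Edge list (EL) ordered by weight gives one candidate assembly sequence.
      el = sorted(wdlg.items(), key=lambda item: -item[1])
      for (base, part), weight in el:
          print(f"assemble {part} onto {base} (weight {weight})")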

  20. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data that has been obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control for sequencing and assembly, and several protocols have been developed to detect contaminations. Visualization has been generated of genomic data in various formats, in addition to alignment, homology detection, and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in an improvement of the N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulent genes up to single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.
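
    For reference, the N50 statistic cited above is the contig length at which the cumulative length of contigs, taken longest first, reaches half the total assembly size; a short sketch (not the thesis pipeline):

      # N50: sort contig lengths descending; N50 is the length at which the
      # running sum first reaches half of the total assembly length.
      def n50(contig_lengths):
          lengths = sorted(contig_lengths, reverse=True)
          half, running = sum(lengths) / 2, 0
          for length in lengths:
              running += length
              if running >= half:
                  return length

      print(n50([100, 80, 60, 40, 20]))  # 80, since 100 + 80 >= 150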

  1. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reacting gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on the pressure and temperature in the reactor and the composition and flow rate of the influent.
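
    A toy sketch of the kind of axial concentration profile such a model produces, assuming first-order mass-transfer-limited depletion under plug flow (the parameter names k_la, u, and c_eq and all values are invented for illustration, not the model's actual formulation):

      # Toy plug-flow profile of gas-phase oxygen along the reactor axis:
      # dc/dz = -(k_la / u) * (c - c_eq). All parameters are invented.
      k_la = 0.8   # lumped gas-to-liquid mass transfer coefficient, 1/s
      u = 0.05     # superficial gas velocity, m/s
      c_eq = 0.1   # equilibrium concentration in the gas phase, kg/m^3
      c, dz = 1.0, 0.01

      profile = []
      for step in range(100):            # march down a 1 m reactor
          c += -(k_la / u) * (c - c_eq) * dz
          profile.append(c)

      print(f"outlet concentration: {profile[-1]:.3f} kg/m^3")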

  2. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show that a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  3. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show that a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  4. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    Science.gov (United States)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, how best to simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  5. PATHWAY ASSEMBLY ASSISTED BY COMPUTER: TEACHING ANAEROBIC GLYCOLYSIS

    Directory of Open Access Journals (Sweden)

    F. M Sarraipa

    2008-05-01

    Knowledge of metabolic pathways is required in higher-education courses in the biological field. This work presents a computer-assisted approach for metabolic pathway self-study, based on their assembly, reaction by reaction. Anaerobic glycolysis was used as a model. The software was designed for users who have basic knowledge of enzymatic catalysis, and to be used with or without a teacher's help. Every reaction is detailed, and the student can move forward only after having assembled each reaction correctly. The software contains a tutorial to help users both on its use and on the correct assembly of each reaction. The software was field-tested in the basic biochemistry disciplines for students of Physical Education, Nursing, Medicine, and Biology at the State University of Campinas (UNICAMP), and in the physiology discipline for students of Physical Education at the Adventist Institute of Sao Paulo (IASP). A MySQL database was structured to collect data on the software usage. Every action taken by the students was recorded. The statistical analysis showed that the number of tries decreases as the students move forward in the pathway assembly. The most difficult reactions, besides the first one, were the ones that presented pattern changes; for example, the sixth reaction was the first oxidation-reduction reaction. In the first reaction the most frequent mistakes were using phosphohexose isomerase as the enzyme or forgetting to include ATP among the substrates. In the sixth reaction the most frequent mistake was forgetting to include NAD+ among the substrates. The analysis of the recorded data can be used by teachers to give special attention in their lectures to the reactions where students made the most mistakes.

  6. Benchmark study of UV/Visible spectra of coumarin derivatives by computational approach

    Science.gov (United States)

    Irfan, Muhammad; Iqbal, Javed; Eliasson, Bertil; Ayub, Khurshid; Rana, Usman Ali; Ud-Din Khan, Salah

    2017-02-01

    A benchmark study of the UV/Visible spectra of simple coumarin and furanocoumarin derivatives was conducted by employing the Density Functional Theory (DFT) and Time-Dependent Density Functional Theory (TD-DFT) approaches. In this study the geometries of ground and excited states, excitation energies, and absorption spectra were estimated using the DFT functionals CAM-B3LYP, WB97XD, HSEH1PBE, and MPW1PW91 and TD-B3LYP with the 6-31+G(d,p) basis set. The CAM-B3LYP functional was found to give close agreement with the experimental values for the furanocoumarin class of coumarins, while MPW1PW91 gave close results for simple coumarins. This study provided insight into the electronic characteristics of the selected compounds and an effective tool for developing and designing better UV-absorber compounds.
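
    Benchmarks of this kind reduce to an error statistic per functional; a toy sketch with invented excitation energies in eV (not data from the paper):

      # Sketch: ranking functionals by mean absolute error against experiment.
      # All numbers are invented placeholders.
      experiment = {"coumarin A": 3.95, "coumarin B": 3.60, "psoralen": 4.10}

      computed = {
          "CAM-B3LYP": {"coumarin A": 4.02, "coumarin B": 3.71, "psoralen": 4.05},
          "MPW1PW91":  {"coumarin A": 3.90, "coumarin B": 3.55, "psoralen": 4.31},
      }

      for functional, values in computed.items():
          mae = sum(abs(values[m] - experiment[m]) for m in experiment) / len(experiment)
          print(f"{functional}: MAE = {mae:.3f} eV")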

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
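
    One of the recommended code-verification tools, the method of manufactured solutions, admits a compact illustration: choose an exact solution, derive the source term it implies, and confirm the solver converges at the expected order. A sketch for a 1D Poisson problem (assuming second-order central differences and homogeneous Dirichlet boundaries):

      # Method of manufactured solutions for u'' = f on [0, 1]:
      # manufacture u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x), and check
      # that the discrete error shrinks at the expected second-order rate.
      import numpy as np

      def solve_poisson(n):
          x = np.linspace(0.0, 1.0, n + 1)
          h = 1.0 / n
          f = -np.pi**2 * np.sin(np.pi * x[1:-1])
          # Tridiagonal system for interior unknowns (u = 0 at both ends).
          A = (np.diag(-2.0 * np.ones(n - 1)) +
               np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1)) / h**2
          u = np.linalg.solve(A, f)
          return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

      e1, e2 = solve_poisson(32), solve_poisson(64)
      print("observed order:", np.log2(e1 / e2))   # approx. 2.0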

  8. Benchmark physics experiment of metallic-fueled LMFBR at FCA. 2; Experiments of FCA assembly XVI-1 and their analyses

    Energy Technology Data Exchange (ETDEWEB)

    Iijima, Susumu; Oigawa, Hiroyuki; Ohno, Akio; Sakurai, Takeshi; Nemoto, Tatsuo; Osugi, Toshitaka; Satoh, Kunio; Hayasaka, Katsuhisa [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Bando, Masaru

    1993-10-01

    The availability of data and methods for the design of a metallic-fueled LMFBR is examined using the experimental results of FCA assembly XVI-1. The experiments included criticality and reactivity coefficients such as Doppler, sodium void, fuel shifting, and fuel expansion. Reaction rate ratios, sample worths, and control rod worths were also measured. The analysis was made using three-dimensional diffusion calculations and JENDL-2 cross sections. Predictions of assembly XVI-1 reactor physics parameters agree reasonably well with the measured values, but for some reactivity coefficients, such as Doppler, large-zone sodium void, and fuel shifting, further improvement of the calculation method is needed. (author).

  9. Research on assembly reliability control technology for computer numerical control machine tools

    Directory of Open Access Journals (Sweden)

    Yan Ran

    2017-01-01

    Nowadays, although more and more companies focus on improving the quality of computer numerical control machine tools, their reliability control still remains an unsolved problem. Since assembly reliability control is very important in product reliability assurance in China, a new method for extracting key assembly processes, based on the integration of quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory, is proposed for computer numerical control machine tools. Firstly, assembly faults and the assembly reliability control flow of computer numerical control machine tools are studied. Secondly, quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory are integrated to build a scientific extraction model, by which the key assembly processes meeting both customer functional demands and failure data distribution can be extracted; an example is given to illustrate the correctness and effectiveness of the method. Finally, an assembly reliability monitoring system is established based on the key assembly processes to realize and simplify this method.
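
    A toy illustration of how such an integrated extraction score might be assembled (a crisp stand-in for the fuzzy blend; the process names, ratings, weights, and threshold are invented, not the paper's model):

      # Toy extraction score mixing QFD importance with FMECA criticality
      # (severity * occurrence * detection), normalized to a 0-1 weight.
      processes = {
          "spindle assembly":   {"qfd": 0.9, "S": 8, "O": 4, "D": 5},
          "guideway mounting":  {"qfd": 0.7, "S": 6, "O": 3, "D": 4},
          "cover installation": {"qfd": 0.2, "S": 2, "O": 2, "D": 2},
      }

      for name, p in processes.items():
          rpn = p["S"] * p["O"] * p["D"] / 1000.0     # normalized RPN in [0, 1]
          score = 0.5 * p["qfd"] + 0.5 * rpn          # crisp stand-in for fuzzy blend
          tag = "<- key process" if score > 0.4 else ""
          print(f"{name}: score = {score:.2f} {tag}")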

  10. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the performance of this company is in comparison with top companies within the same field, by using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  11. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    OpenAIRE

    Meng Li; Jun Liu; Joe Z. Tsien

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly – a group of coherently or sequentially activated neurons – to represent percept, memory, or concept. Despite the rekindled interest in this age-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemb...

  12. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production-theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms (efficiency and comprehensive monotonicity) characterize a natural family of benchmarks, which typically becomes unique. Further axioms are added...

  13. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  14. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.

    Science.gov (United States)

    Li, Meng; Liu, Jun; Tsien, Joe Z

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly (a group of coherently or sequentially activated neurons) to represent percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed randomness within the mature but naïve cell assembly, the Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by the power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or improve the computational logic of brain circuits.

  15. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    Directory of Open Access Journals (Sweden)

    Meng Li

    2016-04-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly – a group of coherently or sequentially activated neurons – to represent percept, memory, or concept. Despite the rekindled interest in this age-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed local randomness within the mature but naïve cell assembly, the recent Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by the power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or enhance the computational logic of brain circuits.
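
    The power-of-two logic referenced in this record is combinatorial: n distinct inputs admit 2^n - 1 nonempty combinations, one per predicted neuronal clique, from the most specific (a single input) to the most general (all inputs). A small enumeration makes the count concrete (input labels are invented):

      # The 2^n - 1 input combinations behind the power-of-two connectivity logic.
      from itertools import combinations

      def cliques(inputs):
          # all nonempty subsets: specific (one input) through general (all)
          subsets = []
          for k in range(1, len(inputs) + 1):
              subsets.extend(combinations(inputs, k))
          return subsets

      inputs = ["odor", "sound", "touch", "light"]
      c = cliques(inputs)
      print(len(c), "cliques for", len(inputs), "inputs")   # 15 == 2**4 - 1
      for s in c:
          print("+".join(s))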

  16. Arithmetic computation using self-assembly of DNA tiles:subtraction and division

    Institute of Scientific and Technical Information of China (English)

    Xuncai Zhang; Yanfeng Wang; Zhihua Chen; Jin Xu; Guangzhao Cui

    2009-01-01

    Recently, experiments have demonstrated that simple binary arithmetic and logical operations can be computed by the process of self-assembly of DNA tiles. In this paper, we show how the tile assembly process can be used for subtraction and division. In order to achieve this aim, four systems, including the comparator system, the duplicator system, the subtraction system, and the division system, are proposed to compute the difference and quotient of two input numbers using the tile assembly model. This work indicates that these systems can be carried out in polynomial time with optimally O(1) distinct tile types in parallel and at very low cost. Furthermore, we provide a scheme to factor the product of two prime numbers, which is a breakthrough in basic biological operations using a molecular computer based on self-assembly.
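
    A toy serial simulation conveys the subtraction idea: each "tile type" maps a minuend bit, subtrahend bit, and incoming borrow to a difference bit and outgoing borrow, and a row of tiles assembles from the least significant bit up (the actual tile assembly model realizes this lookup table chemically and in parallel):

      # Toy simulation of binary subtraction in the spirit of tile assembly:
      # each tile type maps (a_bit, b_bit, borrow_in) -> (diff_bit, borrow_out).
      TILES = {
          (a, b, c): ((a - b - c) % 2, int(a - b - c < 0))
          for a in (0, 1) for b in (0, 1) for c in (0, 1)
      }

      def subtract(a, b, width=8):
          borrow, bits = 0, []
          for i in range(width):                  # least significant bit first
              a_bit, b_bit = (a >> i) & 1, (b >> i) & 1
              diff, borrow = TILES[(a_bit, b_bit, borrow)]
              bits.append(diff)
          return sum(bit << i for i, bit in enumerate(bits))

      print(subtract(13, 6))   # 7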

  17. Computer Aided Design of the Link-Fork Head-Piston Assembly of the Kaplan Turbine with Solidworks

    Directory of Open Access Journals (Sweden)

    Camelia Jianu

    2010-10-01

    The paper presents the steps for the 3D computer-aided design (CAD) of the link-fork head-piston assembly of the Kaplan turbine, made in SolidWorks. The present paper is a tutorial for the 3D geometry of a Kaplan turbine assembly, dedicated to assembly design, drawing geometry, and drawing annotation.

  18. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  19. MULTI-AGENT COMPUTER AIDED ASSEMBLY PROCESS PLANNING SYSTEM FOR SHIP HULL

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-agent computer-aided assembly process planning system (MCAAPP) for ship hulls is presented. The system includes the system framework, a global facilitator, the macro agent structure, an agent communication language, an agent-oriented programming language, knowledge representation, and a reasoning strategy. The system can produce the technological files and technological quotas that satisfy the production needs of a factory.

  20. DNA Self-Assembly and Computation Studied with a Coarse-grained Dynamic Bonded Model

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Fellermann, Harold; Rasmussen, Steen

    2012-01-01

    We utilize a coarse-grained directional dynamic bonding DNA model [C. Svaneborg, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005)] to study DNA self-assembly and DNA computation. In our DNA model, a single nucleotide is represented by a single interaction site, and complementary sites c...

  1. A Solar Powered Wireless Computer Mouse: Design, Assembly and Preliminary Testing of 15 Prototypes

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.; Reich, N.H.; Alsema, E.A.; Netten, M.P.; Veefkind, M.; Silvester, S.; Elzen, B.; Verwaal, M.

    2007-01-01

    The concept and design of a solar-powered wireless computer mouse have been completed, and 15 prototypes have been successfully assembled. After the necessary cutting, the crystalline silicon cells show satisfactory efficiency: up to 14% when implemented into the mouse device. The implemented voltage con...

  2. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  3. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  4. A computational technique to identify the optimal stiffness matrix for a discrete nuclear fuel assembly model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nam-Gyu, E-mail: nkpark@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Kyoung-Joo, E-mail: kyoungjoo@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Kyoung-Hong, E-mail: kyounghong@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Suh, Jung-Min, E-mail: jmsuh@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)

    2013-02-15

    Highlights: ► An identification method for the optimal stiffness matrix of a fuel assembly structure is discussed. ► The least squares optimization method is introduced, and a closed-form solution of the problem is derived. ► The method can be expanded to systems with a limited number of modes. ► Identification error due to a perturbed mode shape matrix is analyzed. ► Verification examples show that the proposed procedure leads to a reliable solution. -- Abstract: A reactor core structural model which is used to evaluate the structural integrity of the core contains nuclear fuel assembly models. Since the reactor core consists of many nuclear fuel assemblies, the use of a refined fuel assembly model leads to a considerable amount of computing time for performing nonlinear analyses such as the prediction of seismic-induced vibration behaviors. The computational time can be reduced by replacing the detailed fuel assembly model with a simplified model that has fewer degrees of freedom, but the dynamic characteristics of the detailed model must be maintained in the simplified model. Such a model, based on an optimal design method, is proposed in this paper. That is, when a mass matrix and a mode shape matrix are given, the optimal stiffness matrix of a discrete fuel assembly model can be estimated by applying the least squares minimization method. The verification of the method is completed by comparing test results and simulation results. This paper shows that the simplified model's dynamic behaviors are quite similar to experimental results and that the suggested method is suitable for identifying a reliable mathematical model for fuel assemblies.
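
    A minimal sketch of the least-squares identification idea as stated, assuming the eigenvalue relation K Phi = M Phi Lambda and an unconstrained fit (the paper derives a closed-form solution and analyzes truncated mode sets and perturbed mode shapes, which this toy version ignores):

      # Least-squares stiffness identification from a mass matrix M, mode
      # shape matrix Phi, and eigenvalues lam, via K Phi = M Phi diag(lam).
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(2)
      n = 6
      K_true = rng.standard_normal((n, n))
      K_true = K_true @ K_true.T + n * np.eye(n)      # SPD "true" stiffness
      M = np.diag(rng.uniform(1.0, 2.0, n))           # diagonal mass matrix

      lam, Phi = eigh(K_true, M)                      # K Phi = M Phi diag(lam)

      # Solve K Phi = B in the least-squares sense: transpose to
      # Phi^T K^T = B^T and solve column-wise with lstsq.
      B = M @ Phi @ np.diag(lam)
      K_est = np.linalg.lstsq(Phi.T, B.T, rcond=None)[0].T

      print(np.allclose(K_est, K_true))   # recovers K when all modes are used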

  5. Assembly of a 3D Cellular Computer Using Folded E-Blocks

    Directory of Open Access Journals (Sweden)

    Shivendra Pandey

    2016-04-01

    The assembly of integrated circuits in three dimensions (3D) provides a possible solution to address the ever-increasing demands of modern-day electronic devices. It has been suggested that by using the third dimension, devices with high density, defect tolerance, short interconnects, and small overall form factors could be created. However, apart from pseudo-3D architectures, such as monolithic integration and die or wafer stacking, the creation of paradigms to integrate electronic low-complexity cellular building blocks in an architecture that tiles space in all three dimensions has remained elusive. Here, we present software and hardware foundations for truly 3D cellular computational devices that could be realized in practice. The computing architecture relies on the scalable, self-configurable and defect-tolerant cell matrix. The hardware is based on a scalable and manufacturable approach for 3D assembly using folded polyhedral electronic blocks (E-blocks). We created monomers, dimers and 2 × 2 × 2 assemblies of polyhedral E-blocks and verified the computational capabilities by implementing simple logic functions. We further show that 63.2% more compact 3D circuits can be obtained with our design automation tools compared to a 2D architecture. Our results provide a proof of concept for a scalable and manufacture-ready process for constructing massive-scale 3D computational devices.

  6. A computational investigation on the connection between dynamics properties of ribosomal proteins and ribosome assembly.

    Directory of Open Access Journals (Sweden)

    Brittany Burton

    Assembly of the ribosome from its protein and RNA constituents has been studied extensively over the past 50 years, and experimental evidence suggests that prokaryotic ribosomal proteins undergo conformational changes during assembly. However, to date, no studies have attempted to elucidate these conformational changes. The present work utilizes computational methods to analyze protein dynamics and to investigate the linkage between dynamics and binding of these proteins during the assembly of the ribosome. Ribosomal proteins are known to be positively charged, and we find the percentage of positive residues in r-proteins to be about twice that of the average protein: Lys+Arg is 18.7% for E. coli and 21.2% for T. thermophilus. Also, positive residues constitute a large proportion of RNA-contacting residues: 39% for E. coli and 46% for T. thermophilus. This affirms the known importance of charge-charge interactions in the assembly of the ribosome. We studied the dynamics of three primary proteins from E. coli and T. thermophilus 30S subunits that bind early in the assembly (S15, S17, and S20) with atomistic molecular dynamics simulations, followed by a study of all r-proteins using elastic network models. Molecular dynamics simulations show that solvent-exposed proteins (S15 and S17) tend to adopt more stable solution conformations than an RNA-embedded protein (S20). We also find that protein residues that contact the 16S rRNA are generally more mobile than the other residues. This is because a larger proportion of contacting residues is located in flexible loop regions. By the use of elastic network models, which are computationally more efficient, we show that this trend holds for most of the 30S r-proteins.

  7. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
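
    To make the central kernel concrete, here is a minimal dense-linear-algebra sketch of PageRank itself (our illustration, not the benchmark's reference code; dangling-node mass is simply left unredistributed for brevity, and the function name is ours):

        import numpy as np

        def pagerank(A, d=0.85, tol=1e-9):
            """Power-iteration PageRank on a dense adjacency matrix A
            (A[i, j] = 1 for an edge i -> j)."""
            n = A.shape[0]
            out = A.sum(axis=1, keepdims=True)
            out[out == 0] = 1.0                  # avoid division by zero
            P = A / out                          # row-stochastic transitions
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1.0 - d) / n + d * (P.T @ r)
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        A = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [1, 0, 0]], dtype=float)
        print(pagerank(A))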

  8. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Full Text Available Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., the BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  9. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
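
    The 1-NN rule described above is simple to state in code. A toy sketch (not the authors' implementation): assign a query the taxon of its most similar reference, scored here by a naive per-site identity in place of BLAST; all names are ours.

        def identity(a: str, b: str) -> float:
            """Naive per-site identity over the shared length (stand-in for BLAST)."""
            n = min(len(a), len(b))
            return sum(x == y for x, y in zip(a, b)) / n

        def one_nn(query: str, references: dict) -> str:
            """Assign the taxon of the best-hit reference sequence to the query."""
            best = max(references, key=lambda ref: identity(query, ref))
            return references[best]

        refs = {"ACGTACGTAA": "Taxon A", "ACGTTCGTTT": "Taxon B"}
        print(one_nn("ACGTACGTAT", refs))        # -> Taxon A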

  10. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  11. Computer simulations predict that chromosome movements and rotations accelerate mitotic spindle assembly without compromising accuracy.

    Science.gov (United States)

    Paul, Raja; Wollman, Roy; Silkworth, William T; Nardi, Isaac K; Cimini, Daniela; Mogilner, Alex

    2009-09-15

    The mitotic spindle self-assembles in prometaphase by a combination of a centrosomal pathway, in which dynamically unstable microtubules search in space until chromosomes are captured, and a chromosomal pathway, in which microtubules grow from chromosomes and focus to the spindle poles. Quantitative mechanistic understanding of how spindle assembly can be both fast and accurate is lacking. Specifically, it is unclear how, if at all, chromosome movements and the combination of the centrosomal and chromosomal pathways affect assembly speed and accuracy. We used computer simulations and high-resolution microscopy to test plausible pathways of spindle assembly in realistic geometry. Our results suggest that an optimal combination of centrosomal and chromosomal pathways, spatially biased microtubule growth, and chromosome movements and rotations is needed to complete prometaphase in 10-20 min while keeping erroneous merotelic attachments down to a few percent. The simulations also provide kinetic constraints for alternative error correction mechanisms, shed light on the dual role of chromosome arm volume, and compare well with experimental data for bipolar and multipolar HT-29 colorectal cancer cells.

  12. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core is known to be a challenging problem, not only for the two-step homogenization approach but also for direct heterogeneous approaches. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes on a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is presented explicitly in this paper. The comprehensive comparative analysis is performed in terms of neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly according to the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factor (ADF) to correct the discontinuity at assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference solution generated by MCNP shows significant errors except for another Monte Carlo code, SERPENT2.
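
    The two-step quantities compared above can be made concrete with the conventional definitions: flux-volume weighting for the homogenized group constants, and, for the ADF, the ratio of heterogeneous to homogeneous surface flux. The sketch below is an illustration of those definitions only (not any of the benchmarked codes), assuming region-wise fluxes and cross sections from a lattice calculation:

        import numpy as np

        def homogenize(sigma, flux, vol):
            """Flux-volume-weighted collapse of region-wise cross sections to
            one assembly-averaged value per energy group.
            sigma, flux: (regions, groups); vol: (regions,)."""
            w = flux * vol[:, None]
            return (sigma * w).sum(axis=0) / w.sum(axis=0)

        def adf(het_surface_flux, hom_surface_flux):
            """Assembly discontinuity factor for a group: heterogeneous over
            homogeneous surface flux."""
            return het_surface_flux / hom_surface_flux

        sigma_a = np.array([[0.010, 0.100],      # two regions, two groups
                            [0.012, 0.120]])
        flux = np.array([[1.0, 0.2],
                         [0.8, 0.3]])
        vol = np.array([2.0, 1.0])
        print(homogenize(sigma_a, flux, vol))
        print(adf(0.95, 1.02))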

  13. Applications of the theory of computation to nanoscale self-assembly

    Science.gov (United States)

    Doty, David Samuel

    This thesis applies the theory of computing to the theory of nanoscale self-assembly, to explore the ability -- and under certain conditions, the inability -- of molecules to automatically arrange themselves in computationally sophisticated ways. In particular, we investigate a model of molecular self-assembly known as the abstract Tile Assembly Model (aTAM), in which different types of square "tiles" represent molecules that, through the interaction of highly specific binding sites on their four sides, can automatically assemble into larger and more elaborate structures. We investigate the possibility of using the inherent randomness of sampling different tiles in a well-mixed solution to drive selection of random numbers from a finite set, and explore the tradeoff between the uniformity of the imposed distribution and the size of structures necessary to process the sampled tiles. We then show that the inherent randomness of the competition of different types of molecules for binding can be exploited in a different way. By adjusting the relative concentrations of tiles, the structure assembled by a tile set is shown to be programmable to a high precision, in the following sense. There is a single tile set that can be made to assemble a square of arbitrary width with high probability, by setting the concentrations of the tiles appropriately, so that all the information about the square's width is "learned" from the concentrations by sampling the tiles. Based on these constructions, and those of other researchers, which have been completely implemented in a simulated environment, we design a high-level domain-specific "visual language" for implementing complex constructions in the aTAM. This language frees the implementer of an aTAM construction from many low-level and tedious details of programming and, together with a visual software tool that directly implements the basic operations of the language, frees the implementer from almost any programming at all
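
    The aTAM dynamics described above are easy to prototype. Below is a toy temperature-2 simulator, a sketch under simplifying assumptions (greedy, deterministic attachment order rather than the stochastic kinetics the thesis studies; all names are ours): a tile attaches at an empty site when its glues matching already-placed neighbours have total strength at least the temperature.

        TEMP = 2                                      # assembly temperature
        OPP = {"N": "S", "E": "W", "S": "N", "W": "E"}
        STEP = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

        def can_attach(tile, pos, assembly):
            """Sum strengths of glues matching already-placed neighbours."""
            x, y = pos
            strength = 0
            for side, (dx, dy) in STEP.items():
                nbr = assembly.get((x + dx, y + dy))
                if nbr and tile[side] is not None and tile[side] == nbr[OPP[side]]:
                    strength += tile[side][1]         # a glue is (label, strength)
            return strength >= TEMP

        def grow(assembly, tiles, steps):
            """Greedy growth: attach the first tile that fits any frontier site."""
            for _ in range(steps):
                frontier = {(x + dx, y + dy) for (x, y) in assembly
                            for dx, dy in STEP.values()} - set(assembly)
                fit = next(((p, t) for p in sorted(frontier) for t in tiles
                            if can_attach(t, p, assembly)), None)
                if fit is None:
                    break                             # nothing can attach any more
                assembly[fit[0]] = fit[1]
            return assembly

        # one tile type with strength-2 east/west glues grows a one-dimensional row
        row_tile = {"N": None, "E": ("a", 2), "S": None, "W": ("a", 2)}
        print(sorted(grow({(0, 0): row_tile}, [row_tile], steps=4)))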

  14. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  15. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  16. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild;

    2015-01-01

    Background. Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in...... development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section...... of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors...

  17. Benchmarking v ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the scope of benchmarking in general. I specify benchmarking as a process and mention its basic rules and guidelines. Further, I define IT benchmarking domains and describe possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is rather a first step toward benchmarking's contribution to the company. IT benchmark...

  18. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
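
    Error numbers like those above come from fitting the standard randomized-benchmarking decay. A sketch of that analysis with made-up data (not the experiment's): fidelity versus sequence length m is fit to F(m) = A·p^m + B, and the decay parameter p is converted to an error per Clifford via r = (1 - p)(d - 1)/d, with d = 4 for two qubits.

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(m, A, p, B):
            """Standard randomized-benchmarking decay model."""
            return A * p ** m + B

        m = np.array([2, 4, 8, 16, 32, 64])
        F = np.array([0.88, 0.80, 0.67, 0.50, 0.34, 0.26])   # illustrative data
        (A, p, B), _ = curve_fit(decay, m, F, p0=[0.7, 0.9, 0.25])
        d = 4                                  # Hilbert-space dimension, two qubits
        print(f"error per Clifford r = {(1 - p) * (d - 1) / d:.3f}")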

  19. Computational modelling of genome-wide [corrected] transcription assembly networks using a fluidics analogy.

    Directory of Open Access Journals (Sweden)

    Yousry Y Azmy

    Full Text Available Understanding how a myriad of transcription regulators work to modulate mRNA output at thousands of genes remains a fundamental challenge in molecular biology. Here we develop a computational tool to aid in assessing the plausibility of gene regulatory models derived from genome-wide expression profiling of cells mutant for transcription regulators. mRNA output is modelled as fluid flow in a pipe lattice, with assembly of the transcription machinery represented by the effect of valves. Transcriptional regulators are represented as external pressure heads that determine flow rate. Modelling mutations in regulatory proteins is achieved by adjusting valves' on/off settings. The topology of the lattice is designed by the experimentalist to resemble the expected interconnection between the modelled agents and their influence on mRNA expression. Users can compare multiple lattice configurations so as to find the one that minimizes the error with experimental data. This computational model provides a means to test the plausibility of transcription regulation models derived from large genomic data sets.
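
    In a linear (Poiseuille-like) regime, the fluidics analogy has a compact linear-algebra core: the pipe lattice reduces to a conductance network. The toy sketch below is our illustration, not the authors' tool; valves are modelled by zeroing a pipe's conductance, regulators impose boundary pressures, and mRNA output is read as flow through an outlet pipe.

        import numpy as np

        def solve_pressures(n_nodes, pipes, fixed):
            """pipes: {(i, j): conductance}; fixed: {node: pressure}.
            Solves Kirchhoff-style flow conservation at the free nodes."""
            G = np.zeros((n_nodes, n_nodes))
            for (i, j), g in pipes.items():      # assemble the network Laplacian
                G[i, i] += g
                G[j, j] += g
                G[i, j] -= g
                G[j, i] -= g
            b = np.zeros(n_nodes)
            for node, p in fixed.items():        # impose boundary pressures
                G[node, :] = 0.0
                G[node, node] = 1.0
                b[node] = p
            return np.linalg.solve(G, b)

        # regulator as a pressure head at node 0, gene output read at node 2;
        # closing the valve on pipe (1, 2) would set its conductance to zero
        p = solve_pressures(3, {(0, 1): 1.0, (1, 2): 1.0}, {0: 1.0, 2: 0.0})
        print(1.0 * (p[1] - p[2]))               # flow through the outlet pipe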

  20. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source model including time and location dependence, and an advanced optical-system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can thus be calculated.

  1. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  2. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  3. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  4. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  5. Performance Evaluation and Benchmarking of Next Intelligent Systems

    Energy Technology Data Exchange (ETDEWEB)

    del Pobil, Angel [Jaume-I University]; Madhavan, Raj [ORNL]; Bonsignorio, Fabio [Heron Robots, Italy]

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  6. Types for DSP Assembler Programs

    DEFF Research Database (Denmark)

    Larsen, Ken

    2006-01-01

    for reuse, and a procedure that computes point-wise vector multiplication. The latter uses a common idiom of prefetching memory resulting in out-of-bounds reading from memory. I present two extensions to the baseline type system: The first extension is a simple modification of some type rules to allow out...... the requirements of a procedure. I implement a proof-of-concept type checker for both the baseline type system and the extensions. I get good performance results on a small benchmark suite of programs representative of handwritten DSP assembler code. These empirical results are encouraging and strongly suggest...

  7. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria

    2009-01-01

    Despite the rapidly increasing number of sequenced and re-sequenced genomes, many issues regarding the computational assembly of large-scale sequencing data have remained unresolved. Computational assembly is crucial in large genome projects as well as for the evolving high-throughput technologies and... in genomic DNA, highly expressed genes and alternative transcripts in EST sequences. We summarize existing comparisons of different assemblers and provide detailed descriptions and directions for download of assembly programs at: http://genome.ku.dk/resources/assembly/methods.html.

  8. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
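
    Metric (i) above has a one-line definition: remove each series' mean, then take the RMSE of the difference. A minimal sketch with synthetic series (illustrative only; the names are ours):

        import numpy as np

        def crmse(series, truth):
            """Centered RMSE: subtract each series' mean before comparing."""
            a = series - series.mean()
            b = truth - truth.mean()
            return np.sqrt(np.mean((a - b) ** 2))

        rng = np.random.default_rng(0)
        truth = rng.normal(size=120)             # ten years of monthly anomalies
        homogenized = truth + rng.normal(scale=0.3, size=120)
        print(f"CRMSE = {crmse(homogenized, truth):.3f}")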

  9. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  10. Mapping Flexibility and the Assembly Switch of Cell Division Protein FtsZ by Computational and Mutational Approaches*♦

    Science.gov (United States)

    Martín-Galiano, Antonio J.; Buey, Rubén M.; Cabezas, Marta; Andreu, José M.

    2010-01-01

    The molecular switch for nucleotide-regulated assembly and disassembly of the main prokaryotic cell division protein FtsZ is unknown despite the numerous crystal structures that are available. We have characterized the functional motions in FtsZ with a computational consensus of essential dynamics, structural comparisons, sequence conservation, and networks of co-evolving residues. Employing this information, we have constructed 17 mutants, which alter the FtsZ functional cycle at different stages, to modify FtsZ flexibility. The mutant phenotypes ranged from benign to total inactivation and included increased GTPase, reduced assembly, and stabilized assembly. Six mutations clustering at the long cleft between the C-terminal β-sheet and core helix H7 deviated FtsZ assembly into curved filaments with inhibited GTPase, which still polymerize cooperatively. These mutations may perturb the predicted closure of the C-terminal domain onto H7 required for switching between curved and straight association modes and for GTPase activation. By mapping the FtsZ assembly switch, this work also gives insight into FtsZ druggability because the curved mutations delineate the putative binding site of the promising antibacterial FtsZ inhibitor PC190723. PMID:20472561

  11. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    Directory of Open Access Journals (Sweden)

    Kumar Parijat Tripathi

    Full Text Available RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery) tools. It offers a report on statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables the users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify the non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps the users to characterize the de-novo assembled reads, obtained from NGS experiments for non-referenced organisms, while it also performs the functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. Web application is

  12. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA

    Science.gov (United States)

    Zuccaro, Antonio; Guarracino, Mario Rosario

    2015-01-01

    RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery) tools. It offers a report on statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables the users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify the non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps the users to characterize the de-novo assembled reads, obtained from NGS experiments for non-referenced organisms, while it also performs the functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. Web application is freely

  13. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.

  14. COMPUTER SIMULATION OF 3-DIMENSIONAL DYNAMIC ASSEMBLY PROCESS OF MECHANICAL ROTATIONAL BODY

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    Focusing on the components of a mechanical rotational body, the data structure and algorithm for component model generation are discussed. Some problems in the assembly process of 3-dimensional graphics of components are studied in detail.

  15. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Handleiding benchmark VO (guide to the secondary education benchmark), 25 November 2008, by J.L.T. Blank, IPSE Studies. A manual for reading the i

  16. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult...

  17. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the Danish National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  18. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  19. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented....

  20. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
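
    Forward selection, one of the better-performing methods above, is simple to express. A generic sketch with scikit-learn (an assumption of convenience; the study's own datasets, descriptors and model settings differ): greedily add the variable that most improves the cross-validated score, and stop when no candidate helps.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        def forward_select(X, y, max_vars=5):
            """Greedily add the variable that most improves the CV score."""
            chosen, remaining, best = [], list(range(X.shape[1])), -np.inf
            while remaining and len(chosen) < max_vars:
                rf = RandomForestRegressor(n_estimators=50, random_state=0)
                scores = {j: cross_val_score(rf, X[:, chosen + [j]], y, cv=3).mean()
                          for j in remaining}
                j_best = max(scores, key=scores.get)
                if scores[j_best] <= best:
                    break                        # no candidate improves the score
                best = scores[j_best]
                chosen.append(j_best)
                remaining.remove(j_best)
            return chosen

        rng = np.random.default_rng(1)
        X = rng.normal(size=(80, 10))
        y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=80)
        print(forward_select(X, y))              # typically picks columns 3 and 0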

  1. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group]

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  2. Molecular design driving tetraporphyrin self-assembly on graphite: a joint STM, electrochemical and computational study

    Science.gov (United States)

    El Garah, M.; Santana Bonilla, A.; Ciesielski, A.; Gualandi, A.; Mengozzi, L.; Fiorani, A.; Iurlo, M.; Marcaccio, M.; Gutierrez, R.; Rapino, S.; Calvaresi, M.; Zerbetto, F.; Cuniberti, G.; Cozzi, P. G.; Paolucci, F.; Samorì, P.

    2016-07-01

    Tuning the intermolecular interactions among suitably designed molecules forming highly ordered self-assembled monolayers is a viable approach to control their organization at the supramolecular level. Such tuning is particularly important when applied to sophisticated molecules combining functional units which possess specific electronic properties, such as electron/energy transfer, in order to develop multifunctional systems. Here we have synthesized two tetraferrocene-porphyrin derivatives that by design can selectively self-assemble at the graphite/liquid interface into either face-on or edge-on monolayer-thick architectures. The former supramolecular arrangement consists of two-dimensional planar networks based on hydrogen bonding among adjacent molecules, whereas the latter relies on columnar assembly generated through intermolecular van der Waals interactions. Scanning Tunneling Microscopy (STM) at the solid-liquid interface has been corroborated by cyclic voltammetry measurements and assessed by theoretical calculations to gain multiscale insight into the arrangement of the molecules with respect to the basal plane of the surface. The STM analysis allowed the visualization of these assemblies with sub-nanometer resolution, and cyclic voltammetry measurements provided direct evidence of the interactions of porphyrin and ferrocene with the graphite surface and also offered insight into the dynamics within the face-on and edge-on assemblies. The experimental findings were supported by theoretical calculations to shed light on the electronic and other physical properties of both assemblies. The capability to engineer functional nanopatterns through the self-assembly of porphyrins containing ferrocene units is a key step toward the bottom-up construction of multifunctional molecular nanostructures and nanodevices.

  3. A microprocessor-based single board computer for high energy physics event pattern recognition

    CERN Document Server

    Bernstein, H; Imossi, R; Kopp, J K; Kramer, M A; Love, W A; Ozaki, S; Platner, E D

    1981-01-01

    A single-board MC 68000 based computer has been assembled and benchmarked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating-point coprocessor to achieve throughputs equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTR

  4. Computer-aided design of nanostructures from self- and directed-assembly of soft matter building blocks

    Science.gov (United States)

    Nguyen, Trung Dac

    2011-12-01

    Functional materials that are active at nanometer scales and adaptive to environment have been highly desirable for a huge array of novel applications ranging from photonics, sensing, fuel cells, smart materials to drug delivery and miniature robots. These bio-inspired features imply that the underlying structure of this type of materials should possess a well-defined ordering as well as the ability to reconfigure in response to a given external stimulus such as temperature, electric field, pH or light. In this thesis, we employ computer simulation as a design tool, demonstrating that various ordered and reconfigurable structures can be obtained from the self- and directed-assembly of soft matter nano-building blocks such as nanoparticles, polymer-tethered nanoparticles and colloidal particles. We show that, besides thermodynamic parameters, the self-assembly of these building blocks is governed by nanoparticle geometry, the number and attachment location of tethers, solvent selectivity, balance between attractive and repulsive forces, nanoparticle size polydispersity, and field strength. We demonstrate that higher-order nanostructures, i.e. those for which the correlation length is much greater than the length scale of individual assembling building blocks, can be hierarchically assembled. For instance, bilayer sheets formed by laterally tethered rods fold into spiral scrolls and helical structures, which are able to adopt different morphologies depending on the environmental condition. We find that a square grid structure formed by laterally tethered nanorods can be transformed into a bilayer sheet structure, and vice versa, upon shortening, or lengthening, the rod segments, respectively. From these inspiring results, we propose a general scheme by which shape-shifting particles are employed to induce the reconfiguration of pre-assembled structures. Finally, we investigate the role of an external field in assisting the formation of assembled structures that would

  5. Exploring Programmable Self-Assembly in Non-DNA based Molecular Computing

    CERN Document Server

    Terrazas, German; Krasnogor, Natalio

    2013-01-01

    Self-assembly is a phenomenon observed in nature at all scales, where autonomous entities build complex structures without external influence or a centralised master plan. Modelling such entities and programming correct interactions among them is crucial for controlling the manufacture of desired complex structures at the molecular and supramolecular scale. This work focuses on a programmability model for non-DNA-based molecules and complex behaviour analysis of their self-assembled conformations. In particular, we look into modelling, programming and simulation of porphyrin molecule self-assembly and apply Kolmogorov complexity-based techniques to classify and assess simulation results in terms of information content. The analysis focuses on phase transition, clustering, variability and parameter discovery, which as a whole pave the way to the notion of complex systems programmability.
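
    Kolmogorov complexity is uncomputable, so analyses of this kind typically fall back on a compression-based proxy; that this paper uses exactly such an estimator is an assumption here. A minimal sketch: the compressed length of a serialized configuration approximates its information content, so regular assemblies score lower than disordered ones.

        import random
        import zlib

        def complexity(config: str) -> int:
            """Compressed size in bytes: a crude upper-bound proxy for the
            Kolmogorov complexity of a serialized configuration."""
            return len(zlib.compress(config.encode()))

        random.seed(0)
        ordered = "AB" * 500                     # a regular assembled pattern
        disordered = "".join(random.choice("AB") for _ in range(1000))
        print(complexity(ordered) < complexity(disordered))   # True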

  6. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  7. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  8. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  9. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL]; Ronningen, Reginald M. [Michigan State University, East Lansing]; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)]

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  10. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  11. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  12. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  13. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  14. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    The studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism. Benchmarking was a complete process

  15. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  16. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  17. Computational Comparison of Human Genomic Sequence Assemblies for a Region of Chromosome 4

    OpenAIRE

    Semple, Colin; Morris, Stewart W.; Porteous, David J.; Evans, Kathryn L.

    2002-01-01

    Much of the available human genomic sequence data exist in a fragmentary draft state following the completion of the initial high-volume sequencing performed by the International Human Genome Sequencing Consortium (IHGSC) and Celera Genomics (CG). We compared six draft genome assemblies over a region of chromosome 4p (D4S394–D4S403), two consecutive releases by the IHGSC at University of California, Santa Cruz (UCSC), two consecutive releases from the National Centre for Biotechnology Informa...

  18. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  19. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  20. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  1. A comparative study of methods to compute the free energy of an ordered assembly by molecular simulation.

    Science.gov (United States)

    Moustafa, Sabry G; Schultz, Andrew J; Kofke, David A

    2013-08-28

    We present a comparative study of methods to compute the absolute free energy of a crystalline assembly of hard particles by molecular simulation. We consider all combinations of three choices defining the methodology: (1) the reference system: Einstein crystal (EC), interacting harmonic (IH), or r^-12 soft spheres (SS); (2) the integration path: Frenkel-Ladd (FL) or penetrable ramp (PR); and (3) the free-energy method: overlap-sampling free-energy perturbation (OS) or thermodynamic integration (TI). We apply the methods to FCC hard spheres at the melting state. The study shows that, in the best cases, OS and TI are roughly equivalent in efficiency, with a slight advantage to TI. We also examine the multistate Bennett acceptance ratio method, and find that it offers no advantage for this particular application. The PR path shows advantage in general over FL, providing results of the same precision with 2-9 times less computation, depending on the choice of a common reference. The best combination for the FL path is TI+EC, which is how the FL method is usually implemented. For the PR path, the SS system (with either TI or OS) proves to be most effective; it gives equivalent precision to TI+FL+EC with about 6 times less computation (or 12 times less, if discounting the computational effort required to establish the SS reference free energy). Both the SS and IH references show great advantage in capturing finite-size effects, providing a variation in free-energy difference with system size that is about 10 times less than EC. This result further confirms previous work for soft-particle crystals, and suggests that free-energy calculations for a structured assembly be performed using a hybrid method, in which the finite-system free-energy difference is added to the extrapolated (1/N→0) absolute free energy of the reference system, to obtain a result that is nearly independent of system size.
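
    The combination named above (thermodynamic integration against an Einstein-crystal reference along a Frenkel-Ladd-style path) reduces, in practice, to quadrature over ensemble averages of dU/dλ. A minimal sketch in Python, with a toy integrand standing in for the per-λ simulation averages (the integrand, quadrature order, and reference free energy are illustrative assumptions, not values from the paper):

    ```python
    # Thermodynamic integration along a Frenkel-Ladd-style path:
    # A_target = A_reference + integral over lambda of <dU/dlambda>.
    # Each integrand value would normally come from a separate
    # equilibrium simulation; here a smooth toy curve stands in.
    import numpy as np

    def mean_dU_dlambda(lam):
        """Placeholder for the simulated <dU/dlambda> at coupling lam."""
        return 50.0 * np.exp(-3.0 * lam)  # hypothetical toy integrand

    # Gauss-Legendre quadrature keeps the number of required
    # simulations (integrand evaluations) small.
    nodes, weights = np.polynomial.legendre.leggauss(10)
    lam = 0.5 * (nodes + 1.0)           # map [-1, 1] -> [0, 1]
    w = 0.5 * weights

    A_reference = -123.4                # hypothetical Einstein-crystal value
    delta_A = np.sum(w * np.array([mean_dU_dlambda(l) for l in lam]))
    print("A_target ~", A_reference + delta_A)
    ```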

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows
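
    The iotrace tool itself is not documented in this record, but the kind of dynamic I/O profile it captures can be approximated on Linux by sampling the kernel's per-process counters in /proc/<pid>/io. A minimal sketch (the field names are the standard /proc fields; the sampling interval and target process are arbitrary choices):

    ```python
    # Hedged sketch of per-process I/O profiling on Linux: periodically
    # sample /proc/<pid>/io and report byte deltas between samples.
    # This approximates, not reproduces, what a tool like iotrace does.
    import os
    import time

    def read_io_counters(pid):
        counters = {}
        with open(f"/proc/{pid}/io") as f:
            for line in f:
                key, value = line.split(":")
                counters[key.strip()] = int(value)
        return counters

    def profile(pid, interval=1.0, samples=5):
        prev = read_io_counters(pid)
        for _ in range(samples):
            time.sleep(interval)
            cur = read_io_counters(pid)
            print({k: cur[k] - prev[k]
                   for k in ("rchar", "wchar", "read_bytes", "write_bytes")})
            prev = cur

    if __name__ == "__main__":
        profile(os.getpid())   # profile this process as a demo target
    ```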

  3. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

    1990-06-01

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam by the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal, Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
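
    The parabolic oxidation law with Arrhenius temperature dependence described above can be written as w² = K(T)·t with K(T) = A·exp(−Q/RT), where w is the oxide mass gain. A worked sketch (the constants below are illustrative placeholders, not the TRUMP-BD correlation values):

    ```python
    # Parabolic metal-water oxidation kinetics with an Arrhenius rate
    # constant: w^2 = K(T) * t, K(T) = A * exp(-Q / (R * T)).
    import math

    R = 8.314    # gas constant, J/(mol K)
    A = 3.3e4    # hypothetical pre-exponential factor
    Q = 1.9e5    # hypothetical activation energy, J/mol

    def mass_gain(T_kelvin, t_seconds):
        K = A * math.exp(-Q / (R * T_kelvin))
        return math.sqrt(K * t_seconds)   # parabolic law: w = sqrt(K t)

    # Oxidation slows as the layer thickens (dw/dt ~ 1/w), but the
    # rate constant grows steeply with temperature:
    for T in (1273.0, 1473.0, 1673.0):
        print(T, mass_gain(T, 60.0))
    ```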

  4. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  5. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  6. Development of computational methods to describe the mechanical behavior of PWR fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Wanninger, Andreas; Seidl, Marcus; Macian-Juan, Rafael [Technische Univ. Muenchen, Garching (Germany). Dept. of Nuclear Engineering

    2016-10-15

    To investigate the static mechanical response of PWR fuel assemblies (FAs) in the reactor core, a structural FA model is being developed using the FEM code ANSYS Mechanical. To assess the capabilities of the model, lateral deflection tests are performed for a reference FA. For this purpose, we distinguish between two environments, in-laboratory and in-reactor, at different burn-ups. The results are in qualitative agreement with experimental tests and show the stiffness decrease of the FAs during irradiation in the reactor core.

  7. Computational simulation of thermal hydraulic processes in the model LMFBR fuel assembly

    Science.gov (United States)

    Bayaskhalanov, M. V.; Merinov, I. G.; Korsun, A. S.; Vlasov, M. N.

    2017-01-01

    The aim of this study was to verify a developed software module on an experimental fuel assembly with partial blockage of the flow section. The developed software module for simulation of thermal hydraulic processes in liquid metal coolant is based on the theory of anisotropic porous media with a specially developed integral turbulence model for determining the coefficients. The finite element method is used for the numerical solution. Experimental data for a hexahedral assembly with electrically heated smooth cylindrical rods cooled by liquid sodium are considered. The results obtained with the developed software module for a case of corner blockade are presented. The calculated distribution of coolant velocities showed the presence of vortex flow behind the blockade. Features of the vortex region are in good quantitative and qualitative agreement with experimental data. This demonstrates the efficiency of the hydrodynamic unit of the developed software module. However, the computed radial coolant temperature profiles differ significantly from the experimental ones in the vortex flow region. The possible reasons for this discrepancy were analyzed.

  8. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  9. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  10. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  11. Computer-assisted assembly and correction simulation for complex axis deviations using the Ilizarov fixator.

    Science.gov (United States)

    Kochs, A

    1995-01-01

    In axis correction with the Ilizarov ring fixator, the correction results are often insufficient or there are unexpected translation effects, which can be causally attributed to wrong preoperative planning or inaccurate assembly. To avoid such results, a computerised simulation was developed. By digitising the bone outlines traced from X-radiographs together with a reference scale, preoperative correction planning can be performed and simulated with standard software. This can be used while constructing the apparatus and positioning the joints. In addition, the translation effect of the bone fragments can be simulated by arbitrarily choosing the pivot of the correction. By transferring the X-radiograph true to scale, one can compare the ring planes before and after correction. It is possible to estimate the necessary distraction as well as compression and thus the postoperative distraction mode. Using computerised planning, the apparatus construction can be optimised and complications caused by misplanning avoided. Both inexperienced and experienced users can benefit from this aid.

  12. Qualitative computational bioanalytics: assembly of viral channel-forming peptides around mono and divalent ions.

    Science.gov (United States)

    Li, Li-Hua; Hsu, Hao-Jen; Fischer, Wolfgang B

    2013-12-06

    A fine-grained docking protocol was used to generate a bundle-like structure of the bitopic membrane protein Vpu from HIV-1. Vpu is a type I membrane protein with 81 amino acids. It is proposed that Vpu forms ion- and substrate-conducting bundles, which are located at the plasma membrane in the infected cell. The Vpu1-32 peptide that includes the transmembrane domain (TMD) is assembled into homo-pentameric bundles around prepositioned Na, K, Ca or Cl ions. For bundles with the lowest energy, the TMDs generate a hydrophobic pore. Bundles in which Ser-24 faces the pore have higher energy. The tilt of the helices in the lowest energy bundles is larger than bundles with serines facing the pore. Left-handed bundles are lowest in energy where the ions are located at the serines.

  13. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to-date for some of the CRP benchmarks. (authors)

  14. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex, relative performance may stem from product innovation, management quality, or work organization; other factors can play a role even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines in turn which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.
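
    The residual-based reading of revealed performance sketched in this abstract can be illustrated with a small regression: profitability is regressed on observable drivers such as firm size, and each firm's residual is read as its deviation from the benchmark implied by its peers. The specification and data below are synthetic assumptions, not the paper's model:

    ```python
    # Benchmarking performance as a regression residual: firms whose
    # profitability exceeds what their observable characteristics
    # predict sit above the peer-implied benchmark. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    size = rng.normal(5.0, 1.0, n)        # log assets (illustrative)
    power = rng.uniform(0.0, 1.0, n)      # market-power proxy
    roa = 0.02 * size + 0.05 * power + rng.normal(0.0, 0.01, n)

    X = np.column_stack([np.ones(n), size, power])
    beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
    residual = roa - X @ beta             # above/below benchmark
    print("top performer index:", int(np.argmax(residual)))
    ```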

  15. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  16. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — is reported in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random...

  17. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  18. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  19. Chain Assemblies from Nanoparticles Synthesized by Atmospheric Pressure Plasma Enhanced Chemical Vapor Deposition: The Computational View.

    Science.gov (United States)

    Mishin, Maxim V; Zamotin, Kirill Y; Protopopova, Vera S; Alexandrov, Sergey E

    2015-12-01

    This article presents a computational study of nanoparticle self-organization on a solid-state substrate surface, informed by experimental results in which nanoparticles were synthesised during atmospheric pressure plasma enhanced chemical vapor deposition (AP-PECVD). The experimental study of silicon dioxide nanoparticle synthesis by AP-PECVD demonstrated that the entire deposit volume consists of tangled chains of nanoparticles. In certain cases, micron-sized fractals are formed from tangled chains due to deposit rearrangement. This work is focused on the study of tangled chain formation only. In order to reveal the formation mechanism, a physico-mathematical model was developed, based on solving the equations of motion for charged and neutral nanoparticles in potential fields using empirical interaction potentials. A computational simulation was carried out based on this model. As a result, the influence of experimental parameters such as deposition duration, particle charge, gas flow velocity, and angle of gas flow was quantified. It was demonstrated that the electrical charges carried by nanoparticles from the discharge area are not responsible for the formation of tangled chains, whereas nanoparticle kinetic energy plays a crucial role in deposit morphology and density. The computational results were consistent with the experimental results.
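
    The model class described here, integrating equations of motion for charged and neutral nanoparticles under empirical pair potentials, can be sketched with a velocity-Verlet loop. The Lennard-Jones-plus-Coulomb potential and all parameter values below are assumptions for illustration; the paper's actual empirical potentials are not given in the abstract:

    ```python
    # Toy integration of charged/neutral particle motion under an
    # assumed pair potential (Lennard-Jones plus Coulomb, reduced units).
    import numpy as np

    def pair_forces(pos, q, eps=1.0, sigma=1.0, ke=1.0):
        """Sum of LJ and Coulomb pair forces (toy potentials)."""
        f = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r_vec = pos[i] - pos[j]
                r = np.linalg.norm(r_vec)
                lj = 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
                coulomb = ke * q[i] * q[j] / r ** 2
                f_pair = (lj + coulomb) * r_vec / r
                f[i] += f_pair
                f[j] -= f_pair
        return f

    # eight particles on a cubic grid; some carry unit charges, some are neutral
    pos = 2.0 * np.array([[i, j, k] for i in (0, 1) for j in (0, 1)
                          for k in (0, 1)], dtype=float)
    vel = np.zeros_like(pos)
    q = np.array([1.0, -1.0, 0.0, 0.0, 1.0, -1.0, 0.0, 0.0])
    dt, mass = 1e-3, 1.0

    f = pair_forces(pos, q)
    for _ in range(500):                   # velocity-Verlet integration
        pos += vel * dt + 0.5 * f / mass * dt ** 2
        f_new = pair_forces(pos, q)
        vel += 0.5 * (f + f_new) / mass * dt
        f = f_new
    print(pos.round(2))
    ```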

  20. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  1. Amphiphilic nanosheet self-assembly at the water/oil interface: computer simulations.

    Science.gov (United States)

    Xiang, Wenjun; Zhao, Shuangliang; Song, Xianyu; Fang, Shenwen; Wang, Fen; Zhong, Cheng; Luo, Zhaoyang

    2017-03-15

    In this paper, dissipative particle dynamics simulations are performed to study the interfacial and emulsion stabilizing properties of various systems of amphiphilic nanosheets (ANs) self-assembled at the oil/water (O/W) interface. The ANs have a dimensionally symmetric structure that encompasses a triangular plate at the center and two soft comb-like shells constructed with hydrophilic and hydrophobic polymers. As the simulation results show, the AN molecules are highly oriented in interfacial films with their triangular nanosheets parallel to the O/W interface, while their hydrophobic and hydrophilic segments attempt to immerse into the oil phase and aqueous phase, respectively. These results reveal that the rotation of ANs at oil/water interfaces is greatly restricted, while their nanosheet (or planar) configuration facilitates their favorable orientation, thus making the emulsion more stable. At higher concentrations, a wrapped-like or micelle morphology is observed. The O/W emulsions stabilized by ANs were also simulated, and it is interesting to find AN 'patches' at the O/W interface, which resemble the leather patches on a football. By introducing the "amphiphilic nanosheet balance" concept, the hydrophilic-lipophilic balance (HLB) values of ANs were calculated. Due to their two-dimensional symmetry, the HLB values of ANs tend toward approximately 1, which reveals a stronger stability for emulsions.
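
    For orientation, the classical Griffin hydrophilic-lipophilic balance is HLB = 20·M_hydrophilic/M_total on a 0-20 scale, while balance values tending toward 1 suggest a ratio-type definition comparing the hydrophilic and hydrophobic parts directly. A minimal sketch with made-up molecular weights (the paper's exact "amphiphilic nanosheet balance" definition is not specified in the abstract):

    ```python
    # Two common ways to express amphiphilic balance; the molecular
    # weights below are hypothetical, not taken from the paper.
    def griffin_hlb(m_hydrophilic, m_total):
        """Classical Griffin scale (0-20)."""
        return 20.0 * m_hydrophilic / m_total

    def ratio_balance(m_hydrophilic, m_hydrophobic):
        """Ratio-type balance; 1.0 means a mass-symmetric amphiphile."""
        return m_hydrophilic / m_hydrophobic

    print(griffin_hlb(500.0, 1000.0))    # -> 10.0 on the Griffin scale
    print(ratio_balance(500.0, 500.0))   # -> 1.0, balanced amphiphilicity
    ```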

  2. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

    Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research or experimental evidence has so far demonstrated this assertion or provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. A crucial need remains to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  3. Self-assembly of micelles in organic solutions of lecithin and bile salt: Mesoscale computer simulation

    Science.gov (United States)

    Markina, A.; Ivanov, V.; Komarov, P.; Khokhlov, A.; Tung, S.-H.

    2016-11-01

    We propose a coarse-grained model for studying the effects of adding bile salt to lecithin organosols by means of computer simulation. This model allows us to reveal the mechanisms behind the experimentally observed increase in viscosity with increasing bile salt concentration. We show that increasing the bile salt to lecithin molar ratio induces the growth of elongated micelles of ellipsoidal and cylindrical shape due to the incorporation of disklike bile salt molecules. These wormlike micelles can entangle into a transient network displaying perceptible viscoelastic properties.

  4. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is treated, after which...

  5. Parallel molecular computation of modular-multiplication with two same inputs over finite field GF(2^n) using self-assembly of DNA tiles.

    Science.gov (United States)

    Li, Yongnan; Xiao, Limin; Ruan, Li

    2014-06-01

    Two major advantages of DNA computing - huge memory capacity and high parallelism - are being explored for large-scale parallel computing, mass data storage and cryptography. The tile assembly model is a highly distributed parallel model of DNA computing. The finite field GF(2^n) is one of the most commonly used mathematical structures for constructing public-key cryptosystems. It is still an open question how to implement the basic operations over the finite field GF(2^n) using DNA tiles. This paper proposes how the parallel tile assembly process could be used for computing the modular square, i.e., modular multiplication with two identical inputs, over the finite field GF(2^n). This system obtains the final result in fewer steps than another molecular computing system designed in our previous study, because squaring and reduction are executed simultaneously, whereas the previous system computes the reduction after calculating the square. Rigorous theoretical proofs are described, and a specific computing instance is given after defining the basic tiles and the assembly rules. The time complexity of this system is 3n-1 and the space complexity is 2n^2.
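
    A conventional (non-DNA) sketch of the arithmetic the tile-assembly system implements may help: carry-free polynomial multiplication over GF(2) followed by reduction modulo an irreducible field polynomial, with squaring as the special case of equal inputs. The GF(2^8) field polynomial below is a familiar example, not the paper's parameter choice:

    ```python
    # Modular multiplication over GF(2^n): schoolbook carry-less
    # multiplication (XOR replaces addition), then reduction of
    # high-degree terms with the field polynomial.
    def gf2n_mul(a, b, mod_poly, n):
        product = 0
        while b:
            if b & 1:
                product ^= a      # add (XOR) the shifted multiplicand
            a <<= 1
            b >>= 1
        # fold bits of degree >= n back down using the field polynomial
        for bit in range(2 * n - 2, n - 1, -1):
            if product & (1 << bit):
                product ^= mod_poly << (bit - n)
        return product

    AES_POLY = 0b100011011   # x^8 + x^4 + x^3 + x + 1, so n = 8
    x = 0b1010011
    print(bin(gf2n_mul(x, x, AES_POLY, 8)))   # modular square of x
    ```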

  6. ABACUS, a direct method for protein NMR structure computation via assembly of fragments.

    Science.gov (United States)

    Grishaev, A; Steren, C A; Wu, B; Pineda-Lucena, A; Arrowsmith, C; Llinás, M

    2005-10-01

    The ABACUS algorithm obtains the protein NMR structure from unassigned NOESY distance restraints. ABACUS works as an integrated approach that uses the complete set of available NMR experimental information in parallel and yields spin system typing, NOE spin pair identities, sequence specific resonance assignments, and protein structure, all at once. The protocol starts from unassigned molecular fragments (including single amino acid spin systems) derived from triple-resonance ¹H/¹³C/¹⁵N NMR experiments. Identifications of connected spin systems and NOEs precede the full sequence specific resonance assignments. The latter are obtained iteratively via Monte Carlo-Metropolis and/or probabilistic sequence selections, molecular dynamics structure computation and BACUS filtering (A. Grishaev and M. Llinás, J Biomol NMR 2004;28:1-10). ABACUS starts from scratch, without the requirement of an initial approximate structure, and improves iteratively the NOE identities in a self-consistent fashion. The procedure was run as a blind test on data recorded on mth1743, a 70-amino acid genomic protein from M. thermoautotrophicum. It converges to a structure in ca. 15 cycles of computation on a 3-GHz processor PC. The calculated structures are very similar to the ones obtained via conventional methods (1.22 Å backbone RMSD). The success of ABACUS on mth1743 further validates BACUS as a NOESY identification protocol.
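
    The Monte Carlo-Metropolis selection step mentioned above can be sketched generically: propose swapping the sequence positions of two spin systems and accept or reject based on an assignment score. The scoring function below is a stand-in for illustration; ABACUS's actual probabilistic scores are richer:

    ```python
    # Generic Metropolis step over discrete sequence assignments:
    # propose a swap, accept if the score improves or with Boltzmann
    # probability otherwise. Toy score, not the ABACUS objective.
    import math
    import random

    def metropolis_step(assignment, score, temperature):
        i, j = random.sample(range(len(assignment)), 2)
        proposal = assignment[:]
        proposal[i], proposal[j] = proposal[j], proposal[i]
        delta = score(proposal) - score(assignment)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            return proposal          # accept
        return assignment            # reject

    # toy score: penalize spin systems placed far from a preferred position
    preferred = [2, 0, 3, 1, 4]
    score = lambda a: sum(abs(a[k] - preferred[k]) for k in range(len(a)))
    state = [0, 1, 2, 3, 4]
    for _ in range(2000):
        state = metropolis_step(state, score, temperature=0.5)
    print(state)   # should approach `preferred`
    ```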

  7. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms.

    Science.gov (United States)

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2015-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and "neuromorphic algorithms" are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for

  8. Comparing neuromorphic solutions in action: implementing a bio-inspired solution to a benchmark classification task on three parallel-computing platforms

    Directory of Open Access Journals (Sweden)

    Alan eDiamond

    2016-01-01

    Full Text Available Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and neuromorphic algorithms are being developed. As they are maturing towards deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analogue Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model’s ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication

  9. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  10. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  11. Xenopus laevis: an ideal experimental model for studying the developmental dynamics of neural network assembly and sensory-motor computations.

    Science.gov (United States)

    Straka, Hans; Simmers, John

    2012-04-01

    The amphibian Xenopus laevis represents a highly amenable model system for exploring the ontogeny of central neural networks, the functional establishment of sensory-motor transformations, and the generation of effective motor commands for complex behaviors. Specifically, the ability to employ a range of semi-intact and isolated preparations for in vitro morphophysiological experimentation has provided new insights into the developmental and integrative processes associated with the generation of locomotory behavior during changing lifestyles. In vitro electrophysiological studies have begun to explore the functional assembly, disassembly and dynamic plasticity of spinal pattern generating circuits as Xenopus undergoes the developmental switch from larval tail-based swimming to adult limb-based locomotion. Major advances have also been made in understanding the developmental onset of multisensory signal processing for reactive gaze and posture stabilizing reflexes during self-motion. Additionally, recent evidence from semi-intact animal and isolated CNS experiments has provided compelling evidence that in Xenopus tadpoles, predictive feed-forward signals from the spinal locomotor pattern generator are engaged in minimizing visual disturbances during tail-based swimming. This new concept questions the traditional view of retinal image stabilization, which in vertebrates has been exclusively attributed to sensory-motor transformations of body/head motion-detecting signals. Moreover, the changes in visuomotor demands associated with the developmental transition in propulsive strategy from tail- to limb-based locomotion during metamorphosis presumably necessitate corresponding adaptive alterations in the intrinsic spinoextraocular coupling mechanism. Consequently, Xenopus provides a unique opportunity to address basic questions on the developmental dynamics of neural network assembly and sensory-motor computations for vertebrate motor behavior in general.

  12. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.

    2012-10-29

    The preference of experimentally realistic sized 4-nm facetted nanocrystals (NCs), emulating Pb chalcogenide quantum dots, to spontaneously choose a crystal habit for NC superlattices (Face Centered Cubic (FCC) vs. Body Centered Cubic (BCC)) is investigated using molecular simulation approaches. Molecular dynamics simulations, using united atom force fields, are conducted to simulate systems comprised of cube-octahedral-shaped NCs covered by alkyl ligands, in the absence and presence of experimentally used solvents, toluene and hexane. System sizes in the 400,000-500,000-atom scale followed for nanoseconds are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice and its preference of symmetry, as we vary the ligand length of the chains, from 9 to 24 CH2 groups, and the choice of solvent. We find that hexane and toluene are "good" solvents for the NCs, which penetrate the ligand corona all the way to the NC surfaces. We determine the free energy difference between FCC and BCC NC superlattice symmetries to determine the system's preference for either geometry, as the ratio of the length of the ligand to the diameter of the NC is varied. We explain these preferences in terms of different mechanisms in play, whose relative strength determines the overall choice of geometry. © 2012 Wiley Periodicals, Inc.

  13. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  14. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  15. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  16. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    Energy Technology Data Exchange (ETDEWEB)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for ²³⁸U(n,f), ²³⁸U(n,γ), ²³⁹Pu(n,f), and ²³⁹Pu(ν̄). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined.
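
    The uncertainty quantification described here follows the standard "sandwich rule": given group-wise relative sensitivities S of a performance parameter R to a cross section, and a relative covariance matrix C for that cross section, the induced relative variance is S·C·Sᵀ. A minimal sketch with illustrative numbers (not ENDF/B-IV data):

    ```python
    # Sandwich rule for nuclear-data-induced uncertainty:
    # var(R)/R^2 = S C S^T, with S the relative sensitivity vector.
    import numpy as np

    ngroups = 4
    S = np.array([0.05, 0.12, 0.30, 0.08])   # relative sensitivities per group

    # hypothetical relative covariance: 5% std. dev., short-range correlation
    std = np.full(ngroups, 0.05)
    corr = np.fromfunction(lambda i, j: 0.5 ** abs(i - j), (ngroups, ngroups))
    C = np.outer(std, std) * corr

    rel_var = S @ C @ S
    print("relative std. dev. of R: %.4f" % np.sqrt(rel_var))
    ```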

  17. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  18. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  19. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
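
    As a baseline for the methods compared here, the proportional Denton idea can be sketched as an equality-constrained least squares: adjust a quarterly indicator so that annual sums hit the benchmarks while keeping the indicator ratios as smooth as possible. The data below are made up, and this is the classical method, not the entropy-based variant the paper proposes:

    ```python
    # Proportional Denton benchmarking via its KKT system: minimize the
    # squared first differences of z = x/s subject to annual sums of x
    # matching the benchmarks b.
    import numpy as np

    s = np.array([98., 100., 102., 100., 99., 101., 102., 103.])  # indicator
    b = np.array([420., 426.])                                    # annual benchmarks

    n, m = len(s), len(b)
    A = np.zeros((m, n))
    for year in range(m):
        A[year, 4 * year: 4 * year + 4] = 1.0        # quarters -> years

    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)     # first-difference operator
    H = 2.0 * D.T @ D                                # objective in z = x/s
    G = A @ np.diag(s)                               # constraints in z

    kkt = np.block([[H, G.T], [G, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    z = np.linalg.solve(kkt, rhs)[:n]
    x = s * z
    print(x, A @ x)                                  # annual sums match b
    ```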

  20. A New Benchmark For Evaluation Of Graph-Theoretic Algorithms

    CERN Document Server

    Yoo, Andy B; Vaidya, Sheila; Poole, Stephen

    2010-01-01

    We propose a new graph-theoretic benchmark in this paper. The benchmark is developed to address shortcomings of an existing widely-used graph benchmark. We thoroughly studied a large number of traditional and contemporary graph algorithms reported in the literature to gain a clear understanding of their algorithmic and run-time characteristics. Based on this study, we designed a suite of kernels, each of which represents a specific class of graph algorithms. The kernels are designed to capture the typical run-time behavior of target algorithms accurately, while limiting computational and spatial overhead to ensure that their computation finishes in reasonable time. We expect that the developed benchmark will serve as a much needed tool for evaluating different architectures and programming models to run graph algorithms.
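
    As an illustration of the kernel style such a suite adopts, a level-synchronous breadth-first expansion from a seed set captures the irregular memory access pattern shared by many traversal-based graph algorithms. The graph and kernel below are generic illustrations, not the benchmark's actual specification:

    ```python
    # Level-synchronous BFS expansion: assign each reachable vertex the
    # length of its shortest path from the seed set. The adjacency-dict
    # graph is a toy stand-in for a large scale-free graph.
    from collections import deque

    def bfs_levels(adj, seeds):
        level = {v: 0 for v in seeds}
        frontier = deque(seeds)
        while frontier:
            u = frontier.popleft()
            for v in adj.get(u, ()):
                if v not in level:          # first visit fixes the level
                    level[v] = level[u] + 1
                    frontier.append(v)
        return level

    adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
    print(bfs_levels(adj, [0]))
    ```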

  1. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  2. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  3. Monte Carlo and deterministic simulations of activation ratio experiments for 238U(n,f), 238U(n,g) and 238U(n,2n) in the Big Ten benchmark critical assembly

    Energy Technology Data Exchange (ETDEWEB)

    Descalle, M; Clouse, C; Pruet, J

    2009-07-28

    The authors have compared calculations of critical assembly activation ratios using 3 different Monte Carlo codes and one deterministic code. There is excellent agreement. Discrepancies between the different Monte Carlo codes are at the 1-2% level. Notably, the deterministic calculations with 87 groups are also in good agreement with the continuous energy Monte Carlo results. The three codes underestimate the ²³⁸U(n,f) reaction, suggesting that there is room for improvement in the evaluation, or in the evaluations of other reactions influencing the spectrum in Big Ten. Until statistical uncertainties are implemented in Mercury, they strongly advise long runs to guarantee sufficient convergence of the flux at high energies, and they strongly encourage comparing Mercury results to a well-developed and documented code such as MCNP5 and/or COG. It may be that ENDL2008 will be available for use in COG within a year. Finally, it may be worthwhile to add a 'standard' reaction rate tally similar to those implemented in COG and MCNP5, if the goal is to expand the central fission and activation ratios simulations to include isotopes that are not part of the specifications for the assembly material composition.
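
    For context, a central activation ratio of the kind compared here is a ratio of spectrum-averaged reaction rates; with multigroup data it collapses to R = Σ_g σ_f,g φ_g / Σ_g σ_ref,g φ_g. A minimal sketch with illustrative three-group numbers (not Big Ten data):

    ```python
    # Spectrum-averaged activation ratio from multigroup fluxes and
    # cross sections; all numbers below are illustrative placeholders.
    import numpy as np

    phi       = np.array([0.2, 0.5, 0.3])     # normalized group fluxes
    sigma_f   = np.array([0.55, 0.04, 0.01])  # threshold-like 238U(n,f), barns
    sigma_ref = np.array([1.30, 1.25, 1.40])  # reference 235U(n,f)-like, barns

    ratio = (sigma_f @ phi) / (sigma_ref @ phi)
    print("activation ratio:", ratio)
    ```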

  4. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  5. Benchmark job – Watch out!

    CERN Document Server

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  6. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  7. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  8. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre

  9. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and T-H simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  10. CAPRG: sequence assembling pipeline for next generation sequencing of non-model organisms.

    Directory of Open Access Journals (Sweden)

    Arun Rawat

    Our goal is to introduce and describe the utility of a new pipeline "Contigs Assembly Pipeline using Reference Genome" (CAPRG), which has been developed to assemble "long sequence reads" for non-model organisms by leveraging a reference genome of a closely related phylogenetic relative. To facilitate this effort, we utilized two avian transcriptomic datasets generated using ROCHE/454 technology as test cases for CAPRG assembly. We compared the results of CAPRG assembly using a reference genome with the results of existing methods that utilize de novo strategies such as VELVET, PAVE, and MIRA by employing parameter space comparisons (intra-assembling comparison). CAPRG performed as well or better than the existing assembly methods based on various benchmarks for "gene-hunting." Further, CAPRG completed the assemblies in a fraction of the time required by the existing assembly algorithms. Additional advantages of CAPRG included reduced contig inflation resulting in lower computational resources for annotation, and functional identification for contigs that may be categorized as "unknowns" by de novo methods. In addition to providing evaluation of CAPRG performance, we observed that the different assembly (inter-assembly) results could be integrated to enhance the putative gene coverage for any transcriptomics study.

  11. CAPRG: sequence assembling pipeline for next generation sequencing of non-model organisms.

    Science.gov (United States)

    Rawat, Arun; Elasri, Mohamed O; Gust, Kurt A; George, Glover; Pham, Don; Scanlan, Leona D; Vulpe, Chris; Perkins, Edward J

    2012-01-01

    Our goal is to introduce and describe the utility of a new pipeline "Contigs Assembly Pipeline using Reference Genome" (CAPRG), which has been developed to assemble "long sequence reads" for non-model organisms by leveraging a reference genome of a closely related phylogenetic relative. To facilitate this effort, we utilized two avian transcriptomic datasets generated using ROCHE/454 technology as test cases for CAPRG assembly. We compared the results of CAPRG assembly using a reference genome with the results of existing methods that utilize de novo strategies such as VELVET, PAVE, and MIRA by employing parameter space comparisons (intra-assembling comparison). CAPRG performed as well or better than the existing assembly methods based on various benchmarks for "gene-hunting." Further, CAPRG completed the assemblies in a fraction of the time required by the existing assembly algorithms. Additional advantages of CAPRG included reduced contig inflation resulting in lower computational resources for annotation, and functional identification for contigs that may be categorized as "unknowns" by de novo methods. In addition to providing evaluation of CAPRG performance, we observed that the different assembly (inter-assembly) results could be integrated to enhance the putative gene coverage for any transcriptomics study.

  12. Benchmarks to supplant export FPDR (Floating Point Data Rate) calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D.; Brooks, E.; Dongarra, J.; Hayes, A.; Lyon, G.

    1988-06-01

    Because modern computer architectures render application of the FPDR (Floating Point Data Processing Rate) increasingly difficult, there has been increased interest in export evaluation via actual system performances. The report discusses benchmarking of uniprocessor (usually vector) machines for scientific computation (SIMD array processors are not included), and parallel processing and its characterization for export control.

  13. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent

  14. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a basis of global displacements. • The reduced-order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with good accuracy. The methodology is applied to a complex industrial structure made up of a row of seven fuel assemblies with the possibility of collisions between grids, subjected to a seismic loading.

  15. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
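
    As a minimal illustration of the adjustment problem described above, the following sketch scales a monthly series so that it agrees with an annual benchmark total while preserving the within-year movement (a simple pro-rata adjustment; the EIA procedures and the recommended first-differencing approach are more elaborate):

      def benchmark_pro_rata(monthly, annual_benchmark):
          """Scale 12 monthly values so they sum to the annual benchmark,
          preserving the relative movement of the original series."""
          factor = annual_benchmark / sum(monthly)
          return [m * factor for m in monthly]

      monthly = [80, 75, 90, 85, 70, 60, 55, 58, 65, 78, 88, 96]
      adjusted = benchmark_pro_rata(monthly, annual_benchmark=950)
      print(round(sum(adjusted), 6))  # 950.0: agrees with the benchmark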

  16. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  17. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  18. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.
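
    The potential bias can be illustrated with a toy calculation: a project that clears the without-project hurdle may still be inferior to a feasible alternative. A sketch with hypothetical figures:

      # Net benefits relative to doing nothing (hypothetical figures).
      proposed_project = 12.0   # net benefit of the proposed project
      best_alternative = 20.0   # net benefit of a feasible second-best option
      without_project = 0.0     # the traditional, demonstrably low hurdle

      # Traditional with/without test: any positive net benefit passes.
      print(proposed_project - without_project > 0)   # True

      # Difference-making test: compare against the best alternative instead.
      print(proposed_project - best_alternative > 0)  # False: 8 units forgone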

  19. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  20. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  1. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    Science.gov (United States)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production, etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and energy group boundaries into which the multigroup library is cast; self-shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping error in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research will mainly focus on two aspects, namely self-shielding and depletion. The behaviour of the system is determined by composition of resonant

  2. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  3. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    External benchmarks can serve as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantage. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions for the use of external benchmarks. We provide more insight into some of the issues and challenges related to using this mechanism for performance management and advancing competitiveness in organizations.

  4. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  5. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  6. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated problem does, however, provide a calculational check case where the small strain-formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems.

  7. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    Directory of Open Access Journals (Sweden)

    Mariusz Butkiewicz

    2013-01-01

    With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure-activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
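
    Enrichment figures of the kind quoted above can be reproduced from a ranked prediction list: at the score threshold where a given fraction of actives is recovered, enrichment is the precision among the selected compounds divided by the active fraction of the whole library. A minimal sketch with hypothetical data (this is the general metric, not BCL::ChemInfo's exact implementation):

      def enrichment_at_tpr(scores, labels, tpr_cutoff=0.25):
          """Walk down the ranked list until the target fraction of actives
          is recovered, then divide precision by the library's active rate."""
          ranked = sorted(zip(scores, labels), reverse=True)
          n_actives = sum(labels)
          target = tpr_cutoff * n_actives
          tp = fp = 0
          for _, label in ranked:
              tp += label
              fp += 1 - label
              if tp >= target:
                  break
          precision = tp / (tp + fp)
          baseline = n_actives / len(labels)
          return precision / baseline

      scores = [0.95, 0.90, 0.85, 0.70, 0.60, 0.40, 0.30, 0.20]
      labels = [1, 0, 1, 0, 0, 1, 0, 1]   # 1 = confirmed active
      print(enrichment_at_tpr(scores, labels))  # 2.0 for this toy data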

  8. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  9. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  10. An Interactive Assembly Process Planner

    Institute of Scientific and Technical Information of China (English)

    廖华飞; 张林鍹; 肖田元; 曾理; 古月

    2004-01-01

    This paper describes the implementation and performance of the virtual assembly support system (VASS), a new system that provides designers and assembly process engineers with a simulation and visualization environment where they can evaluate the assemblability/disassemblability of products, and thereby use a computer to intuitively create assembly plans and interactively generate assembly process charts. Subassembly planning and assembly priority reasoning techniques were utilized to find heuristic information to improve the efficiency of assembly process planning. Tool planning was implemented to consider tool requirements in the product design stage. New methods were developed to reduce the amount of computation involved in interference checking, as sketched below. As an important feature of the VASS, human interaction was integrated into the whole process of assembly process planning, extending the power of computer reasoning with human expertise, resulting in better assembly plans and better designs.
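
    One common way to cut down interference checking is a cheap bounding-volume pre-test that rules out most part pairs before any exact geometric test is run. A sketch using axis-aligned bounding boxes (an illustrative technique, not necessarily the method used in the VASS):

      def aabb_overlap(a, b):
          """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)).
          Exact interference tests only run on pairs that pass this filter."""
          (amin, amax), (bmin, bmax) = a, b
          return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

      part_a = ((0, 0, 0), (1, 1, 1))
      part_b = ((2, 2, 2), (3, 3, 3))
      part_c = ((0.5, 0.5, 0.5), (1.5, 1.5, 1.5))

      print(aabb_overlap(part_a, part_b))  # False: skip the expensive check
      print(aabb_overlap(part_a, part_c))  # True: run the exact test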

  11. First TPC-Energy Benchmark: Lessons Learned in Practice

    Science.gov (United States)

    Young, Erik; Cao, Paul; Nikolaiev, Mike

    The TPC-Energy specification augments the existing TPC benchmarks with energy metrics. It is designed to help hardware buyers identify energy-efficient equipment that meets both their computational and budgetary requirements. In this paper we discuss our experience in publishing industry's first-ever TPC-Energy result.

  12. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
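
    A platform of this kind typically scores algorithms by how well hash distances separate matching from non-matching content. A minimal sketch of one such test (the bit-string hashes and decision threshold are illustrative assumptions, not PHABS internals):

      def hamming(h1, h2):
          """Normalized Hamming distance between equal-length bit strings."""
          return sum(a != b for a, b in zip(h1, h2)) / len(h1)

      def identification_rate(pairs, threshold=0.25):
          """Fraction of hash pairs correctly classified: distances below the
          threshold are declared 'same content', at or above it 'different'."""
          correct = 0
          for h1, h2, same in pairs:
              correct += (hamming(h1, h2) < threshold) == same
          return correct / len(pairs)

      pairs = [
          ("10110010", "10110011", True),    # near-duplicate content
          ("10110010", "01001101", False),   # unrelated content
      ]
      print(identification_rate(pairs))  # 1.0 on this toy data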

  13. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...... already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013....

  14. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  15. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  16. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study compares these websites against a list of criteria and presents the services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  17. A Commodity Computing Cluster

    Science.gov (United States)

    Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.

    We have assembled a cluster of Intel Pentium-based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz machine to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable" problem, it may present some challenges for as yet unplanned future use. In addition the cluster was used to construct a MIRIAD benchmark, and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.

  18. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 CFR 440.385 (Title 42, Public Health, vol. 4, revised 2010-10-01), General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage: Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  19. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  20. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was therefore undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  1. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity: vector length, core count, and MPI process count. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute-bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
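
    The byte/flop argument reduces to a roofline-style comparison between the machine's balance and the algorithm's operational intensity, as in the following sketch (the machine balance of 0.2 byte/flop and the FMM figure of roughly 0.01 byte/flop are taken from the abstract; the stencil figure is an illustrative assumption):

      def classify(machine_byte_per_flop, algo_flop_per_byte):
          """An algorithm stays compute-bound when it performs more flops per
          byte of traffic than the machine can feed: intensity > 1 / balance."""
          threshold = 1.0 / machine_byte_per_flop  # flop/byte the machine sustains
          return "compute-bound" if algo_flop_per_byte > threshold else "memory-bound"

      machine_balance = 0.2  # byte/flop, as quoted for Xeon Phi, Kepler, BlueGene/Q
      fmm = 100.0            # flop/byte, i.e. roughly 0.01 byte/flop as quoted
      stencil = 0.5          # flop/byte, an illustrative low-intensity kernel

      print("FMM:    ", classify(machine_balance, fmm))      # compute-bound
      print("stencil:", classify(machine_balance, stencil))  # memory-bound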

  2. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  3. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  4. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  5. Solution and computed structure of O-lithium N,N-diisopropyl-P,P-diphenylphosphinic amide. Unprecedented Li-O-Li-O self-assembly of an aryllithium.

    Science.gov (United States)

    Fernández, Ignacio; Oña-Burgos, Pascual; Oliva, Josep M; Ortiz, Fernando López

    2010-04-14

    The structural characterization of an ortho-lithiated diphenylphosphinic amide is described for the first time. Multinuclear magnetic resonance ((1)H, (7)Li, (13)C, (31)P) studies as a function of temperature and concentration employing 1D and 2D methods showed that the anion exists as a mixture of one monomer and two diastereomeric dimers. In the dimers the chiral monomer units are assembled in a like and unlike manner through oxygen-lithium bonds, leading to fluxional ladder structures. This self-assembling mode leads to the formation of Li(2)O(2) four-membered rings, a structural motif unprecedented in aryllithium compounds. DFT computations of representative model compounds of ortho-lithiated phosphinic amide monomer and Li(2)C(2) and Li(2)O(2) dimers with different degrees of solvation by THF molecules showed that Li(2)O(2) dimers are thermodynamically favored with respect to the alternative Li(2)C(2) structures by 4.3 kcal mol(-1) in solvent-free species and by 2.3 kcal mol(-1) when each lithium atom is coordinated to one THF molecule. Topological analysis of the electron density distribution revealed that the Li(2)O(2) four-membered ring is characterized by four carbon-lithium bond paths and one oxygen-oxygen bond path. The latter divides the Li-O-Li-O ring into two Li-O-Li three-sided rings, giving rise to two ring critical points. On the contrary, the bond path network in the Li(2)C(2) core includes a catastrophe point, suggesting that this molecular system can be envisaged as an intermediate in the formation of Li(2)O(2) dimers. The computed (13)C chemical shifts of the C-Li carbons support the existence of monomeric and dimeric species containing only one C-Li bond and are consistent with the existence of tricoordinated lithium atoms in all species in solution.

  6. Structural optimization using human-computer interaction for an aerospace assembly

    Institute of Scientific and Technical Information of China (English)

    刘磊; 刘洪英; 马爱军; 胡清华; 冯雪梅; 石蒙; 董睿; 赵亚雄

    2016-01-01

    To solve the problem of structural optimization of a complicated structure under dynamic response constraints, a human-computer interaction method is proposed that combines the respective strengths of human judgment and computation in structural optimization, and it is applied to the structural optimization of an aerospace assembly. After optimization, the assembly shows remarkable performance improvement: with its mass essentially unchanged, the first integral vibration frequency increases by 41.1% and the maximum acceleration response of the monitored points under sinusoidal vibration test loads drops by 24.3%. The result satisfies the design requirements and demonstrates the effectiveness and feasibility of the method.

  7. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  8. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  9. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  10. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
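
    The search for equilibrium throughput can be outlined as follows: offer a load, observe the forwarding rate, and narrow the interval until the offered load and the forwarding rate meet. A minimal binary-search sketch in which measure_forwarding_rate is a placeholder for the actual traffic generator and device under test:

      def equilibrium_throughput(measure_forwarding_rate, lo, hi, tol=1.0):
          """Binary search for the offered load (packets/s) at which the
          gateway's forwarding rate equals the offered load."""
          while hi - lo > tol:
              offered = (lo + hi) / 2.0
              forwarded = measure_forwarding_rate(offered)
              if forwarded >= offered:   # gateway keeps up: push the load higher
                  lo = offered
              else:                      # gateway is overloaded: back off
                  hi = offered
          return (lo + hi) / 2.0

      # Toy stand-in: a gateway that saturates at 50,000 packets/s.
      fake_gateway = lambda load: min(load, 50_000)
      print(round(equilibrium_throughput(fake_gateway, 0, 200_000)))  # ~50000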

  11. Initiation of assembly of tau(273-284) and its ΔK280 mutant: an experimental and computational study.

    Science.gov (United States)

    Larini, Luca; Gessel, Megan Murray; LaPointe, Nichole E; Do, Thanh D; Bowers, Michael T; Feinstein, Stuart C; Shea, Joan-Emma

    2013-06-21

    The microtubule-associated protein tau is essential for the development and maintenance of the nervous system. Tau dysfunction is associated with a class of diseases called tauopathies, in which tau is found in an aggregated form. This paper focuses on a small aggregating fragment of tau, (273)GKVQIINKKLDL(284), encompassing the (PHF6*) region that plays a central role in tau aggregation. Using a combination of simulations and experiments, we probe the self-assembly of this peptide, with an emphasis on characterizing the early steps of aggregation. Ion-mobility mass spectrometry experiments provide a size distribution of early oligomers, TEM studies provide a time course of aggregation, and enhanced sampling molecular dynamics simulations provide atomistically detailed structural information about this intrinsically disordered peptide. Our studies indicate that a point mutation, as well as the addition of heparin, leads to a shift in the conformations populated by the earliest oligomers, affecting the kinetics of subsequent fibril formation as well as the morphology of the resulting aggregates. In particular, a mutant associated with a K280 deletion (a mutation that causes a heritable form of neurodegeneration/dementia in the context of full-length tau) is seen to aggregate more readily than its wild-type counterpart. Simulations and experiment reveal that the ΔK280 mutant peptide adopts extended conformations to a greater extent than the wild-type peptide, facilitating aggregation through the pre-structuring of the peptide into a fibril-competent structure.

  13. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    International audience; This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample-surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  14. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
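
    As an illustration of the kind of metric the report describes, the sketch below normalizes monthly utility data into an energy use intensity per store and flags outliers against the portfolio average. The field names, the floor-area normalization and the 20% outlier threshold are assumptions for the example, not values taken from the guideline.

        # Illustrative benchmarking of stores from their own utility data.
        def energy_use_intensity(monthly_kwh, floor_area_sqft):
            """Annual electricity use per square foot (kWh/ft^2/yr)."""
            return sum(monthly_kwh) / floor_area_sqft

        stores = {
            "store_a": {"kwh": [38_000] * 12, "area": 4200.0},  # placeholder data
            "store_b": {"kwh": [52_000] * 12, "area": 4300.0},
        }
        eui = {name: energy_use_intensity(s["kwh"], s["area"])
               for name, s in stores.items()}
        portfolio_avg = sum(eui.values()) / len(eui)
        # Stores more than 20% above the portfolio average get extra attention.
        flagged = {name: v for name, v in eui.items() if v > 1.2 * portfolio_avg}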

  15. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  16. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We found no significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  17. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We found no significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  18. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  19. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go well beyond mere bunching or anti-bunching effects.
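
    One example of a classically computable quantifier of the kind the abstract refers to is a two-mode correlator of output occupation numbers, estimated from the measurement record. The sketch below assumes the record is available as one occupation vector per sampled event; this framing is an assumption for illustration, not the paper's exact estimator.

        import numpy as np

        def mode_correlators(samples):
            """samples: (n_events, n_modes) array of occupation numbers.
            Returns C[i, j] = <n_i n_j> - <n_i><n_j>."""
            samples = np.asarray(samples, dtype=float)
            mean_n = samples.mean(axis=0)
            mean_nn = (samples[:, :, None] * samples[:, None, :]).mean(axis=0)
            return mean_nn - np.outer(mean_n, mean_n)

        # The statistics (mean, variance, ...) of the off-diagonal C[i, j]
        # then serve as the signature compared against theory for bosons,
        # fermions or distinguishable particles.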

  20. Benchmarks of Global Clean Energy Manufacturing: Summary of Findings

    Energy Technology Data Exchange (ETDEWEB)

    2017-01-01

    The Benchmarks of Global Clean Energy Manufacturing will help policymakers and industry gain a deeper understanding of global manufacturing of clean energy technologies. Increased knowledge of product supply chains can inform decisions related to manufacturing facilities for extracting and processing raw materials, making the array of required subcomponents, and assembling and shipping the final product. This brochure summarizes key findings from the analysis and includes important figures from the report. The report was prepared by Clean Energy Manufacturing Analysis Center (CEMAC) analysts at the U.S. Department of Energy's National Renewable Energy Laboratory.

  1. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    Energy Technology Data Exchange (ETDEWEB)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-03-24

    An NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program, will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  2. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J.

    1999-08-15

    An NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program, will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  3. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs was conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  4. Benchmarking of neutron production of heavy-ion transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6172 (United States); Ronningen, R. M. [Michigan State Univ., National Superconductiong Cyclotron Laboratory, East Lansing, MI 48824-1321 (United States); Heilbronn, L. [Univ. of Tennessee, 1004 Estabrook Rd., Knoxville, TN 37996-2300 (United States)

    2011-07-01

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  5. Benchmarks for electronically excited states: CASPT2, CC2, CCSD, and CC3

    Science.gov (United States)

    Schreiber, Marko; Silva-Junior, Mario R.; Sauer, Stephan P. A.; Thiel, Walter

    2008-04-01

    A benchmark set of 28 medium-sized organic molecules is assembled that covers the most important classes of chromophores including polyenes and other unsaturated aliphatic compounds, aromatic hydrocarbons, heterocycles, carbonyl compounds, and nucleobases. Vertical excitation energies and one-electron properties are computed for the valence excited states of these molecules using both multiconfigurational second-order perturbation theory, CASPT2, and a hierarchy of coupled cluster methods, CC2, CCSD, and CC3. The calculations are done at identical geometries (MP2/6-31G*) and with the same basis set (TZVP). In most cases, the CC3 results are very close to the CASPT2 results, whereas there are larger deviations with CC2 and CCSD, especially in singlet excited states that are not dominated by single excitations. Statistical evaluations of the calculated vertical excitation energies for 223 states are presented and discussed in order to assess the relative merits of the applied methods. CC2 reproduces the CC3 reference data for the singlets better than CCSD. On the basis of the current computational results and an extensive survey of the literature, we propose best estimates for the energies of 104 singlet and 63 triplet excited states.
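
    The statistical evaluation described here boils down to simple error statistics of one method's excitation energies against a reference. A minimal sketch, with made-up placeholder energies rather than data from the paper:

        import statistics

        # Vertical excitation energies in eV; values are placeholders.
        reference = {"state_1": 4.50, "state_2": 6.10, "state_3": 5.30}  # e.g. CC3
        method    = {"state_1": 4.62, "state_2": 6.05, "state_3": 5.55}  # e.g. CC2

        errors = [method[s] - reference[s] for s in reference]
        mean_error = statistics.mean(errors)                      # signed bias
        mean_abs_error = statistics.mean(abs(e) for e in errors)
        max_abs_error = max(abs(e) for e in errors)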

  6. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they desire to participate in our training and provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  7. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
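
    Metric (i) can be stated compactly: remove each series' mean, then take the root mean square difference, so that constant offsets between a homogenized series and the truth do not count as error. A minimal sketch under that reading:

        import numpy as np

        def centered_rmse(homogenized, truth):
            """Centered RMSE of a homogenized series against the true
            homogeneous series (means removed before differencing)."""
            h = np.asarray(homogenized, dtype=float)
            t = np.asarray(truth, dtype=float)
            return float(np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2)))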

  8. Thermal Analysis of a TREAT Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Papadias, Dionissios [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, Arthur E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-07-09

    The objective of this study was to explore options to reduce peak cladding temperatures despite an increase in peak fuel temperatures. A 3D thermal-hydraulic model for a single TREAT fuel assembly was benchmarked to reproduce results obtained with previous thermal models developed for a TREAT HEU fuel assembly. In exercising this model, and variants thereof depending on the scope of analysis, various options were explored to reduce the peak cladding temperatures.

  9. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes is benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculation effort and presents new findings on the thermal hydraulics of research reactors that have not been reported before. • LOFA result discrepancies from 7% to 20% for coolant and peak clad temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, validate the applied codes, and apply the validated codes to perform comprehensive safety analysis of RRs. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and a loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  10. Verification of the code DYN3D/R with the help of international benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with square fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)
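
    Deviations in such benchmarks are conventionally quantified as a reactivity difference in pcm for the eigenvalue and as relative deviations for the power distribution. The sketch below shows these standard formulas; it is an illustration of the comparison, not code from the report.

        def reactivity_difference_pcm(k_ref, k_code):
            """Reactivity difference (1/k_ref - 1/k_code) expressed in pcm."""
            return (1.0 / k_ref - 1.0 / k_code) * 1.0e5

        def max_relative_power_deviation(p_ref, p_code):
            """Largest relative deviation of assembly/nodal powers, in percent."""
            return max(abs(c - r) / r for r, c in zip(p_ref, p_code)) * 100.0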

  11. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research on benchmarking in health care. The accompanying research aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of the structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation should be promoted in voluntary benchmarking projects that have been demonstrated to follow all the typical steps of a benchmarking process.

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  13. Comparative Analysis of CTF and TRACE Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    OpenAIRE

    2013-01-01

    The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the be...

  14. Benchmarking in External Financial Reporting and Auditing

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    …continuously as part of a benchmarking process. This chapter broadly examines the extent to which the concept of benchmarking can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the area of auditing. The final section of the chapter summarizes… the considerations on benchmarking in connection with both areas….

  15. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and the study of benchmarking frameworks. Approach: Thirty res...

  16. Benchmarking for Controllers: Methods, Techniques and Possibilities

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article sharpens the focus on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project…, before getting started. The difference between results benchmarking and process benchmarking is treated, after which the use of internal and external benchmarking, respectively, is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced….

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  19. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  20. A computational module assembled from different protease family motifs identifies PI PLC from Bacillus cereus as a putative prolyl peptidase with a serine protease scaffold.

    Science.gov (United States)

    Rendón-Ramírez, Adela; Shukla, Manish; Oda, Masataka; Chakraborty, Sandeep; Minda, Renu; Dandekar, Abhaya M; Ásgeirsson, Bjarni; Goñi, Félix M; Rao, Basuthkar J

    2013-01-01

    Proteolytic enzymes have evolved several mechanisms to cleave peptide bonds. These distinct types have been systematically categorized in the MEROPS database. While a BLAST search on these proteases identifies homologous proteins, sequence alignment methods often fail to identify relationships arising from convergent evolution, exon shuffling, and modular reuse of catalytic units. We have previously established a computational method to detect functions in proteins based on the spatial and electrostatic properties of the catalytic residues (CLASP). CLASP identified a promiscuous serine protease scaffold in alkaline phosphatases (AP) and a scaffold recognizing a β-lactam (imipenem) in a cold-active Vibrio AP. Subsequently, we defined a methodology to quantify promiscuous activities in a wide range of proteins. Here, we assemble a module which encapsulates the multifarious motifs used by protease families listed in the MEROPS database. Since APs and proteases are an integral component of outer membrane vesicles (OMV), we sought to query other OMV proteins, like phospholipase C (PLC), using this search module. Our analysis indicated that phosphoinositide-specific PLC from Bacillus cereus is a serine protease. This was validated by protease assays, mass spectrometry and by inhibition of the native phospholipase activity of PI-PLC by the well-known serine protease inhibitor AEBSF (IC50 = 0.018 mM). Edman degradation analysis linked the specificity of the protease activity to a proline in the amino terminal, suggesting that the PI-PLC is a prolyl peptidase. Thus, we propose a computational method of extending protein families based on the spatial and electrostatic congruence of active site residues.

  1. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  2. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  3. Development of solutions to benchmark piping problems. [EPIPE code

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M.; Chang, T.Y.; Prachuktam, S.

    1976-01-01

    Piping analysis is one of the most extensive engineering efforts required for the design of nuclear reactors. Such analysis is normally carried out by use of computer programs which can handle complex piping geometries and various loading conditions (static or dynamic). A brief outline is presented of the theoretical background for the EPIPE program, together with four benchmark problems: two for the static case and two for the dynamic case. The results obtained from EPIPE runs compare well with those available from known analytical solutions or from other independent computer programs.

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, making it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. (Figure 3: number of events per month, data.) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  5. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
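
    For two of the channels named in the abstract, the resulting secret-key capacities take simple closed forms that are widely quoted for this work: 1 - p for the qubit erasure channel and 1 - H2(p) for the qubit dephasing channel. The sketch below restates those formulas; they are reproduced from memory of this line of work and should be checked against the paper itself.

        from math import log2

        def h2(p):
            """Binary entropy function."""
            return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

        def key_capacity_erasure(p):
            """Secret-key capacity of the qubit erasure channel with erasure
            probability p (bits per channel use)."""
            return 1.0 - p

        def key_capacity_dephasing(p):
            """Two-way assisted secret-key capacity of the qubit dephasing
            channel with dephasing probability p."""
            return 1.0 - h2(p)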

  6. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  7. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  8. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  9. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure only to be accessed by a few inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little-more noted than a toaster. These dramati

  10. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  11. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    Full Text Available The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
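
    The headline quantities behind any such speedup discussion are simple: speedup is the serial runtime over the parallel runtime, and parallel efficiency is speedup divided by the thread count. The timings below are placeholders, not measurements from the paper.

        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def efficiency(t_serial, t_parallel, threads):
            return speedup(t_serial, t_parallel) / threads

        t1, t16 = 1240.0, 95.0            # runtimes in seconds (placeholders)
        print(speedup(t1, t16))           # ~13.1x, short of the ideal 16x
        print(efficiency(t1, t16, 16))    # ~0.82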

  12. Heterogeneous Distributed Computing for Computational Aerosciences

    Science.gov (United States)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health, Vol. 4 (2010-10-01): Services (Continued), Medical Assistance Programs, Services: General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  14. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  15. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias, accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  16. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking s

  17. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users should be careful to ensure reproducibility of results based on th

  18. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and the study of benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming’s PDCA and the Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method provided an idea for a company to initiate benchmarking implementation and guides them to achieve the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way, ensuring its success.

  19. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  20. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  1. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  2. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  3. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. This paper discusses the benchmarking analyses carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2, and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  4. Gaia FGK Benchmark Stars: Effective temperatures and surface gravities

    CERN Document Server

    Heiter, U; Gustafsson, B; Korn, A J; Soubiran, C; Thévenin, F

    2015-01-01

    Large Galactic stellar surveys and new generations of stellar atmosphere models and spectral line formation computations need to be subjected to careful calibration and validation and to benchmark tests. We focus on cool stars and aim at establishing a sample of 34 Gaia FGK Benchmark Stars with a range of different metallicities. The goal was to determine the effective temperature and the surface gravity independently from spectroscopy and atmospheric models as far as possible. Fundamental determinations of Teff and logg were obtained in a systematic way from a compilation of angular diameter measurements and bolometric fluxes, and from a homogeneous mass determination based on stellar evolution models. The derived parameters were compared to recent spectroscopic and photometric determinations and to gravity estimates based on seismic data. Most of the adopted diameter measurements have formal uncertainties around 1%, which translate into uncertainties in effective temperature of 0.5%. The measurements of bol...
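
    The fundamental relation behind these determinations is that the bolometric flux received at Earth and the limb-darkened angular diameter fix the effective temperature through f_bol = sigma * Teff^4 * (theta/2)^2. A minimal sketch of that relation (input values are placeholders, not Benchmark Star data):

        from math import pi

        SIGMA_SB = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def teff_from_diameter_and_flux(theta_mas, f_bol):
            """theta_mas: limb-darkened angular diameter in milliarcseconds;
            f_bol: bolometric flux at Earth in W m^-2; returns Teff in K."""
            theta_rad = theta_mas * 1e-3 / 3600.0 * pi / 180.0  # mas -> rad
            return (4.0 * f_bol / (SIGMA_SB * theta_rad ** 2)) ** 0.25

        # Since Teff scales as theta^(-1/2), a 1% diameter uncertainty maps to
        # roughly 0.5% in Teff, consistent with the uncertainties quoted above.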

  5. COMPUTING

    CERN Document Server

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  14. Coded nanoscale self-assembly

    Indian Academy of Sciences (India)

    Prathyush Samineni; Debabrata Goswami

    2008-12-01

    We demonstrate coded self-assembly in nanostructures using the code seeded at the component level through computer simulations. Defects or cavities occur in all natural assembly processes, including crystallization, and our simulations capture this essential aspect under surface-minimization constraints for self-assembly. Our bottom-up approach to nanostructures would provide a new dimension towards nanofabrication and a better understanding of defects and the crystallization process.

  15. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  16. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
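
    A minimal sketch of the temporal-knockout idea follows, assuming a simplified discrete-time SIR over a time-stamped edge list; the paper's seeding scheme, SIS variant and averaging over removal times are omitted, and the function names and parameters below are ours, not the authors'.

      # Hedged sketch: TKO-style score as the drop in mean outbreak size when a
      # node is removed from a temporal contact list. Simplified relative to the paper.
      import random

      def sir_final_size(contacts, nodes, seed, beta=0.5, gamma=0.1, rng=random):
          """contacts: list of (t, u, v) sorted by t; nodes: set of node ids.
          Returns the number of ever-infected nodes."""
          state = dict.fromkeys(nodes, "S")
          state[seed] = "I"
          last_t = None
          for t, u, v in contacts:
              if t != last_t:                          # recovery step per time slice
                  for n, s in state.items():
                      if s == "I" and rng.random() < gamma:
                          state[n] = "R"
                  last_t = t
              for a, b in ((u, v), (v, u)):
                  if state.get(a) == "I" and state.get(b) == "S" and rng.random() < beta:
                      state[b] = "I"
          return sum(s != "S" for s in state.values())

      def tko_scores(contacts, nodes, seed, runs=200):
          """Temporal knockout: drop in mean final size when each node is removed."""
          base = sum(sir_final_size(contacts, nodes, seed) for _ in range(runs)) / runs
          scores = {}
          for n in nodes - {seed}:
              pruned = [e for e in contacts if n not in e[1:]]
              size = sum(sir_final_size(pruned, nodes - {n}, seed)
                         for _ in range(runs)) / runs
              scores[n] = base - size                  # larger drop => more influence
          return scores

      contacts = [(0, "a", "b"), (1, "b", "c"), (2, "c", "d")]
      print(tko_scores(contacts, {"a", "b", "c", "d"}, seed="a", runs=100))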

  17. Benchmark solutions for transport in d-dimensional Markov binary mixtures

    CERN Document Server

    Larmier, Coline; Malvagi, Fausto; Mazzolo, Alain; Zoia, Andrea

    2016-01-01

    Linear particle transport in stochastic media is key to such relevant applications as neutron diffusion in randomly mixed immiscible materials, light propagation through engineered optical materials, and inertial confinement fusion, only to name a few. We extend the pioneering work by Adams, Larsen and Pomraning (recently revisited by Brantley) by considering a series of benchmark configurations for mono-energetic and isotropic transport through Markov binary mixtures in dimension d. The stochastic media are generated by means of Poisson random tessellations in 1d slab, 2d extruded, and full 3d geometry. For each realization, particle transport is performed by Monte Carlo simulation. The distributions of the transmission and reflection coefficients on the free surfaces of the geometry are subsequently estimated, and the average values over the ensemble of realizations are computed. Reference solutions for the benchmark have never be...
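
    As a rough illustration of the kind of calculation involved, the sketch below sets up a 1d rod-model analogue under strong simplifying assumptions: binary Markov mixing from exponential chord lengths, forward/backward transport only, and ensemble-averaged transmission and reflection estimates. None of the cross sections or chord lengths are the paper's; they are placeholders.

      # Hedged sketch of a 1d Markov binary-mixture transport benchmark (rod model).
      import random

      def realization(L, lam=(1.0, 2.0), rng=random):
          """Markov binary mixture on [0, L]: list of (segment_end, material_id)."""
          segs, x, m = [], 0.0, rng.randint(0, 1)
          while x < L:
              x += rng.expovariate(1.0 / lam[m])       # exponential chord length
              segs.append((min(x, L), m))
              m = 1 - m
          return segs

      def transport(segs, L, sig_t=(1.0, 0.5), c=(0.9, 0.2), rng=random):
          """One particle in the rod model; returns 'T', 'R' or 'A'."""
          x, mu, i = 0.0, +1, 0
          while True:
              start = segs[i - 1][0] if i > 0 else 0.0
              end, m = segs[i]
              s = rng.expovariate(sig_t[m])            # distance to next collision
              boundary = end if mu > 0 else start
              if abs(boundary - x) <= s:               # interface reached first
                  x = boundary
                  if x >= L:
                      return "T"                       # transmitted
                  if x <= 0.0:
                      return "R"                       # reflected
                  i += mu                              # resample in new material
              else:
                  x += mu * s                          # collision inside segment
                  if rng.random() < c[m]:
                      mu = rng.choice((-1, 1))         # isotropic rod-model scatter
                  else:
                      return "A"                       # absorbed

      def ensemble(n_real=200, n_part=200, L=10.0):
          counts = {"T": 0, "R": 0, "A": 0}
          for _ in range(n_real):
              segs = realization(L)
              for _ in range(n_part):
                  counts[transport(segs, L)] += 1
          n = n_real * n_part
          return counts["T"] / n, counts["R"] / n      # ensemble-averaged <T>, <R>

      print(ensemble())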

  18. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months.   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  20. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month for 2012.   Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  1. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  3. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days.   For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  4. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  5. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  6. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    ...precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  7. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report for AOARD Grant FA2386-12-1-4025, "Benchmarking attosecond physics with atomic hydrogen"; period covered: 12 Mar 12 – 11 Mar 15.

  8. On Constraints in Assembly Planning

    Energy Technology Data Exchange (ETDEWEB)

    Calton, T.L.; Jones, R.E.; Wilson, R.H.

    1998-12-17

    Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
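
    A minimal sketch of this accept/reject idea follows, with an invented Operation type and toy rules; it illustrates the pattern of constraints as predicate procedures over proposed part-mating operations, and is not the system described above.

      # Hedged sketch: constraints as accept/reject predicates over proposed
      # mating operations; the Operation type and sample rules are invented.
      from dataclasses import dataclass
      from typing import Callable, List, Set

      @dataclass
      class Operation:
          moving: Set[str]           # parts moved in this mating operation
          target: Set[str]           # subassembly they are mated to
          direction: str             # e.g. "+z"

      Constraint = Callable[[Operation], bool]   # accept (True) or reject (False)

      def requires_fixture(part: str) -> Constraint:
          """Tactical rule: 'part' must stay fixtured, i.e. never be the moving side."""
          return lambda op: part not in op.moving

      def mate_before(first: str, second: str, placed: Set[str]) -> Constraint:
          """Strategic ordering rule: 'second' may only be mated once 'first' is placed."""
          return lambda op: second not in op.moving or first in placed

      def admissible(op: Operation, constraints: List[Constraint]) -> bool:
          """The planner proposes op; every constraint procedure must accept it."""
          return all(check(op) for check in constraints)

      placed = {"base", "axle"}
      rules = [requires_fixture("base"), mate_before("axle", "wheel", placed)]
      print(admissible(Operation({"wheel"}, {"base", "axle"}, "+z"), rules))  # True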

  9. Protein Structure Determination by Assembling Super-Secondary Structure Motifs Using Pseudocontact Shifts.

    Science.gov (United States)

    Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas

    2017-03-07

    Computational and nuclear magnetic resonance hybrid approaches provide efficient tools for 3D structure determination of small proteins, but currently available algorithms struggle to perform with larger proteins. Here we demonstrate a new computational algorithm that assembles the 3D structure of a protein from its constituent super-secondary structural motifs (Smotifs) with the help of pseudocontact shift (PCS) restraints for backbone amide protons, where the PCSs are produced from different metal centers. The algorithm, DINGO-PCS (3D assembly of Individual Smotifs to Near-native Geometry as Orchestrated by PCSs), employs the PCSs to recognize, orient, and assemble the constituent Smotifs of the target protein without any other experimental data or computational force fields. Using a universal Smotif database, the DINGO-PCS algorithm exhaustively enumerates any given Smotif. We benchmarked the program against ten different protein targets ranging from 100 to 220 residues with different topologies. For nine of these targets, the method was able to identify near-native Smotifs.
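
    For orientation, the PCS restraint itself has a closed form. The sketch below evaluates the standard equation PCS = [dchi_ax*(3cos^2(theta)-1) + 1.5*dchi_rh*sin^2(theta)*cos(2*phi)] / (12*pi*r^3) in the magnetic susceptibility tensor frame; the tensor magnitudes and coordinates are invented for illustration, and this is not the DINGO-PCS code.

      # Hedged sketch of the pseudocontact shift equation underlying PCS restraints.
      import numpy as np

      def pcs_ppm(nucleus, metal, dchi_ax, dchi_rh, frame=np.eye(3)):
          """Coordinates in metres, dchi_ax/dchi_rh in m^3; returns the shift in ppm.
          'frame' rotates lab coordinates into the susceptibility tensor frame."""
          v = frame.T @ (np.asarray(nucleus, float) - np.asarray(metal, float))
          r = np.linalg.norm(v)
          cos2_t = (v[2] / r) ** 2
          phi = np.arctan2(v[1], v[0])
          geo = dchi_ax * (3 * cos2_t - 1) + 1.5 * dchi_rh * (1 - cos2_t) * np.cos(2 * phi)
          return geo / (12 * np.pi * r**3) * 1e6

      # an amide proton 10 Angstrom along the tensor z-axis (invented values):
      print(pcs_ppm([0, 0, 10e-10], [0, 0, 0], dchi_ax=10e-32, dchi_rh=2e-32))  # ~5 ppm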

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS and its components are now also installed at CERN, in addition to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  11. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    ...these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use...

  12. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  13. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic...

  14. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  15. XWeB: the XML Warehouse Benchmark

    CERN Document Server

    Mahboubi, Hadj

    2011-01-01

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  16. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 has been performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  17. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices (see Fig. 1). The dimensions and abilities of the platform have been chosen in order to reflect typical manual...

  18. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
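
    A minimal sketch of the kind of scoring system described above follows: data-model mismatches are normalized per benchmark and combined into a single skill score, with an a priori acceptability threshold. The benchmarks, weights and scoring rule below are illustrative choices, not the framework's prescription.

      # Hedged sketch of a benchmark scoring system for land models.
      import numpy as np

      def nrmse(model, obs):
          """Root-mean-square error normalized by the observed variability."""
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          return np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)

      def benchmark_score(model_outputs, benchmarks, weights, threshold=1.0):
          scores, flags = {}, []
          for name, obs in benchmarks.items():
              e = nrmse(model_outputs[name], obs)
              scores[name] = np.exp(-e)               # 1 = perfect, -> 0 with error
              if e > threshold:                       # a priori acceptability test
                  flags.append(name)
          total = sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
          return total, scores, flags

      benchmarks = {"GPP": [1.0, 2.0, 3.0], "LE": [40.0, 60.0, 80.0]}   # invented
      model      = {"GPP": [1.1, 1.8, 3.4], "LE": [55.0, 58.0, 70.0]}   # invented
      print(benchmark_score(model, benchmarks, {"GPP": 2.0, "LE": 1.0}))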

  19. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Full Text Available Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  20. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  1. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties

  2. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  3. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  4. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.
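
    The simplest case in that progression has a one-line answer: the uncollided flux behind a slab falls off exponentially with depth. The sketch below is a generic illustration with placeholder material data, not an OLTARIS calculation.

      # Hedged sketch: uncollided flux of a monoenergetic, mono-directional beam
      # behind a slab, phi(x) = phi0 * exp(-Sigma_t * x). Material data invented.
      import math

      def uncollided(phi0, sigma_t_per_cm, thickness_cm):
          return phi0 * math.exp(-sigma_t_per_cm * thickness_cm)

      # e.g. a 30 cm slab with an assumed total macroscopic cross section of 0.1 /cm:
      print(uncollided(phi0=1.0, sigma_t_per_cm=0.1, thickness_cm=30.0))  # ~0.05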

  5. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
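
    A minimal sketch (not the paper's benchmark) of how such memory-subsystem contention can be probed follows: a STREAM-like triad kernel is run in 1 to N processes on arrays sized to exceed cache; when aggregate bandwidth stops scaling with process count, the cores are contending for the shared memory subsystem.

      # Hedged sketch of a memory-contention microbenchmark (STREAM-like triad).
      import multiprocessing as mp
      import time
      import numpy as np

      def triad(n=20_000_000, iters=10):
          b, c = np.random.rand(n), np.random.rand(n)
          a = np.empty(n)
          t0 = time.perf_counter()
          for _ in range(iters):
              np.multiply(c, 3.0, out=a)   # a = 3*c
              np.add(a, b, out=a)          # a = b + 3*c  (the STREAM triad)
          dt = time.perf_counter() - t0
          traffic = 5 * 8 * n * iters      # ~5 array reads/writes of 8-byte doubles
          return traffic / dt / 1e9        # approximate GB/s seen by this process

      if __name__ == "__main__":
          for nproc in (1, 2, 4):
              with mp.Pool(nproc) as pool:
                  per_proc = pool.map(triad, [20_000_000] * nproc)
              print(f"{nproc} processes: {sum(per_proc):6.1f} GB/s aggregate")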

  6. National Energy Software Center: benchmark problem book. Revision

    Energy Technology Data Exchange (ETDEWEB)

    None

    1985-12-01

    Computational benchmarks are given for the following problems: (1) finite-difference, diffusion theory calculation of a highly nonseparable reactor; (2) iterative solutions for a multigroup two-dimensional neutron diffusion HTGR problem; (3) reference solution to the two-group diffusion equation; (4) one-dimensional neutron transport transient solutions; (5) a test of the capabilities of multigroup multidimensional kinetics codes in a heavy-water reactor; (6) a test of the capabilities of multigroup neutron diffusion codes in an LMFBR; and (7) two-dimensional PWR models.
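
    As a flavor of such computational benchmarks, here is a minimal one-group, one-dimensional finite-difference diffusion k-eigenvalue calculation, checked against the one-group analytic buckling result; the cross sections are invented, not taken from the benchmark book.

      # Hedged sketch: finite-difference diffusion k-eff by power iteration.
      import math
      import numpy as np

      def keff_slab(width=100.0, n=200, D=1.0, sig_a=0.01, nu_sig_f=0.012):
          """One-group, 1-D diffusion k-eigenvalue with zero-flux edges."""
          h = width / n
          main = np.full(n - 1, 2.0 * D / h**2 + sig_a)   # interior mesh points
          off = np.full(n - 2, -D / h**2)
          A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
          phi, k = np.ones(n - 1), 1.0
          for _ in range(200):                            # power iteration
              phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
              k *= phi_new.sum() / phi.sum()
              phi = phi_new
          return k

      # analytic one-group check: k = nu*Sigma_f / (Sigma_a + D*B^2), B = pi/width
      b2 = (math.pi / 100.0) ** 2
      print(keff_slab(), 0.012 / (0.01 + 1.0 * b2))       # should nearly agree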

  7. Reference Solutions for Benchmark Turbulent Flows in Three Dimensions

    Science.gov (United States)

    Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.

    2016-01-01

    A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration and a supersonic internal flow through a square duct. Reference solutions are computed for Reynolds Averaged Navier Stokes equations with the Spalart-Allmaras turbulence model using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely-used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
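
    Grid-convergence studies of this kind typically report an observed order of accuracy and a Richardson-extrapolated limit value; a minimal sketch of that standard machinery follows, with invented drag coefficients standing in for code output.

      # Hedged sketch: observed order of accuracy and Richardson extrapolation
      # from three consistently refined grids (values below are invented).
      import math

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          """p from solution differences on grids refined by a constant ratio r."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

      def richardson(f_medium, f_fine, p, r=2.0):
          """Estimate of the grid-converged value from the two finest grids."""
          return f_fine + (f_fine - f_medium) / (r**p - 1.0)

      drag = (0.02871, 0.02846, 0.02840)    # invented coarse/medium/fine values
      p = observed_order(*drag)
      print(p, richardson(drag[1], drag[2], p))   # ~2nd order, extrapolated limit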

  8. Influence of the solvent on the self-assembly of a modified amyloid beta peptide fragment. II. NMR and computer simulation investigation.

    Science.gov (United States)

    Hamley, I W; Nutt, D R; Brown, G D; Miravet, J F; Escuder, B; Rodríguez-Llansola, F

    2010-01-21

    The conformation of a model peptide AAKLVFF based on a fragment of the amyloid beta peptide Abeta16-20, KLVFF, is investigated in methanol and water via solution NMR experiments and molecular dynamics computer simulations. In previous work, we have shown that AAKLVFF forms peptide nanotubes in methanol and twisted fibrils in water. Chemical shift measurements were used to investigate the solubility of the peptide as a function of concentration in methanol and water. This enabled the determination of critical aggregation concentrations. The solubility was lower in water. In dilute solution, diffusion coefficients revealed the presence of intermediate aggregates in concentrated solution, coexisting with NMR-silent larger aggregates, presumed to be beta-sheets. In water, diffusion coefficients did not change appreciably with concentration, indicating the presence mainly of monomers, coexisting with larger aggregates in more concentrated solution. Concentration-dependent chemical shift measurements indicated a folded conformation for the monomers/intermediate aggregates in dilute methanol, with unfolding at higher concentration. In water, an antiparallel arrangement of strands was indicated by certain ROESY peak correlations. The temperature-dependent solubility of AAKLVFF in methanol was well described by a van't Hoff analysis, providing a solubilization enthalpy and entropy. This pointed to the importance of solvophobic interactions in the self-assembly process. Molecular dynamics simulations constrained by NOE values from NMR suggested disordered reverse turn structures for the monomer, with an antiparallel twisted conformation for dimers. To model the beta-sheet structures formed at higher concentration, possible model arrangements of strands into beta-sheets with parallel and antiparallel configurations and different stacking sequences were used as the basis for MD simulations; two particular arrangements of antiparallel beta-sheets were found to be stable, one
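
    A minimal sketch of the van't Hoff analysis mentioned above: fitting ln(solubility) against 1/T yields the solubilization enthalpy and entropy from the slope and intercept. The temperature and solubility values below are invented, not the paper's data.

      # Hedged sketch: van't Hoff fit, ln(c) = -dH/(R*T) + dS/R.
      import numpy as np

      R = 8.314  # gas constant [J mol^-1 K^-1]

      T = np.array([280.0, 290.0, 300.0, 310.0])       # K (illustrative)
      c = np.array([0.8e-3, 1.5e-3, 2.6e-3, 4.3e-3])   # mol/L (illustrative)

      slope, intercept = np.polyfit(1.0 / T, np.log(c), 1)
      dH = -R * slope          # solubilization enthalpy [J/mol]
      dS = R * intercept       # solubilization entropy  [J/(mol K)]
      print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")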

  9. NERSC-6 Workload Analysis and Benchmark Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Antypas, Katie; Shalf, John; Wasserman, Harvey

    2008-08-29

    This report describes efforts carried out during early 2008 to determine some of the science drivers for the "NERSC-6" next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and descriptions of key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  12. Benchmarking of hospital information systems – a comparative analysis of German-language benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The most frequent benchmarking subjects are the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  13. Automatic generation of reaction energy databases from highly accurate atomization energy benchmark sets.

    Science.gov (United States)

    Margraf, Johannes T; Ranasinghe, Duminda S; Bartlett, Rodney J

    2017-03-31

    In this contribution, we discuss how reaction energy benchmark sets can automatically be created from arbitrary atomization energy databases. As an example, over 11 000 reaction energies derived from the W4-11 database, as well as some relevant subsets are reported. Importantly, there is only very modest computational overhead involved in computing >11 000 reaction energies compared to 140 atomization energies, since the rate-determining step for either benchmark is performing the same 140 quantum chemical calculations. The performance of commonly used electronic structure methods for the new database is analyzed. This allows investigating the relationship between the performances for atomization and reaction energy benchmarks based on an identical set of molecules. The atomization energy is found to be a weak predictor for the overall usefulness of a method. The performance of density functional approximations in light of the number of empirically optimized parameters used in their design is also discussed.
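
    A minimal sketch of the construction described above: for a stoichiometry-balanced reaction, the atomic reference energies cancel, so E_rxn = sum(AE, reactants) - sum(AE, products). The atomization-energy values below are rough placeholders, not W4-11 entries.

      # Hedged sketch: reaction energies from an atomization-energy database.
      def reaction_energy(atomization, reactants, products):
          """reactants/products: dict of molecule -> stoichiometric coefficient."""
          side_sum = lambda side: sum(n * atomization[m] for m, n in side.items())
          return side_sum(reactants) - side_sum(products)

      atomization = {"H2": 104.2, "O2": 118.0, "H2O": 219.3}   # kcal/mol, rough
      # 2 H2 + O2 -> 2 H2O; negative result means the reaction is exothermic
      print(reaction_energy(atomization, {"H2": 2, "O2": 1}, {"H2O": 2}))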

  14. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HB2006 Workshop on computer codes benchmarking.

  15. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  16. SULEC: Benchmarking a new ALE finite-element code

    Science.gov (United States)

    Buiter, S.; Ellis, S.

    2012-04-01

    We have developed a 2-D/3-D arbitrary Lagrangian-Eulerian (ALE) finite-element code, SULEC, based on known techniques from the literature. SULEC is successful in tackling many of the problems faced by numerical models of lithosphere and mantle processes, such as the combination of viscous, elastic, and plastic rheologies, the presence of a free surface, the contrast in viscosity between the lithosphere and the underlying asthenosphere, and the occurrence of large deformations including viscous flow and offset on shear zones. The aim of our presentation is (1) to describe SULEC, and (2) to present the set of analytical and numerical benchmarks that we use to continuously test our code. SULEC solves the incompressible momentum equation coupled with the energy equation. It uses a structured mesh built of quadrilateral or brick elements that can vary in size in all dimensions, allowing high resolution to be achieved where required. The elements are either linear in velocity with constant pressure, or quadratic in velocity with linear pressure. An accurate pressure field is obtained through an iterative penalty (Uzawa) formulation. Material properties are carried on tracer particles that are advected through the Eulerian mesh. Shear elasticity is implemented following the approach of Moresi et al. [J. Comp. Phys. 184, 2003], brittle materials deform following a Drucker-Prager criterion, and viscous flow is by temperature- and pressure-dependent power-law creep. The top boundary of our models is a true free surface (with free surface stabilisation) on which simple surface process models may be imposed. We use a set of benchmarks that test viscous, viscoelastic, elastic and plastic deformation, temperature advection and conduction, free surface behaviour, and pressure computation. Part of our benchmark set is automated, allowing easy testing of new code versions. Examples include Poiseuille flow, Couette flow, Stokes flow, relaxation of viscous topography, viscous pure shear
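
    Benchmarks of this kind typically reduce to comparing a numerical field against an analytical solution. As an illustration (a sketch in this spirit, not SULEC's own test harness), a plane Poiseuille check might look like:

        import math

        def poiseuille_u(y, h, dpdx, mu):
            """Analytical plane Poiseuille velocity across a channel of height h."""
            return (-dpdx / (2.0 * mu)) * y * (h - y)

        def rel_l2_error(numerical, analytical):
            num = math.sqrt(sum((n - a) ** 2 for n, a in zip(numerical, analytical)))
            den = math.sqrt(sum(a ** 2 for a in analytical))
            return num / den

        h, dpdx, mu = 1.0, -1.0, 1.0
        ys = [i * h / 10.0 for i in range(11)]
        exact = [poiseuille_u(y, h, dpdx, mu) for y in ys]
        solver_u = [u * 1.001 for u in exact]   # stand-in for the code's output
        assert rel_l2_error(solver_u, exact) < 1e-2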

  17. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into account when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.

  18. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  19. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  20. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages of internal benchmarking technology is formed, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. In a crisis business environment, the business activity of domestic enterprises has to focus on the best success factors of their structural units, using standardized assessment of their performance and applying their innovative experience in practice. Internal benchmarking is a modern method of satisfying those needs; according to Bain & Co, it is one of the three most common methods of business management. The features and benefits of internal benchmarking are defined in the article, and the sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods with respect to reference objects, in order to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of benchmarking, identify their characteristics, and study the appropriate areas of use and the methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities which are available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  1. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of the effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider contaminant exposure only through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  2. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
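
    For orientation, the stencil kernel class studied here can be written in a few lines. This plain-Python sketch only shows the arithmetic pattern; the paper's Cell implementations map such sweeps onto the SPEs' local stores and SIMD units, which this sketch does not attempt to model.

        def jacobi_step(u, alpha=0.25):
            """One 3-point stencil sweep over a 1-D array (fixed end values)."""
            interior = [u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
                        for i in range(1, len(u) - 1)]
            return [u[0]] + interior + [u[-1]]

        u = [0.0] * 32 + [1.0] + [0.0] * 32     # initial spike that diffuses outward
        for _ in range(100):
            u = jacobi_step(u)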

  3. Planeación asistida por computadora del proceso tecnológico de ensamble // Computer-aided planning of the assembly technological process.

    Directory of Open Access Journals (Sweden)

    L. L. Tomás García

    2008-01-01

    Full Text Available This work is devoted to the multicriteria optimization of mechanical assembly process planning starting from the three-dimensional geometric model of the assembly. It is supported by an approach that integrates both geometric information and technological restrictions of the assembly process. In its development it was demonstrated that, once the three-dimensional geometric model of an assembly is known, the application of technological and geometric criteria to the inverse disassembly process, and its subsequent treatment with evolutionary methods, generates mechanical assembly plans close to the optima according to the decision maker's system of preferences. The integration of the information makes it possible to reduce the number of sequences to be evaluated and of elements to be processed, avoiding the generation and evaluation of all possible sequences, with the consequent reduction of processing time. As a result of the application of the proposed integrated model, a mechanical assembly process plan is obtained with a reduction of the assembly time, since the sequences obtained reduce the number of changes of assembly direction, of tools and of workstations, and minimize the distance to be travelled due to workstation changes. This is achieved by means of a multiobjective optimization model based on genetic algorithms. Keywords: mechanical assembly, genetic algorithms, multiobjective optimization. Abstract: This work deals with the combinatorial problem of generating and optimizing technologically feasible assembly sequences and process planning involving tools and work places. The assembly sequences and related technological decisions are obtained from a 3D model of the assembled parts based on mating conditions along with a set of technological criteria
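
    The genetic-algorithm search over sequences lends itself to a compact illustration. The following is a minimal, hypothetical sketch (not the authors' model): candidate sequences are permutations scored by penalties for changes of assembly direction and of tool between consecutive operations; the geometric precedence constraints derived from the disassembly analysis, on which the paper relies, are omitted for brevity. All part, direction, and tool names are invented.

        import random

        DIRECTION = {"A": "z", "B": "z", "C": "x", "D": "x"}   # part -> insertion axis
        TOOL = {"A": "t1", "B": "t1", "C": "t2", "D": "t2"}    # part -> required tool

        def cost(seq):
            """Penalty: one point per direction change plus one per tool change."""
            return sum((DIRECTION[a] != DIRECTION[b]) + (TOOL[a] != TOOL[b])
                       for a, b in zip(seq, seq[1:]))

        def mutate(seq):
            i, j = random.sample(range(len(seq)), 2)
            s = list(seq)
            s[i], s[j] = s[j], s[i]
            return s

        pop = [random.sample(list(DIRECTION), len(DIRECTION)) for _ in range(20)]
        for _ in range(200):                    # keep the best half, refill with mutants
            pop.sort(key=cost)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
        pop.sort(key=cost)
        print(pop[0], cost(pop[0]))             # best sequence found and its penalty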

  4. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan, H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and understand the performance requirements of scientific applications, and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of little interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer

  5. CFD Modeling of Thermal Manikin Heat Loss in a Comfort Evaluation Benchmark Test

    DEFF Research Database (Denmark)

    Nilsson, Håkan O.; Brohus, Henrik; Nielsen, Peter V.

    2007-01-01

    and companies still use several in-house codes for their calculations. The validation and association with human perception and heat losses in reality are consequently very difficult to make. This paper provides requirements for the design and development of computer manikins and CFD benchmark tests...

  6. United assembly algorithm for optical burst switching

    Institute of Scientific and Technical Information of China (English)

    Jinhui Yu(于金辉); Yijun Yang(杨教军); Yuehua Chen(陈月华); Ge Fan(范戈)

    2003-01-01

    Optical burst switching (OBS) is a promising optical switching technology. The burst assembly algorithm controls burst assembly, which significantly impacts the performance of an OBS network. This paper presents a new assembly algorithm, the united assembly algorithm, which is more practical than conventional algorithms. In addition, some factors affecting the selection of this algorithm's parameters are discussed, and the performance of the algorithm is studied by computer simulation.
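
    A common baseline that such assembly algorithms generalize is the hybrid timer-plus-threshold assembler: a burst is emitted when either its size reaches a threshold or a maximum assembly delay has elapsed. A minimal sketch with illustrative parameters (not the paper's united algorithm):

        import time

        class BurstAssembler:
            """Hybrid assembler: emit a burst on size threshold or timeout."""
            def __init__(self, max_bytes=40000, max_delay=0.005):
                self.max_bytes, self.max_delay = max_bytes, max_delay
                self.packets, self.size, self.t0 = [], 0, None

            def add(self, packet_len):
                if self.t0 is None:
                    self.t0 = time.monotonic()
                self.packets.append(packet_len)
                self.size += packet_len
                # NB: a real assembler would also fire on a timer, not only on arrivals.
                if (self.size >= self.max_bytes
                        or time.monotonic() - self.t0 >= self.max_delay):
                    return self.flush()
                return None

            def flush(self):
                burst = self.packets
                self.packets, self.size, self.t0 = [], 0, None
                return burst

        asm = BurstAssembler()
        bursts = [b for b in (asm.add(1500) for _ in range(100)) if b]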

  7. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  8. Benchmarking ontologies: bigger or better?

    Directory of Open Access Journals (Sweden)

    Lixia Yao

    Full Text Available A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
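
    In the spirit of the breadth idea, one can ask what fraction of a corpus vocabulary an ontology covers. A deliberately simplified sketch (an illustration only, not the authors' exact metric family):

        def breadth(ontology_terms, corpus_tokens):
            """Fraction of the corpus vocabulary covered by ontology terms."""
            vocab = {t.lower() for t in corpus_tokens}
            return len(vocab & {t.lower() for t in ontology_terms}) / len(vocab)

        ontology = {"fever", "cough", "influenza"}
        corpus = "patient presented with fever and persistent cough".split()
        print(round(breadth(ontology, corpus), 3))   # 2 of 7 vocabulary items covered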

  9. Benchmarking ontologies: bigger or better?

    Science.gov (United States)

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A; Rzhetsky, Andrey

    2011-01-13

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.

  10. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  11. FGK Benchmark Stars: A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by homogeneously analyzing a spectral library of benchmark stars with up to seven different methods. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of the benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  12. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance. Employed professionals will further be more open to consider bureaucratic benchmarking information provided by the administration if they are aware that their professional performance is low. To test our hypotheses, we rely on a sample of archival public professional performance data for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  13. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performances of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) makes it possible to generate various ad hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free software in Java that can be interfaced with most existing relational database management systems. A sample usag...

  14. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  15. Investigating the Transonic Flutter Boundary of the Benchmark Supercritical Wing

    Science.gov (United States)

    Heeg, Jennifer; Chwalowski, Pawel

    2017-01-01

    This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing configuration. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results focus on understanding the dip in the transonic flutter boundary at a single Mach number (0.74), exploring an angle of attack range of -1 to 8 and dynamic pressures from wind-off to beyond flutter onset. The rigid analysis results are examined for insights into the behavior of the aeroelastic system. Both static and dynamic aeroelastic simulation results are also examined.

  16. Analysis on Teaching of the Computer Assembly and Maintenance Public Elective Course

    Institute of Scientific and Technical Information of China (English)

    刘延锋; 徐晓昭

    2013-01-01

    This paper points out the main problems in teaching the computer assembly and maintenance public elective course to non-computer majors in higher vocational colleges: an outdated curriculum system, a lack of practical training conditions, lagging teaching materials, and a single teaching mode. Based on an analysis of these problems, corresponding teaching suggestions are given: formulate a suitable syllabus, apply virtual machine technology to carry out experiments and practical training, collect and supplement knowledge of new computer technologies, and use a variety of teaching methods. In addition, the paper suggests improving students' computer assembly and maintenance skills in their spare time through social practice surveys, computer assembly and maintenance competitions, and the establishment of a campus volunteer repair team.

  17. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
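
    The band-rating idea is easy to make concrete. In this hypothetical sketch the thresholds are invented for illustration; the paper derives its own bands.

        BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D")]  # litres/person/day

        def band(daily_litres, occupants):
            """Map a household's per-capita consumption onto a rating band."""
            per_capita = daily_litres / occupants
            for limit, label in BANDS:
                if per_capita <= limit:
                    return label
            return "E"

        print(band(daily_litres=330, occupants=3))   # 110 l/person/day -> "B"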

  18. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
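
    Goal (1), characterizing a kernel by its computational intensity, can be sketched in a few lines. The AXPY kernel and byte counts below are illustrative, and a pure-Python timing reflects interpreter overhead rather than hardware limits; the point is the bookkeeping, not the absolute numbers.

        import time

        def axpy(a, x, y):
            """Elementwise a*x + y (two flops per element)."""
            return [a * xi + yi for xi, yi in zip(x, y)]

        n = 1_000_000
        x, y = [1.0] * n, [2.0] * n
        t0 = time.perf_counter()
        axpy(2.0, x, y)
        dt = time.perf_counter() - t0
        flops = 2 * n                    # one multiply and one add per element
        bytes_moved = 3 * 8 * n          # read x, read y, write result (8-byte floats)
        print(f"{flops / dt:.3e} flop/s, intensity {flops / bytes_moved:.3f} flop/byte")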

  19. Planificación y optimización asistida por computadora de secuencias de ensamble mecánico // Computer-aided planning and optimization of mechanical assembly sequences.

    Directory of Open Access Journals (Sweden)

    L. L. Tomás-García

    2009-01-01

    Full Text Available This work deals with the generation, planning and optimization of mechanical assembly sequences starting from the three-dimensional geometric model of the assembly. It is supported by an approach that integrates both geometric information and technological restrictions of the assembly process. In its development it was demonstrated that, once the three-dimensional geometric model of an assembly is known, the application of technological and geometric criteria to the inverse disassembly process, and its subsequent treatment with evolutionary algorithms, generates an optimized plan of the mechanical assembly process. The integration of the information makes it possible to reduce the number of sequences to be evaluated and of elements to be processed, avoiding the generation and evaluation of all possible sequences, with the consequent reduction of processing time. As a result of the application of the proposed integrated model, a mechanical assembly process plan is obtained with a reduction of the assembly time, since the assembly sequences obtained reduce the number of changes of assembly direction, of tools and of workstations, and minimize the distance to be travelled due to workstation changes. This is achieved by means of a multiobjective optimization model based on evolutionary algorithms. Keywords: mechanical assembly, genetic algorithms, multiobjective optimization. Abstract: This work deals with the combinatorial problem of generating and optimizing feasible assembly sequences and doing the process planning involving tools and work places. The assembly sequences are obtained from a 3D model of the assembled parts based on mating conditions along with a set of technological criteria, which allows automatically analyzing and generating the sequences. The generated

  20. Benchmarking of municipal early retirement pension practice [Benchmarking af kommunernes førtidspensionspraksis]

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in early retirement pension cases. Together with the annual statistics, results from a benchmarking model are published, in which the number of awards in an individual municipality is compared with the expected number of awards had the municipality followed the same decision practice as the "average municipality", correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  1. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  2. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States)]; Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)]

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.
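
    The first question can be answered empirically with a per-machine timing harness. A minimal sketch under stated assumptions (the variants below are trivial stand-ins for, e.g., competing parallel transform strategies):

        import functools
        import time

        def best_time(fn, *args, repeats=3):
            """Best-of-N wall-clock time for one call of fn."""
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn(*args)
                best = min(best, time.perf_counter() - t0)
            return best

        variants = {  # stand-ins for alternative algorithm implementations
            "builtin": sum,
            "reduce": lambda xs: functools.reduce(lambda a, b: a + b, xs),
        }
        data = list(range(500_000))
        timings = {name: best_time(fn, data) for name, fn in variants.items()}
        print("fastest on this machine:", min(timings, key=timings.get))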

  3. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan, H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and understand the performance requirements of scientific applications, and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of little interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific

  4. Sabot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Bzorgi, Fariborz

    2016-11-08

    A sabot assembly includes a projectile and a housing dimensioned and configured for receiving the projectile. An air pressure cavity having a cavity diameter is disposed between a front end and a rear end of the housing. Air intake nozzles are in fluid communication with the air pressure cavity and each has a nozzle diameter less than the cavity diameter. In operation, air flows through the plurality of air intake nozzles and into the air pressure cavity upon firing of the projectile from a gun barrel to pressurize the air pressure cavity for assisting in separation of the housing from the projectile upon the sabot assembly exiting the gun barrel.

  5. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Lee, Yi-kang [French Atomic Energy Commission (CEA), Saclay (France)]; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette]; Authier, Nicolas [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Piot, Jerome [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Jacquet, Xavier [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)]

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  6. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Isbell, Kimberly McMahan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Lee, Yi-kang [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Gagnier, Emmanuel [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Authier, Nicolas [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Piot, Jerome [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Jacquet, Xavier [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Rousseau, Guillaume [Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France)]; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)]

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  7. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [ORNL]; Isbell, Kimberly McMahan [ORNL]; Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette]; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette]; Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille]; Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille]; Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille]; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille]; Reynolds, Kevin H. [Y-12 National Security Complex]

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  8. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Lee, Yi-kang [French Atomic Energy Commission (CEA), Saclay (France)]; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette]; Authier, Nicolas [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Piot, Jerome [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Jacquet, Xavier [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies]; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)]

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  9. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;

    2014-01-01

    Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene...

  10. Seven Benchmarks for Information Technology Investment.

    Science.gov (United States)

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  11. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
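
    The benchmark's core idea, in the single-toxicity case, is that each simulated patient carries a complete latent profile: a tolerance u drawn from U(0,1) that determines toxicity at every dose at once, so the "estimates" reflect complete information. A minimal sketch following that construction (illustrative only; the paper's multiple-toxicity and efficacy extensions are not modeled here):

        import random

        def benchmark_selection(p_true, n_patients, target=0.25, seed=0):
            """Dose picked under complete information about each patient's tolerance."""
            rng = random.Random(seed)
            tolerances = [rng.random() for _ in range(n_patients)]
            estimates = [sum(u <= p for u in tolerances) / n_patients for p in p_true]
            return min(range(len(p_true)), key=lambda d: abs(estimates[d] - target))

        p_true = [0.05, 0.12, 0.25, 0.40, 0.55]      # true toxicity probabilities per dose
        hits = sum(benchmark_selection(p_true, 30, seed=s) == 2 for s in range(1000))
        print(hits / 1000)                           # proxy for the accuracy upper bound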

  12. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance. Employed professionals will further be more open to consider bureaucratic benchmarking information provided by the administration if they are aware that their professional performance is low. To test our hypotheses, we rely on a sample of archival public professional performance data for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  13. Benchmarking Peer Production Mechanisms, Processes & Practices

    Science.gov (United States)

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  14. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  15. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  16. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

    In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm × 10 cm × 11 cm) were measured at 61° and 121° using time-of-...

  17. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  18. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested upon seven cases from Japan and Denmark. Japanese...

  19. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  20. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.
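
    Benchmarks of this kind typically score tracking accuracy from per-frame bounding-box overlap. The sketch below shows one common success metric, the fraction of frames whose intersection-over-union exceeds a threshold; it is a generic illustration, not necessarily the exact protocol of this benchmark.

        import numpy as np

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x, y, w, h)."""
            ax2, ay2 = a[0] + a[2], a[1] + a[3]
            bx2, by2 = b[0] + b[2], b[1] + b[3]
            iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
            ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
            inter = iw * ih
            union = a[2] * a[3] + b[2] * b[3] - inter
            return inter / union if union > 0.0 else 0.0

        def success_rate(pred_boxes, gt_boxes, threshold=0.5):
            """Fraction of frames where prediction and ground truth overlap enough."""
            return float(np.mean([iou(p, g) > threshold
                                  for p, g in zip(pred_boxes, gt_boxes)]))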

  1. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  2. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. Themain challenge in

  3. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  4. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  5. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

    KAUST Repository

    Heilbron, Fabian Caba

    2015-06-02

    In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.

  6. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes an in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.

  7. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  8. Piping benchmark problems for the ABB/CE System 80+ Standardized Plant

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K. [Brookhaven National Lab., Upton, NY (United States)

    1994-07-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the ABB/Combustion Engineering System 80+ Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the System 80+ standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark problem set. The first System 80+ piping benchmark is a uniform support motion response spectrum solution for one section of the feedwater piping subjected to safe shutdown seismic loads. The second System 80+ piping benchmark is a time history solution for the feedwater piping subjected to the transient loading induced by a water hammer. The third System 80+ piping benchmark is a time history solution of the pressurizer surge line subjected to the accelerations induced by a main steam line pipe break. The System 80+ reactor is an advanced PWR type.

  9. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during power plant lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper a PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross sections from the transport library v7_200n47g. The comparison of calculated results and benchmark data showed good agreement of the calculated and measured equivalent fission fluxes.

  10. Characterization of a benchmark database for myoelectric movement classification.

    Science.gov (United States)

    Atzori, Manfredo; Gijsberts, Arjan; Kuzborskij, Ilja; Elsig, Simone; Hager, Anne-Gabrielle Mittaz; Deriaz, Olivier; Castellini, Claudio; Muller, Henning; Caputo, Barbara

    2015-01-01

    In this paper, we characterize the Ninapro database and its use as a benchmark for hand prosthesis evaluation. The database is a publicly available resource that aims to support research on advanced myoelectric hand prostheses. The database is obtained by jointly recording surface electromyography signals from the forearm and kinematics of the hand and wrist while subjects perform a predefined set of actions and postures. Besides describing the acquisition protocol, overall features of the datasets and the processing procedures in detail, we present benchmark classification results using a variety of feature representations and classifiers. Our comparison shows that simple feature representations such as mean absolute value and waveform length can achieve similar performance to the computationally more demanding marginal discrete wavelet transform. With respect to classification methods, the nonlinear support vector machine was found to be the only method consistently achieving high performance regardless of the type of feature representation. Furthermore, statistical analysis of these results shows that classification accuracy is negatively correlated with the subject's Body Mass Index. The analysis and the results described in this paper aim to be a strong baseline for the Ninapro database. Thanks to the Ninapro database (and the characterization described in this paper), the scientific community has the opportunity to converge to a common position on hand movement recognition by surface electromyography, a field that can strongly affect hand prosthesis capabilities.
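
    The two simple features singled out above have one-line definitions; the sketch below extracts mean absolute value and waveform length over sliding windows, where the window and step sizes are arbitrary choices for illustration rather than the paper's settings.

        import numpy as np

        def mav(window):
            """Mean absolute value per channel of one sEMG window."""
            return np.mean(np.abs(window), axis=0)

        def waveform_length(window):
            """Sum of absolute sample-to-sample differences per channel."""
            return np.sum(np.abs(np.diff(window, axis=0)), axis=0)

        def extract_features(emg, win=200, step=50):
            """Slide a window over a (samples x channels) recording."""
            feats = []
            for start in range(0, emg.shape[0] - win + 1, step):
                w = emg[start:start + win]
                feats.append(np.concatenate([mav(w), waveform_length(w)]))
            return np.array(feats)

        demo = np.random.default_rng(0).standard_normal((1000, 10))  # fake 10-channel sEMG
        print(extract_features(demo).shape)  # (17, 20): 17 windows, 2 features x 10 channels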

  11. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants’ numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, against full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  12. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  13. The ACRV Picking Benchmark (APB): A Robotic Shelf Picking Benchmark to Foster Reproducible Research

    OpenAIRE

    Leitner, Jürgen; Tow, Adam W.; Dean, Jake E.; Suenderhauf, Niko; Durham, Joseph W.; Cooper, Matthew; Eich, Markus; Lehnert, Christopher; Mangels, Ruben; McCool, Christopher; Kujala, Peter; Nicholson, Lachlan; Van Pham, Trung; Sergeant, James; Wu, Liao

    2016-01-01

    Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic pi...

  14. A chemical EOR benchmark study of different reservoir simulators

    Science.gov (United States)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    ...chemical design for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of the field injection projects. The objective of this paper is not to compare the computational efficiency and solution algorithms; it only focuses on the process modeling comparison.

  15. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose...

  16. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possible applications of benchmarking in the telecommunications sphere. It examines the essence of benchmarking by generalising the approaches of different scholars to defining this notion. With a view to improving the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market, identifying the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Generalising the existing experience of applying benchmarking, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  17. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten...

  18. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    OpenAIRE

    Zaharchenko Lolita A.; Kolesnyk Oksana A.

    2013-01-01

    The article analyses evolution of development and possibilities of application of benchmarking in the telecommunication sphere. It studies essence of benchmarking on the basis of generalisation of approaches of different scientists to definition of this notion. In order to improve activity of telecommunication operators, the article identifies the benchmarking technology and main factors, that determine success of the operator in the modern market economy, and the mechanism of benchmarking an...

  19. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  20. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  1. A Proposed Atomic Structure of the Self-Assembly of the Non-Amyloid-β Component of Human α-Synuclein As Derived by Computational Tools.

    Science.gov (United States)

    Atsmon-Raz, Yoav; Miller, Yifat

    2015-08-06

    α-Synuclein (AS) fibrils are the major hallmarks of Parkinson's disease (PD). It is known that the central domain of the 140-residue AS protein, known as the non-amyloid-β component (NAC), plays a crucial role in aggregation. The secondary structure of AS fibrils (including the NAC domain) has been proposed on the basis of solid-state nuclear magnetic resonance studies, but the atomic structure of the self-assembly of NAC (or AS itself) is still elusive. This is the first study that presents a detailed three-dimensional structure of NAC at atomic resolution. The proposed self-assembled structure of NAC consists of three β-strands connected by two turn regions. Our study shows that calculated structural parameter values of the simulated fibril-like cross-β structure of NAC are in excellent agreement with the experimental values. Moreover, the diameter dimensions of the proposed fibril-like structure are also in agreement with experimental measurements. The proposed fibril-like structure of NAC may assist in future work aimed at understanding the formation of aggregates in PD and developing compounds to modulate aggregation.

  2. Analog computation through high-dimensional physical chaotic neuro-dynamics

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.
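
    For orientation, the objective that these QAP benchmarks optimize is compact. The sketch below states it in plain Python and scores random permutations on an invented instance, i.e., the baseline that any heuristic search (chaotic or otherwise) should beat.

        import numpy as np

        def qap_cost(perm, flow, dist):
            """Quadratic assignment objective: sum of flow[i, j] * dist[perm[i], perm[j]]."""
            return float(np.sum(flow * dist[np.ix_(perm, perm)]))

        rng = np.random.default_rng(0)
        n = 8
        flow = rng.integers(0, 10, size=(n, n))  # illustrative random instance
        dist = rng.integers(0, 10, size=(n, n))

        best = min(qap_cost(rng.permutation(n), flow, dist) for _ in range(1000))
        print("best random-permutation cost:", best)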

  3. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./ Russian Progress Report for Fiscal Year 1997, Volume 4, Part 8 - Neutron Poison Plates in Assemblies Containing Homogeneous Mixtures of Polystyrene-Moderated Plutonium and Uranium Oxides

    Energy Technology Data Exchange (ETDEWEB)

    Yavuz, M.

    1999-05-01

    In the 1970s at the Battelle Pacific Northwest Laboratory (PNL), a series of critical experiments using a remotely operated Split-Table Machine was performed with homogeneous mixtures of (Pu-U)O2-polystyrene fuels in the form of square compacts having different heights. The experiments determined the critical geometric configurations of MOX fuel assemblies with and without neutron poison plates. With respect to PuO2 content and moderation [H/(Pu+U) atomic] ratio (MR), two different homogeneous (Pu-U)O2-polystyrene mixtures were considered: Mixture 1, 14.62 wt% PuO2 with 30.6 MR, and Mixture 2, 30.3 wt% PuO2 with 2.8 MR. In all mixtures, the uranium was depleted to about 0.151 wt% U-235. Assemblies contained copper, copper-cadmium or aluminum neutron poison plates having thicknesses up to approximately 2.5 cm. This evaluation contains 22 experiments for Mixture 1 and 10 for Mixture 2 compacts. For Mixture 1, there are 10 configurations with copper plates, 6 with aluminum, and 5 with copper-cadmium. One experiment contained no poison plate. For Mixture 2 compacts, there are 3 configurations with copper, 3 with aluminum, and 3 with copper-cadmium poison plates. One experiment contained no poison plate.

  4. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  5. Dump assembly

    Science.gov (United States)

    Goldmann, Louis H.

    1986-01-01

    A dump assembly having a fixed conduit and a rotatable conduit provided with overlapping plates, respectively, at their adjacent ends. The plates are formed with openings, respectively, normally offset from each other to block flow. The other end of the rotatable conduit is provided with means for securing the open end of a filled container thereto. Rotation of the rotatable conduit raises and inverts the container to empty the contents while concurrently aligning the conduit openings to permit flow of material therethrough.

  6. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    ...with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company... benchmark ratio of 9.6 to 1 or higher. (c) If a telephone company's initial transport rates are based on...
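
    The excerpt reduces to a simple ratio test; a toy check with invented tariff rates (the regulation's actual calculation details are elided above):

        ds1_monthly_rate = 120.00    # $/month, illustrative
        ds3_monthly_rate = 1_250.00  # $/month, illustrative

        ratio = ds3_monthly_rate / ds1_monthly_rate
        verdict = "meets" if ratio >= 9.6 else "falls below"
        print(f"DS3-to-DS1 ratio {ratio:.1f}:1 {verdict} the 9.6:1 benchmark")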

  7. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  8. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  9. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually abstaining researchers and practitioners from studying an... organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking...

  10. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  1. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  3. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    5th April, 2016 – Ordinary General Assembly of the Staff Association! In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA). This year the OGA will be held on Tuesday, April 5th 2016 from 11:00 to 12:00 in BE Auditorium, Meyrin (6-2-024). During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted for approval to the members. This is the occasion to get a global view on the activities of the SA, its financial management, and an opportunity to express one’s opinion, including taking part in the votes. Other points are listed on the agenda, as proposed by the Staff Council. Who can vote? Only “ordinary” members (MPE) of the SA can vote. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them. Who can give his/her opinion? The Ordinary General Asse...

  4. Physical properties of the benchmark models program supercritical wing

    Science.gov (United States)

    Dansberry, Bryan E.; Durham, Michael H.; Bennett, Robert M.; Turnock, David L.; Silva, Walter A.; Rivera, Jose A., Jr.

    1993-01-01

    The goal of the Benchmark Models Program is to provide data useful in the development and evaluation of aeroelastic computational fluid dynamics (CFD) codes. To that end, a series of three similar wing models are being flutter tested in the Langley Transonic Dynamics Tunnel. These models are designed to simultaneously acquire model response data and unsteady surface pressure data during wing flutter conditions. The supercritical wing is the second model of this series. It is a rigid semispan model with a rectangular planform and a NASA SC(2)-0414 supercritical airfoil shape. The supercritical wing model was flutter tested on a flexible mount, called the Pitch and Plunge Apparatus, that provides a well-defined, two-degree-of-freedom dynamic system. The supercritical wing model and associated flutter test apparatus is described and experimentally determined wind-off structural dynamic characteristics of the combined rigid model and flexible mount system are included.

  5. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    Science.gov (United States)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  6. Smart Meter Data Analytics: Systems, Algorithms and Benchmarking

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Golab, Lukasz; Golab, Wojciech;

    2016-01-01

    Smart electricity meters have been replacing conventional meters worldwide, enabling automated collection of fine-grained (e.g., every 15 minutes or hourly) consumption data. A variety of smart meter analytics algorithms and applications have been proposed, mainly in the smart grid literature... We evaluate the proposed benchmark using five representative platforms: a traditional numeric computing platform (Matlab), a relational DBMS with a built-in machine learning toolkit (PostgreSQL/MADlib), a main-memory column store (“System C”), and two distributed data processing platforms (Hive and Spark/Spark Streaming). We compare the five platforms in terms of application development effort and performance on a multicore machine as well as a cluster of 16 commodity servers.
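
    As a flavor of the analytics such a benchmark exercises, the sketch below computes a daily load profile (average consumption by hour of day) from synthetic 15-minute readings with pandas; it is an illustrative stand-in, not one of the paper's benchmark tasks.

        import numpy as np
        import pandas as pd

        # Synthetic 15-minute readings for one meter over 30 days.
        idx = pd.date_range("2016-01-01", periods=30 * 96, freq="15min")
        rng = np.random.default_rng(0)
        kwh = pd.Series(0.2 + 0.1 * np.sin(2 * np.pi * idx.hour / 24)
                        + 0.05 * rng.standard_normal(len(idx)), index=idx)

        hourly = kwh.resample("60min").sum()                # hourly consumption
        profile = hourly.groupby(hourly.index.hour).mean()  # average daily load shape
        print(profile.round(3))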

  7. A benchmark for non-covalent interactions in solids.

    Science.gov (United States)

    Otero-de-la-Roza, A; Johnson, Erin R

    2012-08-01

    A benchmark for non-covalent interactions in solids (C21) based on the experimental sublimation enthalpies and geometries of 21 molecular crystals is presented. Thermal and zero-point effects are carefully accounted for and reference lattice energies and thermal pressures are provided, which allow dispersion-corrected density functionals to be assessed in a straightforward way. Other thermal corrections to the sublimation enthalpy (the 2RT term) are reexamined. We compare the recently implemented exchange-hole dipole moment (XDM) model with other approaches in the literature to find that XDM roughly doubles the accuracy of DFT-D2 and non-local functionals in computed lattice energies (4.8 kJ/mol mean absolute error) while, at the same time, predicting cell geometries within less than 2% of the experimental result on average. The XDM model of dispersion interactions is confirmed as a very promising approach in solid-state applications.
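
    Assessing a functional against such a benchmark reduces to comparing computed lattice energies with the reference values. A toy sketch with invented numbers (the real set has 21 crystals); the 4.8 kJ/mol figure quoted above is what XDM achieves on the full set.

        import numpy as np

        reference = np.array([-89.5, -102.3, -71.8, -95.0])  # kJ/mol, invented
        computed  = np.array([-85.1, -108.0, -75.2, -91.9])  # kJ/mol, invented

        mae = np.mean(np.abs(computed - reference))
        print(f"mean absolute error: {mae:.1f} kJ/mol")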

  8. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present the results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
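
    A sketch of how the addressability measure can be extracted, assuming the standard zeroth-order randomized benchmarking model F(m) = A*p**m + B; the decay data below are synthetic, and the conversion from decay constant to average gate fidelity is the usual single-qubit (d = 2) formula.

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            """Zeroth-order RB model: sequence fidelity vs. sequence length m."""
            return A * p**m + B

        def average_gate_fidelity(lengths, survival, d=2):
            (A, B, p), _ = curve_fit(rb_decay, lengths, survival,
                                     p0=(0.5, 0.5, 0.95), maxfev=10_000)
            return 1 - (1 - p) * (d - 1) / d  # 1 - average error per gate

        m = np.array([2, 4, 8, 16, 32, 64, 128])
        alone  = 0.5 * 0.995**m + 0.5   # qubit benchmarked individually (synthetic)
        simult = 0.5 * 0.988**m + 0.5   # same qubit benchmarked simultaneously (synthetic)

        f_alone = average_gate_fidelity(m, alone)
        f_sim = average_gate_fidelity(m, simult)
        print(f"addressability metric: {f_alone - f_sim:.4f}")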

  9. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are not suited to nonlinear benchmark buildings, as their vibration equations belong to a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach that fits nonlinear buildings is proposed. The proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The approach is practical because the built model can be reduced by the balance reduction method based on the empirical Grammian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.
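
    A sketch of balanced-order reduction for a stable linear model of the kind obtained after identification and linearization. Here exact Lyapunov-equation Gramians stand in for the empirical Grammian matrices used in the paper, and the toy four-state system is invented.

        import numpy as np
        from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

        def balanced_truncation(A, B, C, k):
            """Reduce the stable LTI system (A, B, C) to order k by balancing."""
            Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
            L = cholesky(Wc, lower=True)                   # Wc = L L^T
            U, s, _ = svd(L.T @ Wo @ L)                    # s = Hankel values squared
            T = L @ U / s**0.25                            # balancing transformation
            Ti = np.linalg.inv(T)
            Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
            return Ab[:k, :k], Bb[:k], Cb[:, :k]

        A = np.diag([-1.0, -2.0, -5.0, -10.0])             # toy stable system
        B = np.array([[1.0], [1.0], [0.5], [0.1]])
        C = np.array([[1.0, 0.5, 0.2, 0.1]])
        Ar, Br, Cr = balanced_truncation(A, B, C, 2)
        print(np.round(Ar, 3))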

  10. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate the new tool.

  11. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  12. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
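
    For orientation, the triad kernel at the heart of STREAM, and the convention by which its bandwidth figure is derived, look as follows in a single-threaded NumPy sketch. Real STREAM is multithreaded C with pinned threads, which is what exposes the NUMA effects; the array size here is an arbitrary choice.

        import time
        import numpy as np

        n = 20_000_000                 # ~480 MB across three double arrays
        b, c = np.random.rand(n), np.random.rand(n)
        a = np.empty_like(b)
        q = 3.0

        t0 = time.perf_counter()
        np.multiply(c, q, out=a)       # a = q * c   (no temporary array)
        np.add(a, b, out=a)            # a = b + q * c  -- the STREAM triad
        dt = time.perf_counter() - t0

        bytes_moved = 3 * n * 8        # STREAM convention: read b, read c, write a
        print(f"triad: {bytes_moved / dt / 1e9:.1f} GB/s (nominal traffic)")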

  13. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  14. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  15. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  16. Simulation benchmark based on THAI-experiment on dissolution of a steam stratification by natural convection

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, M., E-mail: freitag@becker-technologies.com; Schmidt, E.; Gupta, S.; Poss, G.

    2016-04-01

    Highlights: • We studied the generation and dissolution of steam stratification in natural convection. • We performed a computer code benchmark including blind and open phases. • The dissolution of the stratification was predicted only qualitatively by LP and CFD models during the blind simulation phase. - Abstract: Locally enriched hydrogen, as in stratifications, may contribute to early containment failure in the course of severe nuclear reactor accidents. During accident sequences steam might also accumulate in stratifications, which can directly influence the distribution and ignitability of hydrogen mixtures in containments. An international code benchmark including Computational Fluid Dynamics (CFD) and Lumped Parameter (LP) codes was conducted in the frame of the German THAI program. The basis for the benchmark was experiment TH24.3, which investigates the dissolution of a steam layer subject to natural convection in the steam-air atmosphere of the THAI vessel. The test provides validation data for the development of CFD and LP models to simulate the atmosphere in the containment of a nuclear reactor installation. In test TH24.3 saturated steam is injected into the upper third of the vessel, forming a stratification layer which is then mixed by a superposed thermal convection. In this paper the simulation benchmark is evaluated, in addition to a general discussion of the experimental transient of test TH24.3. Concerning the steam stratification build-up and dilution, the numerical programs showed very different results during the blind evaluation phase, but improved noticeably during the open simulation phase.

  17. Healthy Foodservice Benchmarking and Leading Practices

    Science.gov (United States)

    2012-07-01

    ...experienced a slight reduction in weight status (BMI z-score), the prevalence of high waist circumference values, and fasting insulin as compared to the control group... The risk of contamination with pesticide residues was 30% lower for organically grown...

  18. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their section. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  19. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  20. Self-interacting Dark Matter Benchmarks

    OpenAIRE

    Kaplinghat, M.; Tulin, S.; Yu, H-B

    2017-01-01

    Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. We present benchmark models that illustrate characteristic features of dark matter that is self-interacting through a new light mediator. These models have self-interactions large enough to change dark matter densities in the centers of galaxies in accord with observations, while remaining compatible with large-scale structur...

  1. BENCHMARK AS INSTRUMENT OF CRISIS MANAGEMENT

    OpenAIRE

    Haievskyi, Vladyslav

    2017-01-01

    The article determines the essence of benchmarking through a synthesis of the concepts “benchmark” and “crisis management”. Benchmarking is considered as an instrument of crisis management: a powerful tool with which the entity carries out a comparative analysis of processes and activities, allowing it to reduce production costs under resource constraints, to raise profit, and to achieve success in optimizing the strategic activities of the entity.

  2. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the competitiveness of Repovesi national park. The aim of this thesis is both to find, via benchmarking, good marketing strategies used by the Zhangjiajie national park and to provide some suggestions to Repovesi national park. The method used in t...

  3. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.
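    A rough illustration of the scoring scheme as described, under the assumption that each component is a simple 0/1 check: component switches scale into a feature's relative presence, and a block of features yields one driver score. The function names and component lists below are invented for illustration and are not the actual WQA implementation.

```python
# Illustrative sketch (not the actual WQA implementation): binary
# "on-off" component measures scale into a feature's relative presence,
# and feature blocks aggregate into a sector-driver score.
def feature_presence(components):
    """Fraction of binary component checks that are 'on' for one feature."""
    return sum(components) / len(components)

def driver_score(feature_block):
    """Average presence across the features that form one driver sector."""
    scores = [feature_presence(c) for c in feature_block]
    return sum(scores) / len(scores)

# Hypothetical marketing-services driver made of three features,
# each measured by simple 0/1 component checks on a website.
marketing_block = [
    [1, 1, 0, 1],   # e.g. contact/feedback components present
    [1, 0, 0, 0],   # e.g. personalization components
    [1, 1, 1, 1],   # e.g. product information components
]
print(f"driver score: {driver_score(marketing_block):.2f}")
```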

  4. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference is less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.

  5. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes even in a very dynamic farm, with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
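    As a rough illustration of such a classification scheme (the CERN implementation and its benchmark suite are not given in this abstract, so everything below is assumed): a fixed workload yields a score, and the score is binned into a discrete performance class that is reused for scheduling and accounting.

```python
# Minimal sketch with assumed class boundaries: run a fixed CPU workload
# on a (virtual or physical) worker node, convert the runtime into a
# score, and bin the score into a performance class, so that new nodes
# of a known class need not be benchmarked again.
import time

def cpu_benchmark(iterations=2_000_000):
    """Toy fixed workload; real setups use a standard benchmark suite."""
    start = time.perf_counter()
    total = 0.0
    for i in range(1, iterations):
        total += 1.0 / i
    elapsed = time.perf_counter() - start
    return iterations / elapsed / 1e6   # arbitrary score units

def classify(score, boundaries=(5.0, 10.0, 20.0)):
    """Map a score to a discrete class used for scheduling/accounting."""
    for cls, upper in enumerate(boundaries):
        if score < upper:
            return cls
    return len(boundaries)

score = cpu_benchmark()
print(f"score={score:.2f}, class={classify(score)}")
```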

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to produce results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  7. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem of the Very High Temperature Reactor Critical assembly (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest versions of nuclear data libraries based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values. This makes it possible to improve the accuracy of the neutron transport calculations, which may help in designing high performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depends on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which allows a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The calculated keff values show good agreement with each other and with the experimental data within the 1σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose appropriate corrections to the experimental constants, which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.

  8. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    Full Text Available This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion code NODAL3 for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and the maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For productive PWR transient analysis work, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  9. Aerodynamic improvement of the assembly through which gas conduits are taken into a smoke stack by simulating gas flow on a computer

    Science.gov (United States)

    Prokhorov, V. B.; Fomenko, M. V.; Grigor'ev, I. V.

    2012-06-01

    Results are presented from computer simulations of the gas flow for conduits routed on one and on two sides into the gas-removal shaft of a smoke stack with a constant cross section, carried out using the SolidWorks and FlowVision application software packages.

  10. Proposal and analysis of the benchmark problem suite for reactor physics study of LWR next generation fuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-10-01

    In order to investigate the calculation accuracy of the nuclear characteristics of LWR next generation fuels, the Research Committee on Reactor Physics organized by JAERI has established the Working Party on Reactor Physics for LWR Next Generation Fuels. Next generation fuels are those aiming at further extended burn-up, such as 70 GWd/t, beyond the current design. The Working Party has proposed six benchmark problems, which consist of pin-cell, PWR fuel assembly and BWR fuel assembly geometries loaded with uranium and MOX fuels, respectively. The specifications of the benchmark problems neglect some of the current limitations, such as the 5 wt% {sup 235}U enrichment limit, to achieve the above-mentioned target. Eleven organizations in the Working Party have carried out analyses of the benchmark problems. As a result, the status of accuracy with current data and methods, as well as some problems to be solved in the future, were clarified. In this report, details of the benchmark problems, the results from each organization, and their comparisons are presented. (author)

  11. TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)

    2008-07-01

    We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)

  12. Assembling consumption

    DEFF Research Database (Denmark)

    Assembling Consumption marks a definitive step in the institutionalisation of qualitative business research. By gathering leading scholars and educators who study markets, marketing and consumption through the lenses of philosophy, sociology and anthropology, this book clarifies and applies the investigative tools offered by assemblage theory, actor-network theory and non-representational theory. Clear theoretical explanation and methodological innovation, alongside empirical applications of these emerging frameworks, will offer readers new and refreshing perspectives on consumer culture and market societies. This is essential reading for both seasoned scholars and advanced students of markets, economies and social forms of consumption.

  13. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Full Text Available Benchmarking microprocessors provides a way for consumers to evaluate the performance of processors. This is done by using either synthetic or real-world applications. A number of benchmarks exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. This paper investigates the various benchmarks available for evaluating microprocessors for embedded automotive applications. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench, and MiBench. A comparison of existing benchmarks is given based on relevant characteristics of automotive applications, leading to appropriate recommendations for benchmarking processors for automotive applications.

  14. Reform on teaching the course, Assembly Language in the college computer specialty

    Institute of Scientific and Technical Information of China (English)

    姚富光

    2012-01-01

    This paper analyses the teaching status of, and the causes of the problems existing in, the course Assembly Language in the college computer specialty, points out the necessity of teaching reform, discusses the reform of theoretical and practical teaching, and presents some concrete measures.

  15. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, B.D.; Kornreich, D.E. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Nuclear Engineering

    1997-07-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  16. Numerical Investigations of the Benchmark Supercritical Wing in Transonic Flow

    Science.gov (United States)

    Chwalowski, Pawel; Heeg, Jennifer; Biedron, Robert T.

    2017-01-01

    This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing (BSCW) configuration. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results show the effects of the temporal and spatial resolution, the coupling scheme between the flow and the structural solvers, and the initial excitation conditions on the numerical flutter onset. Depending on the free stream condition and the angle of attack, the above parameters do affect the flutter onset. Two conditions are analyzed: Mach 0.74 with angle of attack 0° and Mach 0.85 with angle of attack 5°. The results are presented in the form of the damping values computed from the wing pitch angle response as a function of the dynamic pressure, or in the form of dynamic pressure as a function of the Mach number.

  17. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

    In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer system of China, and the largest GPU-accelerated heterogeneous system attempted at the time. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task, thread and data parallelism of Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using a two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability, and 3.3 times faster than the result obtained by using the vendor's library. On the full configuration of TianHe-1 our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
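    The software pipelining idea can be sketched as double buffering: while block k is being processed, block k+1 is transferred, so communication hides behind computation. The sketch below is schematic Python with stand-in transfer/compute stages; the actual TianHe-1 implementation uses MPI, CUDA and vendor BLAS, none of which is reproduced here.

```python
# Schematic of software pipelining: while the "GPU" computes on block k,
# a copier thread transfers block k+1, hiding the communication cost.
from concurrent.futures import ThreadPoolExecutor
import time

def transfer(block):          # stand-in for a host-to-device copy
    time.sleep(0.01)
    return block

def compute(block):           # stand-in for a GPU kernel (e.g. DGEMM)
    time.sleep(0.02)
    return sum(block)

blocks = [[i] * 1000 for i in range(8)]
results = []
with ThreadPoolExecutor(max_workers=1) as copier:
    pending = copier.submit(transfer, blocks[0])
    for k in range(len(blocks)):
        data = pending.result()                      # wait for block k
        if k + 1 < len(blocks):                      # prefetch block k+1
            pending = copier.submit(transfer, blocks[k + 1])
        results.append(compute(data))                # overlaps next copy
print(results)
```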

  18. Benchmark Solution For The Category 3, Problem 2: Cascade - Gust Interaction

    Science.gov (United States)

    Envia, Edmane

    2004-01-01

    The benchmark solution for the cascade-gust interaction problem is computed using a linearized Euler code called LINFLUX. The inherently three-dimensional code is run in the thin-annulus limit to compute the two-dimensional cascade response. The calculations are carried out in the frequency domain, and the unsteady response at each of the gust's three frequency components is computed. The results are presented on a modal basis for pressure perturbations (i.e., acoustic modes) as well as velocity perturbations (i.e., convected gust modes) at each frequency.

  19. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only-recently-published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
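    For reference, the widely used “out-scatter” approximation that the rigorous method is benchmarked against takes the standard textbook form below; these relations are quoted for context and are not the paper's new equations.

```latex
% Out-scatter transport correction and the resulting diffusion
% coefficient (standard textbook relations):
\begin{align}
  \Sigma_{tr} &= \Sigma_t - \bar{\mu}_0\,\Sigma_s,
    \qquad \bar{\mu}_0 = \text{mean scattering cosine}, \\
  D &= \frac{1}{3\,\Sigma_{tr}}.
\end{align}
```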

  20. General Assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11:00 a.m., Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (Article IV.2.1). Draft agenda: 1. Adoption of the agenda. 2. Approval of the minutes of the Ordinary General Assembly of 22 May 2014. 3. Presentation and approval of the 2014 activity report. 4. Presentation and approval of the 2014 financial report. 5. Presentation and approval of the auditors' report for 2014. 6. Programme for 2015. 7. Presentation and approval of the draft budget and the membership fee for 2015. 8. No amendments to the Statutes of the Staff Association proposed. 9. Election of the members of the Electoral Commission...

  1. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    Tuesday 5 April at 11:00 a.m., BE Auditorium Meyrin (6-2-024). In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (Article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 5 May 2015. Presentation and approval of the 2015 activity report. Presentation and approval of the 2015 financial report. Presentation and approval of the auditors' report for 2015. Work programme for 2016. Presentation and approval of the draft budget for 2016. Approval of the membership fee for 2017. Proposed amendments to the Statutes of the Staff Association. Election of the members of the Electoral Commission...

  2. General assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11:00 a.m., Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (Article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 22 May 2014. Presentation and approval of the 2014 activity report. Presentation and approval of the 2014 financial report. Presentation and approval of the auditors' report for 2014. Programme for 2015. Presentation and approval of the draft budget and the membership fee for 2015. No amendments to the Statutes of the Staff Association proposed. Election of the members of the Electoral Commission...

  3. Balancing parallel assembly lines with disabled workers

    OpenAIRE

    Araújo, Felipe F. B.; Costa, Alysson M.; Miralles, Cristóbal

    2013-01-01

    We study an assembly line balancing problem that occurs in sheltered worker centers for the disabled, where workers with very different characteristics are present. We are interested in the situation in which parallel assembly lines are allowed and name the resulting problem as parallel assembly line worker assignment and balancing problem. We present a linear mixed-integer formulation and a four-stage heuristic algorithm. Computational results with a large set of instances recently proposed ...
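    A toy version of such a formulation can be written as a small mixed-integer program. The model below is a simplified sketch (not the authors' formulation) using the PuLP library: tasks are assigned to stations staffed by workers with different task times, precedence is respected, and the cycle time is minimized. All tasks, stations, precedences and times are invented for illustration.

```python
# Toy worker-dependent line balancing MILP (simplified sketch).
import pulp

tasks = ["a", "b", "c", "d"]
stations = [0, 1]                      # station s is staffed by worker s
prec = [("a", "b"), ("b", "d")]        # a before b, b before d
t = {  # task time of each task when done by the worker at station s (assumed)
    ("a", 0): 3, ("a", 1): 5, ("b", 0): 4, ("b", 1): 2,
    ("c", 0): 2, ("c", 1): 2, ("d", 0): 6, ("d", 1): 3,
}

prob = pulp.LpProblem("line_balancing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, s) for i in tasks for s in stations],
                          cat="Binary")
c = pulp.LpVariable("cycle_time", lowBound=0)
prob += c                              # objective: minimize cycle time
for i in tasks:                        # each task on exactly one station
    prob += pulp.lpSum(x[i, s] for s in stations) == 1
for s in stations:                     # station load bounds the cycle time
    prob += pulp.lpSum(t[i, s] * x[i, s] for i in tasks) <= c
for (i, j) in prec:                    # j's station is not before i's
    prob += pulp.lpSum(s * x[j, s] for s in stations) >= \
            pulp.lpSum(s * x[i, s] for s in stations)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("cycle time:", pulp.value(c))
for i in tasks:
    print(i, "->", [s for s in stations if x[i, s].value() > 0.5])
```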

  4. Integral measurements on Caliban and Prospero assemblies for nuclear data validation

    Energy Technology Data Exchange (ETDEWEB)

    Casoli, P.; Authier, N.; Richard, B. [CEA Valduc, 21 - Is-sur-Tille (France); Ducauze-Philippe, M.; Cartier, J. [CEA Bruyeres-le-Chatel, 91 (France)

    2011-07-15

    How can the quality of nuclear data libraries be checked? Performing reference experiments, also called benchmarks, allows the testing of evaluated data. During these experiments, integral values such as reaction rates or effective neutron multiplication coefficients are measured. In this paper, the principles of benchmark construction are explained and illustrated with several works performed on the CALIBAN and PROSPERO critical assemblies operated by the Valduc center: benchmarks for dosimetry, activation reaction studies, and neutron noise measurements. (authors)

  5. Benchmarking and Optimizing Techniques for Inverting Images of DIII-D Soft X-Ray Emissions

    Science.gov (United States)

    Chandler, E.; Unterberg, E. A.; Shafer, M. W.; Wingen, A.

    2012-10-01

    A tangential 2-D soft x-ray (SXR) imaging system is installed on DIII-D to directly measure the 3-D magnetic topology at the plasma edge. This diagnostic allows the study of the plasma SXR emissivity at time resolutions ≥1.0 ms and spatial resolutions of ~1 cm. Extracting 3-D structure from the 2-D image requires the inversion of large ill-posed matrices, a ubiquitous problem in mathematics. The goal of this work is to reduce the memory usage and computational time of the inversion to a point where image inversions can be processed between shots. We implement the Phillips-Tikhonov and Maximum Entropy regularization techniques on a parallel GPU processor. To optimize the memory demands of computing these matrices, the effects of reducing the inversion grid size and of binning images are analyzed and benchmarked. Further benchmarking includes a characterization of the final image quality (with respect to numerical and instrumentation noise).
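    A minimal dense sketch of Tikhonov-regularized inversion on synthetic data may clarify the step, assuming a geometry matrix A that maps a discretized 3-D emissivity x to 2-D image pixels b; the actual DIII-D operators, grids and GPU implementation are not reproduced here.

```python
# Tikhonov-regularized inversion sketch:
# minimize ||A x - b||^2 + lam * ||x||^2 via the normal equations.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 100))                 # stand-in geometry matrix
x_true = np.maximum(rng.normal(size=100), 0.0)  # synthetic emissivity
b = A @ x_true + 0.05 * rng.normal(size=200)    # noisy synthetic image

lam = 1.0                                       # regularization strength
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(100), A.T @ b)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```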

  6. Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines

    Science.gov (United States)

    Katzgraber, Helmut G.; Hamze, Firas; Andrist, Ruben S.

    2014-04-01

    Recently, a programmable quantum annealing machine has been built that minimizes the cost function of hard optimization problems by, in principle, adiabatically quenching quantum fluctuations. Tests performed by different research teams have shown that, indeed, the machine seems to exploit quantum effects. However, experiments on a class of random-bond instances have not yet demonstrated an advantage over classical optimization algorithms on traditional computer hardware. Here, we present evidence as to why this might be the case. These engineered quantum annealing machines effectively operate coupled to a decohering thermal bath. Therefore, we study the finite-temperature critical behavior of the standard benchmark problem used to assess the computational capabilities of these complex machines. We simulate both random-bond Ising models and spin glasses with bimodal and Gaussian disorder on the D-Wave Chimera topology. Our results show that while the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph is not polynomial, the finite-temperature phase space is likely rather simple because spin glasses on Chimera have only a zero-temperature transition. This means that benchmarking optimization methods using spin glasses on the Chimera graph might not be the best benchmark problems to test quantum speedup. We propose alternative benchmarks by embedding potentially harder problems on the Chimera topology. Finally, we also study the (reentrant) disorder-temperature phase diagram of the random-bond Ising model on the Chimera graph and show that a finite-temperature ferromagnetic phase is stable up to 19.85(15)% antiferromagnetic bonds. Beyond this threshold, the system only displays a zero-temperature spin-glass phase. Our results therefore show that a careful design of the hardware architecture and benchmark problems is key when building quantum annealing machines.

  7. CT Performance Evaluation Using Multi Material Assemblies

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo

    2015-01-01

    This paper concerns an investigation of the accuracy of Computed Tomography measurements using multi-material assemblies. In this study, assemblies involving similar densities for elementary parts were considered. The investigation includes dimensional and geometrical measurements of two 10 mm hi...

  8. Fourth Doctoral Student Assembly

    CERN Multimedia

    Ingrid Haug

    2016-01-01

    On 10 May, over 130 PhD students and their supervisors, from both CERN and partner universities, gathered for the 4th Doctoral Student Assembly in the Council Chamber.   The assembly was followed by a poster session, at which eighteen doctoral students presented the outcome of their scientific work. The CERN Doctoral Student Programme currently hosts just over 200 students in applied physics, engineering, computing and science communication/education. The programme has been in place since 1985. It enables students to do their research at CERN for a maximum of three years and to work on a PhD thesis, which they defend at their University. The programme is steered by the TSC committee, which holds two selection committees per year, in June and December. The Doctoral Student Assembly was opened by the Director-General, Fabiola Gianotti, who stressed the importance of the programme in the scientific environment at CERN, emphasising that there is no more rewarding activity than lear...

  9. VERIFICATION OF THE MVP-II AND SRAC2006 CODE PACKAGES FOR THE VERA BENCHMARK REACTOR CORE CASE.

    Directory of Open Access Journals (Sweden)

    Jati Susilo

    2015-03-01

    Full Text Available In this study, verification calculations of the VERA benchmark were performed for the Zero Power Physics Test (ZPPT) case of the Watts Bar 1 reactor core. The reactor is a 1000 MWe class PWR designed by Westinghouse, composed of 193 17×17 fuel assemblies with three UO2 enrichments: 2.1 wt%, 2.619 wt% and 3.1 wt%. The k-eff values and the power factor distribution were calculated for the first operating cycle of the core at beginning of cycle (BOC) and hot zero power (HZP) conditions. Control rod positions were distinguished as uncontrolled (all control rods withdrawn from the core) and controlled (Bank D control rods inserted in the core). The computer code packages used in the calculations were MVP-II and the CITATION module of SRAC2006, with the ENDF/B-VII.0 cross-section library. The results show that the differences in core k-eff for the controlled and uncontrolled conditions between the reference and MVP-II (-0.07% and -0.014%) and SRAC2006 (0.92% and 0.99%) are very small, i.e. below 1%. The differences in the maximum core power factor for the controlled and uncontrolled conditions relative to the reference are 0.38% and 1.53% for MVP-II, and 1.13% and -2.45% for SRAC2006. Both code packages can thus be said to give results consistent with the reference values. For the determination of core criticality, the MVP-II results are more conservative than those of SRAC2006. Keywords: MVP-II, SRAC2006, PWR, VERA

  10. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
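    A Monod-type rate law of the kind the benchmark describes can be sketched as follows; the dual-limitation form is standard, but the parameter names and values below are illustrative and are not those of the benchmark specification.

```python
# Sketch of a dual-Monod rate law: the microbially mediated rate depends
# on both the electron donor (acetate) and the electron acceptor,
# scaled by biomass. All parameter values are assumed for illustration.
def dual_monod_rate(k_max, biomass, donor, K_donor, acceptor, K_acceptor):
    """Monod-type rate law with donor and acceptor limitation terms."""
    return (k_max * biomass
            * donor / (K_donor + donor)
            * acceptor / (K_acceptor + acceptor))

# Example: acetate-limited U(VI) bioreduction
rate = dual_monod_rate(k_max=1e-4,      # mol / (g cells * s), assumed
                       biomass=0.1,     # g cells / L, assumed
                       donor=1e-3,      # acetate, mol/L
                       K_donor=1e-4,    # half-saturation, mol/L
                       acceptor=1e-6,   # U(VI), mol/L
                       K_acceptor=1e-7)
print(f"U(VI) reduction rate: {rate:.3e} mol/(L*s)")
```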

  11. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  12. Influence of the ab-initio nd cross sections in the critical heavy-water benchmarks

    CERN Document Server

    Morillon, B; Carbonell, J

    2013-01-01

    The n-d elastic and breakup cross sections are computed by solving the three-body Faddeev equations for realistic and semi-realistic Nucleon-Nucleon potentials. These cross sections are inserted in the Monte Carlo simulation of the nuclear processes considered in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP). The results obtained using these ab initio n-d cross sections are compared with those provided by the most renowned international libraries.

  13. Real time test benchmark design for photovoltaic grid connected control systems

    OpenAIRE

    Ruz Vila, Francisco de Asís; Torrelo Ponce, José Manuel; Nieto Morote, Ana María; Cánovas Rodríguez, Francisco Javier; Rey Boué, Alexis Bonifacio

    2011-01-01

    This paper presents a dual digital signal processor (DSP) hardware architecture for a grid-connected photovoltaic interface test benchmark, based on a cascade DC/DC converter and DC/AC inverter, with coordinated control algorithms. The control hardware has been designed to test distributed generation (DG) interfaces to be integrated in a hierarchical structure of computational agents, to apply distributed control techniques to the power system management. The proposed dual DSP architecture en...

  14. The Implications of Diverse Applications and Scalable Data Sets in Benchmarking Big Data Systems

    OpenAIRE

    Jia, Zhen; Zhou, Runlin; Zhu, Chunge; Wang, Lei; Gao, Wanling; Shi, Yingjie; Zhan, Jianfeng; Zhang, Lixin

    2013-01-01

    Now we live in an era of big data, and big data applications are becoming more and more pervasive. How to benchmark data center computer systems running big data applications (in short big data systems) is a hot topic. In this paper, we focus on measuring the performance impacts of diverse applications and scalable volumes of data sets on big data systems. For four typical data analysis applications---an important class of big data applications, we find two major results through experiments: ...

  15. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  16. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...

  17. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  18. Benchmarking East Tennessee's economic capacity

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-20

    This presentation is comprised of viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  19. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test for their presence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code, and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  20. Robust randomized benchmarking of quantum processes

    CERN Document Server

    Magesan, Easwar; Emerson, Joseph

    2010-01-01

    We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay as a function of a perturbative expansion of the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.
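    For context, the zeroth-order model of the fidelity decay in this kind of protocol is the standard exponential form, from which the average error rate per gate is extracted; d is the dimension of the system's Hilbert space, and A and B absorb state preparation and measurement errors.

```latex
% Zeroth-order randomized benchmarking model: the averaged sequence
% fidelity over sequences of m random gates decays exponentially in m,
% and the decay parameter p yields the average error rate r.
\begin{align}
  F(m) &= A\,p^{m} + B, \\
  r &= \frac{(d-1)(1-p)}{d}.
\end{align}
```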

  1. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course...... experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  2. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  3. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSOs) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRAs), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...

  4. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    Full Text Available In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced to allow simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  5. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Full Text Available At present, steelworks are at a stage of permanent change, marked by ever stronger competitive pressure. Managers must therefore solve questions of how to decrease production costs, how to overcome the competition and how to survive in the world market. Still more attention should be paid to modern managerial methods of market research and comparison with the competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare chosen steelworks and to indicate new directions for their development, with the possibility of increasing the productivity of steel production.

  6. Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges

    Science.gov (United States)

    Khorrami, Mehdi R.

    2010-01-01

    The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessarily comprehensive aeroacoustic data base are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed - the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.

  7. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreement could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
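    The failure-index step can be illustrated with a small sketch, here assuming the commonly used Benzeggagh-Kenane (B-K) mixed-mode criterion as a stand-in; the study itself uses the measured mixed-mode failure criterion of its graphite/epoxy material, which may differ. GI and GII are the mode I and mode II energy release rates computed by VCCT along the front.

```python
# Hedged sketch of a mixed-mode failure index using the B-K criterion
# (a common choice, not necessarily the criterion used in the paper).
def bk_toughness(G_I, G_II, G_Ic, G_IIc, eta):
    """Mixed-mode critical energy release rate per B-K."""
    G_T = G_I + G_II
    mode_mixity = G_II / G_T if G_T > 0 else 0.0
    return G_Ic + (G_IIc - G_Ic) * mode_mixity ** eta

def failure_index(G_I, G_II, G_Ic, G_IIc, eta):
    """Index >= 1 indicates predicted delamination onset."""
    return (G_I + G_II) / bk_toughness(G_I, G_II, G_Ic, G_IIc, eta)

# Illustrative values (J/m^2) for a carbon/epoxy-like material
print(failure_index(G_I=150.0, G_II=60.0,
                    G_Ic=200.0, G_IIc=800.0, eta=2.0))
```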

  8. Installation and Benchmark Tests of CINDER'90 Activation Code

    Energy Technology Data Exchange (ETDEWEB)

    Kum, Oyeon; Heo, Seung Uk; Oh, Sunju [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)

    2014-10-15

    For the reactor community, CINDER'90 is now integrated directly into MCNPX so that the transmutation processes can be considered in an integrated manner with the particle transport by using the 'BURN' option card. However, the much smaller accelerator community cannot take advantage of these improvements because the BURN option card of MCNPX is only accessible in eigenvalue calculations of radioactive systems. In this study, we introduce preliminary results of activation calculations with the CINDER'90 code using previously approved benchmark problems. The transmutation code CINDER'90, which is widely used in US and European national laboratories, was successfully installed and benchmarked. The basic theory of atomic transmutation is introduced and three typical benchmark test problems are solved. The code works with MCNPX and is easy to use with the Perl script. The results are well summarized in tabular format by using a post-processing code, TABCODE. Cross-benchmark tests with FLUKA are underway and the results will be presented in the near future. On the other hand, creating a new integrated activation code (MCNPX + CINDER) is now being seriously considered for more successful activation analysis.

  9. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  10. Synthesis, characterization, self-assembly, gelation, morphology and computational studies of alkynylgold(III) complexes of 2,6-bis(benzimidazol-2'-yl)pyridine derivatives.

    Science.gov (United States)

    Yim, King-Chin; Lam, Elizabeth Suk-Hang; Wong, Keith Man-Chung; Au, Vonika Ka-Man; Ko, Chi-Chiu; Lam, Wai Han; Yam, Vivian Wing-Wah

    2014-08-04

    A novel class of alkynylgold(III) complexes of the dianionic ligands derived from 2,6-bis(benzimidazol-2'-yl)pyridine (H2bzimpy) derivatives has been synthesized and characterized. The structure of one of the complexes has also been determined by X-ray crystallography. Electronic absorption studies showed low-energy absorption bands at 378-466 nm, which are tentatively assigned as metal-perturbed π-π* intraligand transitions of the bzimpy(2-) ligands. A computational study has been performed to provide further insights into the nature of the electronic transitions for this class of complexes. One of the complexes has been found to show gelation properties, driven by π-π and hydrophobic-hydrophobic interactions. This complex exhibited concentration- and temperature-dependent (1)H NMR spectra. The morphology of the gel has been characterized by transmission electron microscopy (TEM) and scanning electron microscopy (SEM).

  11. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  12. Baseline and benchmark model development for hotels

    Science.gov (United States)

    Hooks, Edward T., Jr.

    The hotel industry currently faces rising energy costs and requires tools to maximize energy efficiency. To achieve this goal, a clear definition of the current methods used to measure and monitor energy consumption is given. The main purpose is to uncover the limitations of the most commonly practiced analysis strategies and to present methods that can potentially overcome those limitations. The techniques presented can be used for measurement and verification of energy efficiency plans and retrofits. Modern energy modeling tools are also introduced to demonstrate how they can be utilized for benchmark and baseline models, providing the ability to obtain energy-saving recommendations and to explore savings potential through parametric analysis. These same energy models can be used in design decisions for new construction. An energy model is created of a resort-style hotel that spans over one million square feet and has over one thousand rooms, and a simulation and detailed analysis is performed on a hotel room. The planning process for creating the model and acquiring data from the hotel room to calibrate and verify the simulation will be explained, as will the ways in which this type of modeling can benefit future baseline and benchmarking strategies for the hotel industry. Ultimately, the conclusion will address some common obstacles the hotel industry faces in reaching its full potential for energy efficiency and how these techniques can best serve it.
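
    As a hedged illustration of what such a baseline model can be at its simplest: measurement and verification often regresses consumption on weather, then scores a retrofit by the gap between the baseline's prediction and metered use. The numbers below are synthetic, not from the hotel study:

        import numpy as np

        # Synthetic monthly data for a cooling-dominated hotel: energy use
        # versus average outdoor temperature, fit before a retrofit.
        temps = np.array([12.0, 15.0, 19.0, 24.0, 28.0, 31.0])      # deg C
        kwh = np.array([410e3, 430e3, 470e3, 520e3, 570e3, 600e3])  # kWh/month

        slope, intercept = np.polyfit(temps, kwh, 1)

        # Avoided energy = baseline prediction minus metered post-retrofit use.
        post_temp, post_kwh = 26.0, 505e3
        expected = slope * post_temp + intercept
        print(f"estimated monthly savings: {expected - post_kwh:,.0f} kWh")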

  13. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, which together account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and, more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both the real and online worlds.
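
    The abstract does not spell the law out; in the authors' related work on conflict timing, the gap between successive attacks follows an approximate 'progress curve', tau_n = tau_1 * n^(-b). Treating that form as the assumed benchmark, a minimal fit on synthetic data:

        import numpy as np

        # Hypothetical inter-attack intervals obeying tau_n = tau_1 * n**(-b),
        # i.e., the gap before the nth event shrinks as a power law (escalation).
        rng = np.random.default_rng(0)
        tau1, b = 30.0, 0.7                   # assumed day-scale gap and exponent
        n = np.arange(1, 51)
        intervals = tau1 * n**(-b) * rng.lognormal(0.0, 0.2, n.size)

        # Benchmarking a series then reduces to a log-log least-squares fit.
        slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
        print(f"estimated b = {-slope:.2f}, tau_1 = {np.exp(intercept):.1f} days")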

  14. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance - such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses - for humans working inside different EVA suits performing functional tasks under appropriate simulated reduced-gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as in shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  15. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing genomic operations such as identifying overlapping/non-overlapping regions or nearest gene annotations is a common research need. The data can be saved in a database system for easy management; however, no database currently provides a comprehensive built-in algorithm for identifying overlapping regions. I have therefore developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data-upload performance in PostgreSQL was also better, although the general searching capability of the two databases was almost equivalent. In addition, applying the algorithm pair-wise to overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, showed that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
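
    The abstract names RegMap but not its SQL, so the sketch below shows only the standard interval-overlap predicate such benchmarks exercise - two regions overlap iff each starts before the other ends - using sqlite3 and hypothetical table names:

        import sqlite3

        # Hypothetical genomic-region schema; not RegMap's actual SQL.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE peaks (chrom TEXT, start_pos INT, end_pos INT);
            CREATE TABLE genes (chrom TEXT, start_pos INT, end_pos INT, name TEXT);
            INSERT INTO peaks VALUES ('chr1', 100, 250), ('chr1', 900, 950);
            INSERT INTO genes VALUES ('chr1', 200, 400, 'GENE_A'),
                                     ('chr1', 500, 800, 'GENE_B');
        """)

        # Overlap predicate: a.start <= b.end AND a.end >= b.start.
        rows = con.execute("""
            SELECT p.start_pos, p.end_pos, g.name
              FROM peaks p
              JOIN genes g
                ON p.chrom = g.chrom
               AND p.start_pos <= g.end_pos
               AND p.end_pos >= g.start_pos
        """).fetchall()
        print(rows)  # -> [(100, 250, 'GENE_A')]: only the first peak overlaps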

  16. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    For programmatic accreditation by the Accreditation Council for Business Schools and Programs (ACBSP), business programs are required to meet Standard #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curriculum. This paper documents the benchmarking of education for managing innovation. Using a business simulation in a Year 3 business strategy class, Bachelor of Business learners explored the research and development, production, and marketing of a technology product in a simulated environment. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  17. Transparency benchmarking on audio watermarks and steganography

    Science.gov (United States)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack-based evaluation of digital watermarking algorithms. For this purpose, the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results, a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in the time, frequency and wavelet domains) as well as an attack-based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose, selected attacks from the SMBA suite are modified by adding transparency-enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms under the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.
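
    The paper's transparency scoring rests on a psychoacoustic model whose details the abstract omits; as a crude, hedged stand-in, the sketch below scores transparency with a plain signal-to-noise ratio between an original signal and its watermarked (or attacked) version:

        import numpy as np

        def snr_db(original, processed):
            # Crude transparency proxy; real suites such as SMBA weight the
            # error psychoacoustically rather than uniformly.
            noise = processed - original
            return 10.0 * np.log10(np.sum(original**2) / np.sum(noise**2))

        # Hypothetical 1 kHz tone plus a faint watermark-like perturbation.
        t = np.linspace(0.0, 1.0, 44100, endpoint=False)
        original = np.sin(2 * np.pi * 1000.0 * t)
        marked = original + 1e-3 * np.random.default_rng(1).normal(size=t.size)
        print(f"{snr_db(original, marked):.1f} dB")  # higher = more transparent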

  18. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and is thus responsible for the excesses. Putting this single person out of action in most cases de-escalates the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, once algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures for multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  19. Selecting Superior De Novo Transcriptome Assemblies: Lessons Learned by Leveraging the Best Plant Genome.

    Directory of Open Access Journals (Sweden)

    Loren A Honaas

    Whereas de novo assemblies of RNA-Seq data are being published for a growing number of species across the tree of life, there are currently no broadly accepted methods for evaluating such assemblies. Here we present a detailed comparison of 99 transcriptome assemblies, generated with 6 de novo assemblers: CLC, Trinity, SOAP, Oases, ABySS and NextGENe. Controlled analyses of de novo assemblies of the Arabidopsis thaliana and Oryza sativa transcriptomes provide new insights into the strengths and limitations of transcriptome assembly strategies. We find that the leading assemblers generate reassuringly accurate assemblies for the majority of transcripts. At the same time, we find a propensity for assemblers to fail to fully assemble highly expressed genes. Surprisingly, the incidence of true chimeric assemblies is very low for all assemblers. Normalized libraries are reduced in highly abundant transcripts, but they also lack thousands of low-abundance transcripts. We conclude that the quality of de novo transcriptome assemblies is best assessed through a combination of metrics: (1) the proportion of reads mapping to an assembly, (2) recovery of conserved, widely expressed genes, (3) N50 length statistics, and (4) the total number of unigenes. We provide benchmark Illumina transcriptome data and introduce SCERNA, a broadly applicable modular protocol for de novo assembly improvement. Finally, our de novo assembly of the Arabidopsis leaf transcriptome revealed ~20 putative Arabidopsis genes lacking in the current annotation.
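
    Of the four metrics listed, N50 is the one readers most often recompute themselves: it is the contig length L such that contigs of length >= L contain at least half of all assembled bases. A minimal sketch with made-up contig lengths:

        def n50(lengths):
            # Walk contigs longest-first until half the assembled bases are covered.
            total = sum(lengths)
            running = 0
            for length in sorted(lengths, reverse=True):
                running += length
                if 2 * running >= total:
                    return length
            return 0

        # Hypothetical unigene lengths from a de novo transcriptome assembly.
        contigs = [5000, 3000, 2000, 1500, 800, 400, 200]
        print(n50(contigs))  # -> 3000 (5000 + 3000 covers half of 12900 bases)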

  20. Genome Sequence Databases (Overview): Sequencing and Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lapidus, Alla L.

    2009-01-01

    From the time its role in heredity was discovered, DNA has been generating interest among scientists from different fields of knowledge: physicists have studied the three-dimensional structure of the DNA molecule, biologists have tried to decode the secrets of life hidden within these long molecules, and technologists have invented and improved methods of DNA analysis. The analysis of the nucleotide sequence of DNA occupies a special place among the methods developed. Thanks to the variety of sequencing technologies available, the process of decoding the sequence of genomic DNA (or whole genome sequencing) has become robust and inexpensive. Meanwhile, the assembly of whole genome sequences remains a challenging task. In addition to the need to assemble millions of DNA fragments of different lengths (from 35 bp (Solexa) to 800 bp (Sanger)), great interest in the analysis of microbial communities (metagenomes) of different complexities raises new problems and pushes new requirements for sequence assembly tools to the forefront. The genome assembly process can be divided into two steps: draft assembly and assembly improvement (finishing). Although automatically performed (draft) assembly is capable of covering up to 98% of the genome, in most cases it still contains incorrectly assembled reads, and the error rate of the consensus sequence produced at this stage is about 1 error/2,000 bp. A finished genome represents an assembly of much higher accuracy (with no gaps or incorrectly assembled regions) and quality (~1 error/10,000 bp), validated through a number of computational and laboratory experiments.