WorldWideScience

Sample records for assembly computational benchmark

  1. A VVER-1000 LEU and MOX assembly computational benchmark analysis using the lattice burnup code EXCEL

    Energy Technology Data Exchange (ETDEWEB)

    Thilagam, L. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)], E-mail: thilagam@igcar.gov.in; Sunil Sunny, C. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)]; Jagannathan, V. [Light Water Reactor Physics Section, Reactor Physics Design Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India)], E-mail: v_jagan1952@rediffmail.com; Subbaiah, K.V. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)]

    2009-05-01

    Utilization of Mixed Uranium-Plutonium Oxide (MOX) fuel in VVER-1000 reactors envisages the core physics analysis using computational methods and validation of the related computer codes. Towards this objective, an international experts group has been established at OECD/NEA. The experts group facilitates sharing of existing information on physics parameters and fuel behaviour. Several benchmark exercises have been proposed by them with the intent to investigate the core physics behaviour of a VVER-1000 reactor loaded with 2/3rd of low enriched uranium (LEU) fuel assemblies (FA) and 1/3rd of weapons grade mixed oxide (MOX) FA. In the present study an attempt is made to analyse 'A VVER-1000 LEU and MOX Assembly Computational Benchmark' and predict the neutronics behaviour at the lattice level. The lattice burnup code EXCEL, developed at the Light Water Reactor Physics Section, BARC, is employed for this task. The EXCEL code uses the 172 energy group 'JEFF31GX' cross-section library in WIMS-D format. Assembly level fuel depletion calculations are performed up to a burnup of 40 MWD/kg of heavy metal (HM). Studies are made for the parametric variations of fuel and moderator temperatures, coolant density and boron content in the coolant. Both operational and off-normal states are analysed to determine the corresponding infinite neutron multiplication factor (k{sub {infinity}}). Pin wise isotopic compositions are computed as a function of burnup. Isotopic compositions in different annular regions of Uranium-Gadolinium (UGD) pin, fission rate distributions in UGD, UO{sub 2} and MOX pin cells are also computed. The predicted results are compared with the benchmark mean results.
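
    A minimal illustrative sketch (Python) of the kind of parametric scan described above: a driver loop over fuel temperature, moderator temperature, coolant density and boron concentration that tabulates k-infinity per state. The parameter values and the k_infinity stub are assumptions for illustration only; a real driver would invoke the lattice code for each state.

        import itertools

        # Illustrative state-parameter grid (example values, not the benchmark's).
        fuel_temp_K      = [600.0, 900.0, 1200.0]
        moderator_temp_K = [560.0, 578.0, 600.0]
        coolant_density  = [0.66, 0.72]          # g/cm3
        boron_ppm        = [0.0, 600.0, 1300.0]

        def k_infinity(tf, tm, rho, boron):
            """Placeholder for a lattice-code call (one run per state).

            A real driver would write the code's input deck here, execute it and
            parse k-infinity from the output; this stub only marks the interface.
            """
            raise NotImplementedError("invoke the lattice code for this state")

        def scan():
            for tf, tm, rho, b in itertools.product(
                    fuel_temp_K, moderator_temp_K, coolant_density, boron_ppm):
                try:
                    kinf = k_infinity(tf, tm, rho, b)
                except NotImplementedError:
                    kinf = float("nan")  # no lattice code attached in this sketch
                print(f"Tf={tf:7.1f} K  Tm={tm:6.1f} K  rho={rho:4.2f}  "
                      f"B={b:6.1f} ppm  k_inf={kinf}")

        if __name__ == "__main__":
            scan()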

  2. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  3. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
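
    A minimal sketch (Python) of the execution-time estimation idea described above: merging a machine characterization (cost per abstract operation) with a program characterization (dynamic operation counts) reduces to a weighted sum. The operation classes and numbers below are hypothetical.

        # Hypothetical machine characterization: seconds per abstract operation.
        machine = {"flop_add": 2.0e-9, "flop_mul": 3.0e-9, "mem_load": 5.0e-9,
                   "mem_store": 6.0e-9, "branch": 1.5e-9}

        # Hypothetical program characterization: dynamic operation counts.
        program = {"flop_add": 4.0e9, "flop_mul": 3.5e9, "mem_load": 6.0e9,
                   "mem_store": 2.0e9, "branch": 1.0e9}

        # Estimated run time is the sum over operation classes of count * cost.
        estimate = sum(program[op] * machine[op] for op in program)
        print(f"predicted execution time: {estimate:.2f} s")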

  4. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  5. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
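
    A minimal sketch (Python/NumPy) of the identity-circuit idea: compose gates with their adjoints so the net operation is the identity, then check the survival probability of the initial state. On an ideal simulator it is exactly 1; on hardware, deviations expose accumulated gate errors. The gate sequence below is an arbitrary example, not one of the paper's benchmark circuits.

        import numpy as np

        # Single-qubit gates used to build "do nothing" circuits.
        I = np.eye(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

        def identity_circuit(gates):
            """Compose each gate with its adjoint so the net operation is the identity."""
            u = I
            for g in gates:
                u = g @ u                    # apply the gate
            for g in reversed(gates):
                u = np.conj(g.T) @ u         # undo it with the adjoint
            return u

        def survival_probability(u, state=np.array([1.0, 0.0], dtype=complex)):
            """Probability of returning to the initial state; 1.0 for an ideal device."""
            return abs(np.vdot(state, u @ state)) ** 2

        if __name__ == "__main__":
            u = identity_circuit([H, X, H, X])
            print(f"ideal survival probability: {survival_probability(u):.6f}")
            # On real hardware the same sequence is executed and the measured
            # survival probability drops below 1 as gate errors accumulate.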

  6. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...
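
    A toy sketch (Python) of the underlying idea only, not the deployed protocol: with additive secret sharing over a prime field, a few computation servers can add shares locally so that only an aggregate statistic is ever reconstructed and no single party sees an individual input. All values are made up.

        import random

        P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative)

        def share(value, n=3):
            """Split an integer into n additive shares modulo P."""
            shares = [random.randrange(P) for _ in range(n - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        # Each input (a scaled performance figure) is shared among 3 servers.
        inputs = [812, 430, 977, 655]            # hypothetical performance values
        per_input = [share(v) for v in inputs]   # one share vector per input

        # Servers add their shares locally; only the aggregate is reconstructed.
        server_sums = [sum(col) % P for col in zip(*per_input)]
        total = reconstruct(server_sums)
        print("sum of inputs:", total, "mean:", total / len(inputs))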

  7. Improved benchmarks for computational motif discovery

    Directory of Open Access Journals (Sweden)

    Walseng Vegard

    2007-06-01

    Full Text Available Abstract Background An important step in annotation of sequenced genomes is the identification of transcription factor binding sites. More than a hundred different computational methods have been proposed, and it is difficult to make an informed choice. Therefore, robust assessment of motif discovery methods becomes important, both for validation of existing tools and for identification of promising directions for future research. Results We use a machine learning perspective to analyze collections of transcription factors with known binding sites. Algorithms are presented for finding position weight matrices (PWMs), IUPAC-type motifs and mismatch motifs with optimal discrimination of binding sites from remaining sequence. We show that for many data sets in a recently proposed benchmark suite for motif discovery, none of the common motif models can accurately discriminate the binding sites from remaining sequence. This may obscure the distinction between the potential performance of the motif discovery tool itself versus the intrinsic complexity of the problem we are trying to solve. Synthetic data sets may avoid this problem, but we show on some previously proposed benchmarks that there may be a strong bias towards a presupposed motif model. We also propose a new approach to benchmark data set construction. This approach is based on collections of binding site fragments that are ranked according to the optimal level of discrimination achieved with our algorithms. This allows us to select subsets with specific properties. We present one benchmark suite with data sets that allow good discrimination between positive and negative instances with the common motif models. These data sets are suitable for evaluating algorithms for motif discovery that rely on these models. We present another benchmark suite where PWM, IUPAC and mismatch motif models are not able to discriminate reliably between positive and negative instances. This suite could be used
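
    A minimal sketch (Python) of the PWM scoring on which this kind of discrimination rests: sum per-position log-odds scores over a candidate window and threshold the total. The 4 bp matrix, background model and threshold below are hypothetical.

        import math

        BACKGROUND = 0.25  # uniform background base frequencies (assumption)

        # Hypothetical position frequency matrix for a 4 bp motif (rows sum to 1).
        PFM = [
            {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},
            {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05},
            {"A": 0.05, "C": 0.80, "G": 0.10, "T": 0.05},
            {"A": 0.60, "C": 0.10, "G": 0.10, "T": 0.20},
        ]

        def log_odds(column, base, pseudo=1e-3):
            return math.log2((column[base] + pseudo) / BACKGROUND)

        def score_window(window):
            """Sum of per-position log-odds scores for one candidate site."""
            return sum(log_odds(col, base) for col, base in zip(PFM, window))

        def scan(sequence, threshold=3.0):
            """Report windows scoring above the discrimination threshold."""
            w = len(PFM)
            for i in range(len(sequence) - w + 1):
                s = score_window(sequence[i:i + w])
                if s >= threshold:
                    yield i, sequence[i:i + w], s

        if __name__ == "__main__":
            for pos, site, s in scan("TTAGCATTTAGCAGGA"):
                print(f"hit at {pos}: {site} score={s:.2f}")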

  8. GABenchToB: a genome assembly benchmark tuned on bacteria and benchtop sequencers.

    Science.gov (United States)

    Jünemann, Sebastian; Prior, Karola; Albersmeier, Andreas; Albaum, Stefan; Kalinowski, Jörn; Goesmann, Alexander; Stoye, Jens; Harmsen, Dag

    2014-01-01

    De novo genome assembly is the process of reconstructing a complete genomic sequence from countless small sequencing reads. Due to the complexity of this task, numerous genome assemblers have been developed to cope with different requirements and the different kinds of data provided by sequencers within the fast evolving field of next-generation sequencing technologies. In particular, the recently introduced generation of benchtop sequencers, like Illumina's MiSeq and Ion Torrent's Personal Genome Machine (PGM), popularized the easy, fast, and cheap sequencing of bacterial organisms to a broad range of academic and clinical institutions. With a strong pragmatic focus, here, we give a novel insight into the line of assembly evaluation surveys as we benchmark popular de novo genome assemblers based on bacterial data generated by benchtop sequencers. Therefore, single-library assemblies were generated, assembled, and compared to each other by metrics describing assembly contiguity and accuracy, and also by practice-oriented criteria as for instance computing time. In addition, we extensively analyzed the effect of the depth of coverage on the genome assemblies within reasonable ranges and the k-mer optimization problem of de Bruijn Graph assemblers. Our results show that, although both MiSeq and PGM allow for good genome assemblies, they require different approaches. They not only pair with different assembler types, but also affect assemblies differently regarding the depth of coverage where oversampling can become problematic. Assemblies vary greatly with respect to contiguity and accuracy but also by the requirement on the computing power. Consequently, no assembler can be rated best for all preconditions. Instead, the given kind of data, the demands on assembly quality, and the available computing infrastructure determines which assembler suits best. The data sets, scripts and all additional information needed to replicate our results are freely available at ftp://ftp.cebitec.uni-bielefeld.de/pub/GABenchToB.
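
    A minimal sketch (Python) of the contiguity metrics used in comparisons like this one (number of contigs, total length, largest contig, N50). The contig lengths below are invented for illustration.

        def assembly_metrics(contig_lengths):
            """Basic contiguity metrics of the kind used to compare assemblies."""
            lengths = sorted(contig_lengths, reverse=True)
            total = sum(lengths)
            running, n50 = 0, None
            for length in lengths:
                running += length
                if running >= total / 2:   # N50: length at which half the bases are covered
                    n50 = length
                    break
            return {"contigs": len(lengths), "total_bp": total,
                    "largest": lengths[0], "N50": n50}

        # Hypothetical contig lengths from two assemblers of the same genome.
        print(assembly_metrics([1_200_000, 800_000, 350_000, 90_000, 20_000]))
        print(assembly_metrics([400_000, 380_000, 300_000, 250_000, 200_000, 150_000]))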

  9. GABenchToB: a genome assembly benchmark tuned on bacteria and benchtop sequencers.

    Directory of Open Access Journals (Sweden)

    Sebastian Jünemann

    Full Text Available De novo genome assembly is the process of reconstructing a complete genomic sequence from countless small sequencing reads. Due to the complexity of this task, numerous genome assemblers have been developed to cope with different requirements and the different kinds of data provided by sequencers within the fast evolving field of next-generation sequencing technologies. In particular, the recently introduced generation of benchtop sequencers, like Illumina's MiSeq and Ion Torrent's Personal Genome Machine (PGM, popularized the easy, fast, and cheap sequencing of bacterial organisms to a broad range of academic and clinical institutions. With a strong pragmatic focus, here, we give a novel insight into the line of assembly evaluation surveys as we benchmark popular de novo genome assemblers based on bacterial data generated by benchtop sequencers. Therefore, single-library assemblies were generated, assembled, and compared to each other by metrics describing assembly contiguity and accuracy, and also by practice-oriented criteria as for instance computing time. In addition, we extensively analyzed the effect of the depth of coverage on the genome assemblies within reasonable ranges and the k-mer optimization problem of de Bruijn Graph assemblers. Our results show that, although both MiSeq and PGM allow for good genome assemblies, they require different approaches. They not only pair with different assembler types, but also affect assemblies differently regarding the depth of coverage where oversampling can become problematic. Assemblies vary greatly with respect to contiguity and accuracy but also by the requirement on the computing power. Consequently, no assembler can be rated best for all preconditions. Instead, the given kind of data, the demands on assembly quality, and the available computing infrastructure determines which assembler suits best. The data sets, scripts and all additional information needed to replicate our results are freely

  10. VVER-1000 MOX Core Computational Benchmark: Specification and Results

    National Research Council Canada - National Science Library

    Mikhail Kalugin; Eugeny Gomin; Dmitry Oleynik

    2006-01-01

    This report presents the VVER MOX Core Computational Benchmark Specification and Results, which was proposed as a benchmark within the OECD/NEA Expert Group on Reactor-based Plutonium Disposition (TFRPD...

  11. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  12. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
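
    As a sketch of how such relative figures are formed (one plausible convention, Python, with placeholder numbers rather than the study's timings and costs):

        def percent_difference(slower, faster):
            """Relative difference of the slower value with respect to the faster one."""
            return 100.0 * (slower - faster) / faster

        # Placeholder wall-clock times (hours) and costs (USD); not the study's data.
        emr_hours, gce_hours = 6.1, 4.0
        emr_cost,  gce_cost  = 35.7, 10.0

        print(f"wall-clock: EMR {percent_difference(emr_hours, gce_hours):.1f}% slower than GCE")
        print(f"cost:       EMR {percent_difference(emr_cost, gce_cost):.1f}% more expensive than GCE")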

  13. Benchmarking

    OpenAIRE

    Meylianti S, Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry / functional benchmarking, process / generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable to a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  14. Benchmarking of de novo assembly algorithms for Nanopore data reveals optimal performance of OLC approaches.

    Science.gov (United States)

    Cherukuri, Yesesri; Janga, Sarath Chandra

    2016-08-22

    Improved DNA sequencing methods have transformed the field of genomics over the last decade. This has become possible due to the development of inexpensive short read sequencing technologies which have now resulted in three generations of sequencing platforms. More recently, a new fourth generation of Nanopore-based single-molecule sequencing technology was developed, based on the MinION(®) sequencer, which is portable, inexpensive and fast. It is capable of generating reads of length greater than 100 kb. Though it has many specific advantages, the two major limitations of the MinION reads are high error rates and the need for the development of downstream pipelines. The algorithms for error correction have already emerged, while development of pipelines is still at a nascent stage. In this study, we benchmarked available assembler algorithms to find an appropriate framework that can efficiently assemble Nanopore sequenced reads. To address this, we employed genome-scale Nanopore sequenced datasets available for E. coli and yeast genomes respectively. In order to comprehensively evaluate multiple algorithmic frameworks, we included assemblers based on de Bruijn graphs (Velvet and ABySS), Overlap Layout Consensus (OLC) (Celera) and Greedy extension (SSAKE) approaches. We analyzed the quality and accuracy of the assemblies as well as the computational performance of each of the assemblers included in our benchmark. Our analysis revealed that the OLC-based algorithm, Celera, could generate a high quality assembly with ten times higher N50 and mean contig values, as well as one-fifth the total number of contigs, compared to other tools. Celera was also found to exhibit an average genome coverage of 12% in the E. coli dataset and 70% in the yeast dataset, as well as relatively short run times. In contrast, the de Bruijn graph based assemblers Velvet and ABySS generated assemblies of moderate quality in less time when there was no limitation on memory allocation, while greedy

  15. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption obviously gives good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. These applications, which require a fair amount of I/O to move data between main memory and secondary storage, are more indicative of how a Massively Parallel Processor (MPP) system is used. Given this scenario, a well-designed I/O architecture plays a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is intended as a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data to/from main memory and disk. Various methods were introduced in the report for performing I/O operations. The I/O method to use depends on the design of the out-of-core algorithm. Conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm; out-of-core algorithms must therefore be designed as such from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm was selected for the main loop, the number of I/O operations involved will be linear in the number of columns. This is due to the data access
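
    A small in-memory sketch (Python/NumPy) of the right-looking blocked LU factorization referred to above, without pivoting. In the out-of-core variant, step 3 would read, update and write back the trailing submatrix panel by panel rather than holding it in memory; this illustrative version keeps everything resident.

        import numpy as np

        def blocked_lu(A, nb=4):
            """Right-looking blocked LU factorization without pivoting (returns packed L\\U)."""
            A = A.copy()
            n = A.shape[0]
            for k in range(0, n, nb):
                end = min(k + nb, n)
                # 1) unblocked LU of the current panel A[k:, k:end]
                for j in range(k, end):
                    A[j + 1:, j] /= A[j, j]
                    A[j + 1:, j + 1:end] -= np.outer(A[j + 1:, j], A[j, j + 1:end])
                # 2) form the unit-lower triangular block and solve for the U block row
                L11 = np.tril(A[k:end, k:end], -1) + np.eye(end - k)
                A[k:end, end:] = np.linalg.solve(L11, A[k:end, end:])
                # 3) right-looking update of the trailing submatrix
                A[end:, end:] -= A[end:, k:end] @ A[k:end, end:]
            return A

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            M = rng.random((12, 12)) + 12 * np.eye(12)  # diagonally dominant: no pivoting needed
            F = blocked_lu(M)
            L = np.tril(F, -1) + np.eye(12)
            U = np.triu(F)
            print("max |LU - M| =", np.abs(L @ U - M).max())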

  16. The design of a scalable, fixed-time computer benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM{trademark}, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
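
    A minimal sketch (Python) of the fixed-time principle: instead of timing a fixed problem, grow the problem size until a fixed time budget is exceeded and report the largest size completed. The workload and the one-second budget are placeholders, not SLALOM itself.

        import time

        TIME_BUDGET_S = 1.0  # fixed time budget (placeholder; SLALOM used one minute)

        def workload(n):
            """Stand-in for the real benchmark problem; cost grows with n."""
            s = 0.0
            for i in range(1, n + 1):
                s += (i * i) % 7
            return s

        def fixed_time_benchmark():
            """Report the largest problem size solvable within the fixed budget."""
            n, best = 1, 0
            while True:
                t0 = time.perf_counter()
                workload(n)
                elapsed = time.perf_counter() - t0
                if elapsed > TIME_BUDGET_S:
                    break
                best = n
                n *= 2          # a real harness would refine this search
            return best

        if __name__ == "__main__":
            print("largest problem size within budget:", fixed_time_benchmark())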

  17. Analysis of experiments in the Phase III GCFR benchmark critical assembly

    Energy Technology Data Exchange (ETDEWEB)

    Hess, A.L.; Baylor, K.J.

    1980-04-01

    Experiments carried out in the third gas-cooled fast breeder reactor (GCFR) benchmark critical assembly on the Zero Power Reactor-9 at Argonne National Laboratory were analyzed using methods and computer codes employed routinely for design and performance evaluations on power-plant GCFR cores. The program for the Phase III GCFR assembly, with a 1900-liter, three-enrichment zone core, included measurements of reaction-rate profiles in a typical power-flattened design, studies of material reactivity coefficients, reaction ratio and breeding parameter determinations, and comparison of pin with plate fuel loadings. Calculated parameters to compare with all of the measured results were obtained using 10-group cross sections based on ENDF/B-4 and two-dimensional diffusion theory, with adjustments for fuel-cell heterogeneity and void-lattice streaming effects.

  18. VVER-1000 MOX Core Computational Benchmark analysis using indigenous codes EXCEL, TRIHEX-FA and HEXPIN

    Energy Technology Data Exchange (ETDEWEB)

    Thilagam, L. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)], E-mail: thilagam@igcar.gov.in; Jagannathan, V. [Light Water Reactors Physics Section, Reactor Physics Design Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India)], E-mail: v_jagan1952@rediffmail.com; Sunil sunny, C.; Subbaiah, K.V. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)

    2009-10-15

    Validation studies based on the analysis of theoretical benchmarks play a key role in the identification of deficiencies in the reactor physics design computational codes and the associated nuclear data libraries. Implementation of improvements, if any, in theoretical models and the choice of appropriate nuclear data libraries help in enhancing the accuracy of calculations. As part of the effort for the validation of computer codes for plutonium utilization in VVER type reactors, the indigenous codes EXCEL, TRIHEX-FA and HEXPIN, developed at the Light Water Reactor Physics Section (LWRPS), RPDD, BARC, and the associated nuclear data library (JEF22XS), were employed to analyse the 'VVER-1000 MOX Core Computational Benchmark'. The few-group homogenized parameters of the assembly cell or of individual lattice cells were obtained with the hexagonal lattice burnup code EXCEL, and the core diffusion calculations were then performed using the hexagonal assembly geometry code TRIHEX-FA or the pin-by-pin diffusion code HEXPIN. A VVER-1000 reactor core loaded with 2/3 low-enriched uranium (LEU) fuel assemblies (FAs) and 1/3 weapons-grade MOX FAs was investigated. Effective multiplication factors and assembly average fission reaction rate distributions have been calculated for various reactor state descriptions using the 3-D diffusion theory codes TRIHEX-FA and HEXPIN. Further, detailed pin-by-pin fission reaction rate distributions for a few selected assemblies were estimated for the normal working state of the reactor using the pin-by-pin core simulation code HEXPIN. The results were compared with the reported Monte Carlo (MC) values of the benchmark, and in most cases good agreement was observed.

  19. Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder

    Directory of Open Access Journals (Sweden)

    Xiaobo Liu

    2017-01-01

    Full Text Available Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using three different-diameter HEU metal annuli (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties (0.00057, 0.00058 and 0.00057, respectively) and the biases of the benchmark models (-0.00286, -0.00242 and -0.00168, respectively) were determined, and the experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmark results were accepted for inclusion in the ICSBEP Handbook.

  20. Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder

    Energy Technology Data Exchange (ETDEWEB)

    Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.

    2016-09-01

    Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using three different-diameter HEU metal annuli (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties (0.00055, 0.00055 and 0.00055, respectively) and the biases of the detailed benchmark models (-0.00179, -0.00189 and -0.00114, respectively) were determined, and the experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.

  1. Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder

    Science.gov (United States)

    Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.

    2017-09-01

    Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using three different-diameter HEU metal annuli (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties (0.00057, 0.00058 and 0.00057, respectively) and the biases of the benchmark models (-0.00286, -0.00242 and -0.00168, respectively) were determined, and the experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmark results were accepted for inclusion in the ICSBEP Handbook.
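
    A small sketch (Python) of the comparison convention behind the quoted agreement: the calculated keff is compared with the benchmark-model keff as a relative difference. The keff values below are hypothetical, not taken from the record.

        def relative_difference_pcm(k_calc, k_benchmark):
            """(C - E)/E expressed in percent and in pcm (1 pcm = 1e-5 in k)."""
            rel = (k_calc - k_benchmark) / k_benchmark
            return 100.0 * rel, 1.0e5 * rel

        # Hypothetical calculated vs. benchmark-model keff values (not from the record).
        cases = {"15-9 in": (1.0012, 1.0000), "15-7 in": (0.9991, 1.0000), "13-7 in": (1.0005, 1.0000)}
        for name, (c, e) in cases.items():
            pct, pcm = relative_difference_pcm(c, e)
            print(f"{name}: C/E - 1 = {pct:+.3f}%  ({pcm:+.0f} pcm)  within 0.2%: {abs(pct) < 0.2}")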

  2. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 & 2, Theta and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  3. Benchmarking Neuromorphic Vision: Lessons Learnt from Computer Vision

    Directory of Open Access Journals (Sweden)

    Cheston eTan

    2015-10-01

    Full Text Available Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, and algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  4. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  5. The new deterministic 3-D radiation transport code Multitrans: C5G7 MOX fuel assembly benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Kotiluoto, P. [VTT Technical Research Centre (Finland)

    2003-07-01

    The novel deterministic three-dimensional radiation transport code MultiTrans is based on combination of the advanced tree multigrid technique and the simplified P3 (SP3) radiation transport approximation. In the tree multigrid technique, an automatic mesh refinement is performed on material surfaces. The tree multigrid is generated directly from stereo-lithography (STL) files exported by computer-aided design (CAD) systems, thus allowing an easy interface for construction and upgrading of the geometry. The deterministic MultiTrans code allows fast solution of complicated three-dimensional transport problems in detail, offering a new tool for nuclear applications in reactor physics. In order to determine the feasibility of a new code, computational benchmarks need to be carried out. In this work, MultiTrans code is tested for a seven-group three-dimensional MOX fuel assembly transport benchmark without spatial homogenization (NEA C5G7 MOX). (author)

  6. Benchmarking severe accident computer codes for heavy water reactor applications

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [International Atomic Energy Agency, Vienna (Austria)

    2010-07-01

    Consideration of severe accidents at a nuclear power plant (NPP) is an essential component of the defence in depth approach used in nuclear safety. Severe accident analysis involves very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. International cooperative research programmes are established by the IAEA in areas that are of common interest to a number of Member States. These co-operative efforts are carried out through coordinated research projects (CRPs), typically 3 to 6 years in duration, and often involving experimental activities. Such CRPs allow a sharing of efforts on an international basis, foster team-building and benefit from the experience and expertise of researchers from all participating institutes. The IAEA is organizing a CRP on benchmarking severe accident computer codes for heavy water reactor (HWR) applications. The CRP scope includes defining the severe accident sequence and conducting benchmark analyses for HWRs, evaluating the capabilities of existing computer codes to predict important severe accident phenomena, and suggesting necessary code improvements and/or new experiments to reduce uncertainties. The CRP has been planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Groups on Advanced Technologies for HWRs. (author)

  7. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    Fusion neutronics benchmark experiments were performed for vanadium and a vanadium alloy using a slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations using the evaluated nuclear data of JENDL-3.2, JENDL Fusion File and FENDL/E-1.0. (author)

  8. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of the current computer code and data library combinations for the neutron multiplication factor (k{sub eff}) of a layer of irradiated BWR fuel assembly array model. In total 22 benchmark problems are proposed for calculations of k{sub eff}. The effects of following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and axial burnup profile, and inclusion of axial profile of void fraction or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of k{sub eff} values calculated by the participants from the mean value is almost within the band of {+-}1%{delta}k/k. The deviations from the averaged calculated fission rate profiles are found to be within {+-}5% for most cases. (author)

  9. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  10. Benchmark exercise for fluid flow simulations in a liquid metal fast reactor fuel assembly

    Energy Technology Data Exchange (ETDEWEB)

    Merzari, E., E-mail: emerzari@anl.gov [Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Fischer, P. [Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Yuan, H. [Nuclear Engineering Division, Argonne National Laboratory, Lemont, IL (United States); Van Tichelen, K.; Keijers, S. [SCK-CEN, Boeretang 200, Mol (Belgium); De Ridder, J.; Degroote, J.; Vierendeels, J. [Ghent University, Ghent (Belgium); Doolaard, H.; Gopala, V.R.; Roelofs, F. [NRG, Petten (Netherlands)

    2016-03-15

    Highlights: • A EURATOM-US INERI consortium has performed a benchmark exercise related to fast reactor assembly simulations. • LES calculations for a wire-wrapped rod bundle are compared with RANS calculations. • Results show good agreement for velocity and cross flows. - Abstract: As part of a U.S. Department of Energy International Nuclear Energy Research Initiative (I-NERI), Argonne National Laboratory (Argonne) is collaborating with the Dutch Nuclear Research and consultancy Group (NRG), the Belgian Nuclear Research Centre (SCK·CEN), and Ghent University (UGent) in Belgium to perform and compare a series of fuel-pin-bundle calculations representative of a fast reactor core. A wire-wrapped fuel bundle is a complex configuration for which little data is available for verification and validation of new simulation tools. UGent and NRG performed their simulations with commercially available computational fluid dynamics (CFD) codes. The high-fidelity Argonne large-eddy simulations were performed with Nek5000, used for CFD in the Simulation-based High-efficiency Advanced Reactor Prototyping (SHARP) suite. SHARP is a versatile tool that is being developed to model the core of a wide variety of reactor types under various scenarios. It is intended both to serve as a surrogate for physical experiments and to provide insight into experimental results. Comparison of the results obtained by the different participants with the reference Nek5000 results shows good agreement, especially for the cross-flow data. The comparison also helps highlight issues with current modeling approaches. The results of the study will be valuable in the design and licensing process of MYRRHA, a flexible fast research reactor under design at SCK·CEN that features wire-wrapped fuel bundles cooled by lead-bismuth eutectic.

  11. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  12. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; empirical results, however, suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in the planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
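
    A minimal sketch (Python/NumPy) of the gradient-sparsity quantity that the critical number of projections is related to: the fraction of pixels with non-zero finite-difference gradient magnitude. The phantom below is a stand-in for a beadpack slice.

        import numpy as np

        def gradient_sparsity(image, tol=1e-8):
            """Fraction of pixels with non-zero (finite-difference) gradient magnitude."""
            gx = np.diff(image, axis=0, append=image[-1:, :])
            gy = np.diff(image, axis=1, append=image[:, -1:])
            mag = np.hypot(gx, gy)
            return np.count_nonzero(mag > tol) / image.size

        # Piecewise-constant phantom: sparse gradient (non-zeros only at edges).
        phantom = np.zeros((64, 64))
        phantom[16:48, 16:48] = 1.0
        print(f"gradient sparsity of the phantom: {gradient_sparsity(phantom):.4f}")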

  13. Beyond the NAS Parallel Benchmarks: Measuring Dynamic Program Performance and Grid Computing Applications

    Science.gov (United States)

    VanderWijngaart, Rob F.; Biswas, Rupak; Frumkin, Michael; Feng, Huiyu; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The contents include: 1) A brief history of NPB; 2) What is (not) being measured by NPB; 3) Irregular dynamic applications (UA Benchmark); and 4) Wide area distributed computing (NAS Grid Benchmarks-NGB). This paper is presented in viewgraph form.

  14. Benchmark calculation of deuterium critical assembly for WIMS-AECL and RFSP

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hang Bok; Min, Byung Joo

    2003-01-01

    The benchmark calculations have been performed for WIMS-AECL (WIMS) and RFSP using experimental data from the Deuterium Critical Assembly (DCA). The lattice parameters were generated for the 1.2 wt% enriched uranium and PuO{sub 2}-UO{sub 2} fuels based on the ENDF/B-V cross-section library of the WIMS code. The benchmark calculations were carried out for the criticality and void reactivity with the RFSP code using a rectangular mesh structure to model the whole reactor system, including both the fuel and the structural material. The simulation was performed in two energy groups and the results were compared to the measured values. The results show that WIMS/RFSP over-predicts the criticality and void reactivity by 0.67%{delta}k and 0.28%{delta}(1/k), respectively. The sensitivity calculation on the input parameters has shown that the prediction error can be reduced reasonably by updating the resonance cross-sections of the WIMS library and by using a finer axial mesh structure in the RFSP model.
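
    A small worked example (Python) of the two conventions used in the quoted discrepancies, %Δk for criticality and %Δ(1/k) for reactivity; the multiplication factors below are hypothetical, not the DCA data.

        def delta_k_percent(k_calc, k_exp):
            return 100.0 * (k_calc - k_exp)

        def delta_rho_percent(k_calc, k_exp):
            """Reactivity difference, 100 * (1/k_exp - 1/k_calc) = 100 * delta(1/k)."""
            return 100.0 * (1.0 / k_exp - 1.0 / k_calc)

        # Hypothetical calculated vs. measured multiplication factors (not DCA data).
        k_exp, k_calc = 1.0000, 1.0067
        print(f"over-prediction: {delta_k_percent(k_calc, k_exp):.2f} %dk, "
              f"{delta_rho_percent(k_calc, k_exp):.2f} %d(1/k)")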

  15. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main causes suppressing the parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
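
    A minimal sketch (Python) of the latency-bandwidth communication model behind the Level 2 diagnosis: each message pays a fixed start-up cost plus a size-dependent transfer cost, so many small messages are start-up dominated. The latency and bandwidth values are assumptions.

        def comm_time(message_bytes, latency_s=40e-6, bandwidth_Bps=150e6):
            """Point-to-point communication time: start-up latency plus transfer time."""
            return latency_s + message_bytes / bandwidth_Bps

        # If the number of messages grows with both processors and particles,
        # total start-up cost grows with their product while per-message payloads shrink.
        for msg_bytes in (100, 10_000, 1_000_000):
            t = comm_time(msg_bytes)
            startup_share = 40e-6 / t
            print(f"{msg_bytes:>9} B: {t*1e6:8.1f} us  (start-up share {startup_share:5.1%})")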

  16. Benchmark Generator for the IEEE WCCI-2014 Competition on Evolutionary Computation for Dynamic Optimization Problems: Dynamic Rotation Peak Benchmark Generator (DRPBG) and Dynamic Composition Benchmark Generator (DCBG)

    OpenAIRE

    Li, Changhe; Mavrovouniotis, Michalis; Yang, Shengxiang; Yao, Xin

    2013-01-01

    Based on our previous benchmark generator for the IEEE CEC’12 Competition on Dynamic Optimization, this report updates the two benchmark instances where two new features have been developed as well as a constraint to the benchmark instance of the dynamic rotation peak benchmark generator. The source code in C++ language for the two benchmark instances is included in the library of EAlib, which is an open platform to test and compare the performances of EAs.

  17. A Computer Scientist’s Evaluation of Publically Available Hardware Trojan Benchmarks [code

    OpenAIRE

    Slayback, Scott (Michael)

    2015-01-01

    This code supplements the author's thesis: https://calhoun.nps.edu/handle/10945/47330 This archive contains 5 Verilog files developed as part of the thesis "A Computer Scientist’s Evaluation of Publically Available Hardware Trojan Benchmarks." These files are provided as an aid to researchers who wish to verify conclusions drawn as part of this thesis or conduct additional research into the RS232 benchmarks found at trust-hub.org. The first two files are module libraries whic...

  18. GAP: A computer program for gene assembly

    Energy Technology Data Exchange (ETDEWEB)

    Eisnstein, J.R.; Uberbacher, E.C.; Guan, X.; Mural, R.J.; Mann, R.C.

    1991-09-01

    A computer program, GAP (Gene Assembly Program), has been written to assemble and score hypothetical genes, given a DNA sequence containing the gene and the outputs of several other programs which analyze the sequence. These programs include the coding-recognition and splice-junction-recognition modules developed in this laboratory. GAP is a prototype of a planned system in which it will be integrated with an expert system and rule base. Initial tests of GAP have been carried out with four sequences, the exons of which have been determined by biochemical methods. The highest-scoring hypothetical genes for each of the four sequences had percent correct splice junctions ranging from 50 to 100% (average 81%) and percent correct bases ranging from 92 to 100% (average 96%). 9 refs., 1 tab.

  19. Benchmarking five computational methods for analyzing large photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    Gregersen, Niels; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    We benchmark five state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Convergence...

  20. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  1. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons in almost entire energies as well as secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  2. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  3. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood......, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number....... Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT, however empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV...

  4. Benchmarking of Computational Models for NDE and SHM of Composites

    Science.gov (United States)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  5. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  6. A computer scientist’s evaluation of publically available hardware Trojan benchmarks

    OpenAIRE

    Slayback, Scott M.

    2015-01-01

    Approved for public release; distribution is unlimited. Dr. Hassan Salmani and Dr. Mohammed Tehranipoor have developed a collection of publically available hardware Trojans, meant to be used as common benchmarks for the analysis of detection and mitigation techniques. In this thesis, we evaluate a selection of these Trojans from the perspective of a computer scientist with limited electrical engineering background. Note that this thesis is also intended to serve as a supplement to the exist...

  7. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  8. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    Science.gov (United States)

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of ... The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.

  9. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  10. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  11. Benchmark and partial validation testing of the FLASH computer code, Version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    Martian, P.; Smith, C.S.

    1993-09-01

    This document presents methods and results of benchmark testing (i.e., code-to-code comparisons) and partial validation testing (i.e., tests which compare field data to the computer-generated solutions) of the FLASH computer code, Version 3.0, which were conducted to determine if the code is ready for performance assessment studies of the Radioactive Waste Management Complex. Three test problems are presented that were designed to check computational efficiency, accuracy of the numerical algorithms, and the capability of the code to simulate diverse hydrological conditions. These test problems were designed to specifically test the code's ability to simulate (a) seasonal infiltration in response to meteorological conditions, (b) changing watertable elevations due to a transient areal source of water (i.e., influx from spreading basins), and (c) infiltration into fractured basalt as a result of seasonal water in drainage ditches. The FLASH simulations generally compared well with the benchmark codes, indicating good stability and acceptable computational efficiency while simulating a wide range of conditions. The code appears operational for modeling both unsaturated and saturated flow in fractured, heterogeneous porous media. However, the code failed to converge when an unsaturated-to-saturated transition occurred. Consequently, the code should not be used when this condition occurs or is expected to occur, i.e., when perched water is present or when infiltration rates exceed the saturated conductivity of the soil.

  12. Microstructure of cotton fibrous assemblies based on computed tomography

    Science.gov (United States)

    Jing, Hui; Yu, Weidong

    2017-12-01

    This paper describes for the first time the analysis of the inner microstructure of cotton fibrous assemblies using computed tomography. Microstructure parameters such as packing density and fractal dimension, as well as porosity, including open porosity, closed porosity and total porosity, are calculated based on 2D data from computed tomography. Values of packing density and fractal dimension are stable in randomly oriented fibrous assemblies, and there exists a satisfactory approximate linear relationship between them. Moreover, pore analysis indicates that porosity represents the tightness of fibrous assemblies and that open pores are the dominant type.
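
    As a minimal illustration of the parameters named above, packing density can be computed as the fibre area fraction of a segmented CT slice, and total porosity as its complement; distinguishing open from closed porosity additionally requires connectivity analysis of the pore space, which is not shown. The mask below is synthetic and purely illustrative, not data from the study.

        import numpy as np

        def packing_and_porosity(fibre_mask):
            # fibre_mask: 2D boolean array from a segmented CT slice (True = fibre material).
            packing_density = fibre_mask.mean()      # fibre area fraction
            total_porosity = 1.0 - packing_density   # void area fraction
            return packing_density, total_porosity

        # Illustrative segmented slice with roughly 30% fibre pixels.
        rng = np.random.default_rng(0)
        fibre_mask = rng.random((256, 256)) < 0.3
        print(packing_and_porosity(fibre_mask))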

  13. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. PMID:27190234

  14. Interim report on verification and benchmark testing of the NUFT computer code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K.H.; Nitao, J.J. [Lawrence Livermore National Lab., CA (United States); Kulshrestha, A. [Weiss Associates, Emeryville, CA (United States)

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  15. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  16. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Full Text Available Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet, very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources. This is an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria which determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests which will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  17. A Benchmark Dataset for SSVEP-Based Brain-Computer Interfaces.

    Science.gov (United States)

    Wang, Yijun; Chen, Xiaogang; Gao, Xiaorong; Gao, Shangkai

    2017-10-01

    This paper presents a benchmark steady-state visual evoked potential (SSVEP) dataset acquired with a 40-target brain-computer interface (BCI) speller. The dataset consists of 64-channel electroencephalogram (EEG) data from 35 healthy subjects (8 experienced and 27 naïve) while they performed a cue-guided target selecting task. The virtual keyboard of the speller was composed of 40 visual flickers, which were coded using a joint frequency and phase modulation (JFPM) approach. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The phase difference between two adjacent frequencies was . For each subject, the data included six blocks of 40 trials corresponding to all 40 flickers indicated by a visual cue in a random order. The stimulation duration in each trial was five seconds. The dataset can be used as a benchmark dataset to compare the methods for stimulus coding and target identification in SSVEP-based BCIs. Through offline simulation, the dataset can be used to design new system diagrams and evaluate their BCI performance without collecting any new data. The dataset also provides high-quality data for computational modeling of SSVEPs. The dataset is freely available from http://bci.med.tsinghua.edu.cn/download.html.
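
    To make the stimulus coding concrete, the sketch below generates the 40-target frequency grid stated in the record (8 to 15.8 Hz in 0.2 Hz steps) and a JFPM-style sinusoidal flicker for one target. The phase step between adjacent frequencies is not given in this record, so it is left as a placeholder parameter; the 60 Hz refresh rate is likewise an assumption for illustration only.

        import numpy as np

        N_TARGETS = 40
        FREQ_STEP = 0.2     # Hz, as stated in the record
        PHASE_STEP = 0.0    # rad; placeholder -- the actual step is not given in this record

        freqs = 8.0 + FREQ_STEP * np.arange(N_TARGETS)            # 8.0 ... 15.8 Hz
        phases = (PHASE_STEP * np.arange(N_TARGETS)) % (2 * np.pi)

        def jfpm_stimulus(k, refresh_hz=60, duration_s=5.0):
            # Sinusoidal luminance modulation for target k over one 5 s trial (illustrative).
            t = np.arange(int(duration_s * refresh_hz)) / refresh_hz
            return 0.5 * (1.0 + np.sin(2 * np.pi * freqs[k] * t + phases[k]))

        print(freqs[0], freqs[-1])   # 8.0 15.8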

  18. Paper- and computer-based workarounds to electronic health record use at three benchmark institutions

    Science.gov (United States)

    Flanagan, Mindy E; Saleem, Jason J; Millitello, Laura G; Russ, Alissa L; Doebbeling, Bradley N

    2013-01-01

    Background Healthcare professionals develop workarounds rather than using electronic health record (EHR) systems. Understanding the reasons for workarounds is important to facilitate user-centered design and alignment between work context and available health information technology tools. Objective To examine both paper- and computer-based workarounds to the use of EHR systems in three benchmark institutions. Methods Qualitative data were collected in 11 primary care outpatient clinics across three healthcare institutions. Data collection methods included direct observation and opportunistic questions. In total, 120 clinic staff and providers and 118 patients were observed. All data were analyzed using previously developed workaround categories and examined for potential new categories. Additionally, workarounds were coded as either paper- or computer-based. Results Findings corresponded to 10 of 11 workaround categories identified in previous research. All 10 of these categories applied to paper-based workarounds; five categories also applied to computer-based workarounds. One new category, no correct path (eg, a desired option did not exist in the computer interface, precipitating a workaround), was identified for computer-based workarounds. The most consistent reasons for workarounds across the three institutions were efficiency, memory, and awareness. Conclusions Consistent workarounds across institutions suggest common challenges in outpatient clinical settings and failures to accommodate these challenges in EHR design. An examination of workarounds provides insight into how providers adapt to limiting EHR systems. Part of the design process for computer interfaces should include user-centered methods particular to providers and healthcare settings to ensure uptake and usability. PMID:23492593

  19. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas

    2015-01-01

    preserving a rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) enhances the diversity of neural dynamics facilitating...... the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult, six degrees of freedom, manipulation task with a robot where assemblies need to learn...

  20. Neutronics benchmark for the Quad Cities-1 (Cycle 2) mixed oxide assembly irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, S.E.; Difilippo, F.C.

    1998-04-01

    Reactor physics computer programs are important tools that will be used to estimate mixed oxide fuel (MOX) physics performance in support of weapons grade plutonium disposition in US and Russian Federation reactors. Many of the computer programs used today have not undergone calculational comparisons to measured data obtained during reactor operation. Pin power, the buildup of transuranics, and depletion of gadolinium measurements were conducted (under Electric Power Research Institute sponsorship) on uranium and MOX pins irradiated in the Quad Cities-1 reactor in the 1970s. These measurements are compared to modern computational models for the HELIOS and SCALE computer codes. Good agreement on pin powers was obtained for both MOX and uranium pins. The agreement between measured and calculated values of transuranic isotopes was mixed, depending on the particular isotope.

  1. Thermal Hydraulic Computational Fluid Dynamics Simulations and Experimental Investigation of Deformed Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Mays, Brian [AREVA Federal Services, Lynchburg, VA (United States); Jackson, R. Brian [TerraPower, Bellevue, WA (United States)

    2017-03-08

    The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. This first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experiment data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Mass Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.

  2. Computing sextic centrifugal distortion constants by DFT: A benchmark analysis on halogenated compounds

    Science.gov (United States)

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi

    2017-05-01

    This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at a higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing levels of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at the DFT level of theory coupled to harmonic data computed at increasing levels of electronic correlation (up to the CCSD level of theory augmented by a perturbative estimate of the effects of connected triple excitations, CCSD(T)), were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at the MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the way to the study of more complex systems.

  3. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  4. Computational path planner for product assembly in complex environments

    Science.gov (United States)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade considerably in environments with complex product structure, narrow passages or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the false collisions of conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extension values assigned to each tree node and extension schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out, and comparisons are made between conventional path planning algorithms and the presented ones. The comparison results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
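
    For orientation, the sketch below shows the plain RRT extend loop that the refined and adaptive variants above build on: sample a configuration, extend the nearest tree node toward it by a fixed step, and keep the new node if the motion is collision-free. It is an illustrative 2D example with a user-supplied collision check; it does not implement the paper's ray-test collision detection or history-based biasing, and all parameter values are assumptions.

        import math
        import random

        def rrt(start, goal, collision_free, step=0.5, max_iters=2000, goal_tol=0.5,
                bounds=((0.0, 10.0), (0.0, 10.0))):
            # Minimal 2D rapidly-exploring random tree; returns a path (list of points) or None.
            nodes = [start]
            parent = {0: None}
            for _ in range(max_iters):
                sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
                i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
                near = nodes[i_near]
                d = math.dist(near, sample)
                if d == 0.0:
                    continue
                new = (near[0] + step * (sample[0] - near[0]) / d,
                       near[1] + step * (sample[1] - near[1]) / d)
                if not collision_free(near, new):
                    continue
                nodes.append(new)
                parent[len(nodes) - 1] = i_near
                if math.dist(new, goal) < goal_tol:
                    path, i = [], len(nodes) - 1
                    while i is not None:           # walk back to the root
                        path.append(nodes[i])
                        i = parent[i]
                    return path[::-1]
            return None

        # Obstacle-free example: a path should almost always be found.
        print(rrt((0.0, 0.0), (9.0, 9.0), collision_free=lambda a, b: True) is not None)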

  5. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark, since its two lowest-lying 1A1 states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.

  6. Computational fluid dynamics (CFD) round robin benchmark for a pressurized water reactor (PWR) rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Shin K., E-mail: paengki1@tamu.edu; Hassan, Yassin A.

    2016-05-15

    Highlights: • The capabilities of steady RANS models were directly assessed for full axial scale experiment. • The importance of mesh and conjugate heat transfer was reaffirmed. • The rod inner-surface temperature was directly compared. • The steady RANS calculations showed a limitation in the prediction of circumferential distribution of the rod surface temperature. - Abstract: This study examined the capabilities and limitations of steady Reynolds-Averaged Navier–Stokes (RANS) approach for pressurized water reactor (PWR) rod bundle problems, based on the round robin benchmark of computational fluid dynamics (CFD) codes against the NESTOR experiment for a 5 × 5 rod bundle with typical split-type mixing vane grids (MVGs). The round robin exercise against the high-fidelity, broad-range (covering multi-spans and entire lateral domain) NESTOR experimental data for both the flow field and the rod temperatures enabled us to obtain important insights into CFD prediction and validation for the split-type MVG PWR rod bundle problem. It was found that the steady RANS turbulence models with wall function could reasonably predict two key variables for a rod bundle problem – grid span pressure loss and the rod surface temperature – once mesh (type, resolution, and configuration) was suitable and conjugate heat transfer was properly considered. However, they over-predicted the magnitude of the circumferential variation of the rod surface temperature and could not capture its peak azimuthal locations for a central rod in the wake of the MVG. These discrepancies in the rod surface temperature were probably because the steady RANS approach could not capture unsteady, large-scale cross-flow fluctuations and qualitative cross-flow pattern change due to the laterally confined test section. Based on this benchmarking study, lessons and recommendations about experimental methods as well as CFD methods were also provided for the future research.

  7. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
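
    As a toy, single-machine illustration of the de Bruijn construction that GiGA distributes over Hadoop and Giraph, the sketch below builds the graph by mapping each (k-1)-mer to the (k-1)-mers that follow it in the reads. The reads and k value are made up for the example; the distributed, vertex-centric implementation itself is not shown.

        from collections import defaultdict

        def de_bruijn_graph(reads, k):
            # Map each (k-1)-mer to the list of (k-1)-mers that follow it in some read.
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])
            return graph

        for node, successors in de_bruijn_graph(["ACGTACGA", "GTACGAAC"], k=4).items():
            print(node, "->", successors)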

  8. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition.

    Science.gov (United States)

    Stallkamp, J; Schlipsing, M; Salmen, J; Igel, C

    2012-08-01

    Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exists. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data, and the CNNs outperformed the human test persons. Copyright © 2012 Elsevier Ltd. All rights reserved.
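
    For context, CNNs of the kind that performed best in this benchmark are built from stacked convolution and pooling layers followed by a classifier head. The sketch below is a tiny, illustrative PyTorch model for 43 sign classes, not any of the competition architectures; the 32x32 RGB input size and layer widths are assumptions for the example.

        import torch
        import torch.nn as nn

        class SmallSignNet(nn.Module):
            # Tiny illustrative CNN for 43-class sign classification on 32x32 RGB crops.
            def __init__(self, n_classes=43):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                )
                self.classifier = nn.Linear(64 * 8 * 8, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        logits = SmallSignNet()(torch.randn(4, 3, 32, 32))
        print(logits.shape)   # torch.Size([4, 43])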

  9. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible, and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent for this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  10. Verification Benchmarks to Assess the Implementation of Computational Fluid Dynamics Based Hemolysis Prediction Models.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin; Horner, Marc; Malinauskas, Richard A; Myers, Matthew R

    2015-09-01

    As part of an ongoing effort to develop verification and validation (V&V) standards for using computational fluid dynamics (CFD) in the evaluation of medical devices, we have developed idealized flow-based verification benchmarks to assess the implementation of commonly cited power-law based hemolysis models in CFD. The verification process ensures that all governing equations are solved correctly and the model is free of user and numerical errors. To perform verification for power-law based hemolysis modeling, analytical solutions for the Eulerian power-law blood damage model (which estimates the hemolysis index (HI) as a function of shear stress and exposure time) were obtained for Couette and inclined Couette flow models, and for Newtonian and non-Newtonian pipe flow models. Subsequently, CFD simulations of fluid flow and HI were performed using Eulerian and three different Lagrangian-based hemolysis models and compared with the analytical solutions. For all the geometries, the blood damage results from the Eulerian-based CFD simulations matched the Eulerian analytical solutions within ∼1%, which indicates successful implementation of the Eulerian hemolysis model. Agreement between the Lagrangian and Eulerian models depended upon the choice of the hemolysis power-law constants. For the commonly used values of power-law constants (α = 1.9-2.42 and β = 0.65-0.80), in the absence of flow acceleration, most of the Lagrangian models matched the Eulerian results within 5%. In the presence of flow acceleration (inclined Couette flow), moderate differences (∼10%) were observed between the Lagrangian and Eulerian models. This difference increased to greater than 100% as the beta exponent decreased. These simplified flow problems can be used as standard benchmarks for verifying the implementation of blood damage predictive models in commercial and open-source CFD codes. The current study only used power-law model as an illustrative example to emphasize the need
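
    For reference, one common form of the power-law blood damage model referenced above, consistent with the exponent ranges quoted (this is the generic textbook form; the constant C and the exact expression used in the study are not reproduced in this record), is

        \mathrm{HI}\,(\%) = C \, \tau^{\alpha} \, t^{\beta}

    where τ is the scalar shear stress, t the exposure time, and C an empirical constant; the Eulerian and Lagrangian model variants compared in the study differ in how this damage expression is accumulated over the flow field versus along individual pathlines.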

  11. TiD-Introducing and Benchmarking an Event-Delivery System for Brain-Computer Interfaces.

    Science.gov (United States)

    Breitwieser, Christian; Tavella, Michele; Schreuder, Martijn; Cincotti, Febo; Leeb, Robert; Muller-Putz, Gernot R

    2017-12-01

    In this paper, we present and analyze an event distribution system for brain-computer interfaces. Events are commonly used to mark and describe incidents during an experiment and are therefore critical for later data analysis or immediate real-time processing. The presented approach, called Tools for brain-computer interaction interface D (TiD), delivers messages in XML format via a bus-like system using transmission control protocol connections or shared memory. A dedicated server dispatches TiD messages to distributed or local clients. The TiD message is designed to be flexible and contains time stamps for event synchronization, whereas events describe incidents which occur during an experiment. TiD was tested extensively for stability and latency. The effect of an occurring event jitter was analyzed and benchmarked on a reference implementation under different conditions such as gigabit and 100-Mb Ethernet or Wi-Fi with different numbers of event receivers. A 3-dB signal attenuation, which occurs when averaging jitter-influenced trials aligned by events, starts to become visible at around 1-2 kHz in the case of a gigabit connection. Mean event distribution times across operating systems range from 0.3 to 0.5 ms for a gigabit network connection for 106 events. Results for other environmental conditions are available in this paper. References already using TiD for event distribution are provided, showing the applicability of TiD for event delivery with distributed or local clients.

  12. Inter-comparison of JEF-2.2 and JEFF-3.1 evaluated nuclear data through Monte Carlo analysis of VVER-1000 MOX Core Computational Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Thilagam, L., E-mail: thilagam@igcar.gov.i [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India); Karthikeyan, R., E-mail: rkarthi@barc.gov.i [Light Water Reactors Physics Section, Reactor Physics Design Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Jagannathan, V., E-mail: v_jagan1952@rediffmail.co [Light Water Reactors Physics Section, Reactor Physics Design Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Subbaiah, K.V.; Lee, S.M. [AERB-Safety Research Institute, Kalpakkam, Tamilnadu 603 102 (India)

    2010-02-15

    The nuclear data forms a vital component in reactor core physics computations. The nuclear data is evaluated and modified on a continuous basis by different nuclear data centres and laboratories worldwide. The work on upgradation of the nuclear data is being carried out using new evaluations obtained through experiments and theoretical models to enhance their accuracy. Use of different sets of cross-section data in the analysis of a benchmark problem is a source of strong feedback for further improvements in data by mutual comparison of results. These comparisons also help to find out the best evaluated cross-section data released. Towards this objective, an attempt has been made to inter-compare JEF-2.2 and JEFF-3.1 evaluated nuclear data through the Monte Carlo simulation of the 'VVER-1000 MOX Core Computational Benchmark'. This study deals with the calculation and inter-comparison of reactor parameters such as multiplication factors, cell average and assembly average fission reaction rate distributions estimated for various reactor state descriptions specified in the benchmark. Point-wise cross-section libraries processed from the JEF-2.2 and JEFF-3.1 evaluated data are used in the analysis. Concerning the multiplication factors and fission rate distributions, considerable differences are observed between the two libraries. While performing the MCNP calculations with JEFF-3.1 data, it is observed that the deviations of effective neutron multiplication factors (k{sub eff}) from those of the benchmark standard MCU results are lower by about 0.100% for most of the states than those computed using JEF-2.2. Fission rate distributions using JEFF-3.1 data are also found to have significant deviations of up to ±9.2% compared to calculations with its earlier version, JEF-2.2. Some interesting trends in the used nuclear data are identified from the discrepancies of the individual results. The cause for considerable changes in the calculated parameters are

  13. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data that has been obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control for sequencing and assembly, and several protocols have been developed to detect contaminations. Visualizations of genomic data have been generated in various formats, in addition to alignment, homology detection and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in improvement of the N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulent genes up to single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.
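
    N50, the headline contiguity metric quoted above, can be computed from a list of contig lengths as shown below. This is an illustrative helper only, not part of the described pipeline, and the example lengths are made up.

        def n50(contig_lengths):
            # Length of the shortest contig in the minimal set of longest contigs covering >= 50% of the assembly.
            lengths = sorted(contig_lengths, reverse=True)
            half_total = sum(lengths) / 2.0
            running = 0
            for length in lengths:
                running += length
                if running >= half_total:
                    return length

        print(n50([100, 200, 300, 400, 500]))   # -> 400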

  14. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of detail of the process or plant, i.e., (1) plant level; (2) process-group level; and (3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy tool from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and

  15. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactive gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and the composition and flow rate of the influent.
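
    The gas-to-liquid transfer described above is often modeled with a two-film expression; the following is a generic textbook-style sketch, not necessarily the exact closure used in the VRA model:

        \dot{m}_{g \rightarrow l} = k_L \, a \left( C^{*} - C_L \right), \qquad C^{*} = \frac{p_{\mathrm{O_2}}}{H}

    where k_L is the liquid-side mass-transfer coefficient, a the interfacial area per unit volume, C_L the dissolved oxygen concentration, p_O2 the local oxygen partial pressure, and H the Henry's law constant; the driving force vanishes as the liquid approaches saturation.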

  16. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  17. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  18. The Graphical Benchmark Information Service

    Directory of Open Access Journals (Sweden)

    Mark Papiani

    1995-01-01

    Full Text Available Unlike single-processor benchmarks, multiprocessor benchmarks can yield tens of numbers for each benchmark on each computer, as factors such as the number of processors and problem size are varied. A graphical display of performance surfaces therefore provides a satisfactory way of comparing results. The University of Southampton has developed the Graphical Benchmark Information Service (GBIS) on the World Wide Web (WWW) to interactively display graphs of user-selected benchmark results from the GENESIS and PARKBENCH benchmark suites.

  19. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, representing significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets represent the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next

  20. Computer simulation of viral-assembly and translocation

    Science.gov (United States)

    Mahalik, Jyoti Prakash

    We investigated four different problems using coarse-grained computational models: self-assembly of a single-stranded (ss) DNA virus, ejection dynamics of double-stranded (ds) DNA from phages, translocation of ssDNA through the MspA protein pore, and segmental dynamics of a polymer translocating through a synthetic nanopore. In the first part of the project, we investigated the self-assembly of a virus with and without its genome. A coarse-grained model was proposed for the viral subunit proteins and its genome (ssDNA). Langevin dynamics simulation and the replica exchange method were used to determine the kinetics and energetics of the self-assembly process, respectively. The self-assembly follows a nucleation-growth kind of mechanism. The ssDNA plays a crucial role in the self-assembly by acting as a template and enhancing the local concentration of the subunits. The presence of the genome does not change the mechanism of the self-assembly, but it reduces the nucleation time and enhances the growth rate by almost an order of magnitude. The second part of the project involves the investigation of the dynamics of the ejection of dsDNA from phages. A coarse-grained model was used for the phage and dsDNA. Langevin dynamics simulation was used to investigate the kinetics of the ejection. The ejection is a stochastic process and slow intermediate-rate kinetics was observed for most ejection trajectories. We discovered that the jamming of the DNA at the pore mouth at high packing fraction and for a disordered system is the reason for the intermediate slow kinetics. The third part of the project involves translocation of ssDNA through the MspA protein pore. The MspA protein pore has the potential for genome sequencing because of its ability to clearly distinguish the four different nucleotides based on their blockade current, but it is a challenge to use this pore for any practical application because of the very fast translocation time. We resolved the state of DNA translocation

  1. An easily assembled laboratory exercise in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Mylott, Elliot; Klepetka, Ryan; Dunlap, Justin C; Widenhorn, Ralf, E-mail: ralfw@pdx.edu [Department of Physics, Portland State University, PO Box 751, Portland, OR 97207 (United States)

    2011-09-15

    In this paper, we present a laboratory activity in computed tomography (CT) primarily composed of a photogate and a rotary motion sensor that can be assembled quickly and partially automates data collection and analysis. We use an enclosure made with a light filter that is largely opaque in the visible spectrum but mostly transparent to the near IR light of the photogate (880 nm) to scan objects hidden from the human eye. This experiment effectively conveys how an image is formed during a CT scan and highlights the important physical and imaging concepts behind CT such as electromagnetic radiation, the interaction of light and matter, artefacts and windowing. Like our setup, previous undergraduate level laboratory activities which teach the basics of CT have also utilized light sources rather than x-rays; however, they required a more extensive setup and used devices not always easily found in undergraduate laboratories. Our setup is easily implemented with equipment found in many teaching laboratories.
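
    As a rough illustration of how an image is formed from projections, the sketch below builds a sinogram of a toy phantom and reconstructs it by unfiltered back-projection using only numpy and scipy; the phantom, angles, and interpolation settings are arbitrary choices, and the activity described in the paper uses photogate intensity data rather than a synthetic image.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy phantom: a bright rectangle on a dark background
img = np.zeros((128, 128))
img[40:70, 55:90] = 1.0

angles = np.arange(0, 180, 2)      # projection angles in degrees

# Forward step: each projection is the column sum of the rotated object,
# mimicking attenuation measurements along parallel rays
sinogram = np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Reconstruction by (unfiltered) back-projection: smear each projection
# across the image plane and rotate it back into place
recon = np.zeros_like(img)
for a, proj in zip(angles, sinogram):
    smear = np.tile(proj, (img.shape[0], 1))          # constant along rays
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)
# 'recon' shows the rectangle, blurred; filtering the projections (FBP) sharpens it
```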

  2. PATHWAY ASSEMBLY ASSISTED BY COMPUTER: TEACHING ANAEROBIC GLYCOLYSIS

    Directory of Open Access Journals (Sweden)

    F. M Sarraipa

    2008-05-01

    Full Text Available Knowledge of metabolic pathways is required in higher education courses in the biological field. This work presents a computer-assisted approach for the self-study of metabolic pathways, based on their assembly, reaction by reaction. Anaerobic glycolysis was used as a model. The software was designed for users who have basic knowledge of enzymatic catalysis, and to be used with or without a teacher's help. Every reaction is detailed, and the student can move forward only after having assembled each reaction correctly. The software contains a tutorial to help users both with its use and with the correct assembly of each reaction. The software was field tested in the basic biochemistry disciplines with students of Physical Education, Nursing, Medicine and Biology from the State University of Campinas (UNICAMP), and in the physiology discipline with students of Physical Education from the Institute Adventist Sao Paulo (IASP). A MySQL database was structured to collect data on the software usage. Every action taken by the students was recorded. The statistical analysis showed that the number of tries decreases as the students move forward in the pathway assembly. The most difficult reactions, besides the first one, were the ones that presented pattern changes; for example, the sixth reaction was the first oxidation-reduction reaction. In the first reaction, the most frequent mistakes were using phosphohexose isomerase as the enzyme or forgetting to include ATP among the substrates. In the sixth reaction, the most frequent mistake was forgetting to include NAD+ among the substrates. The recorded data analysis can be used by teachers to give, in their lectures, special attention to the reactions where the students made more mistakes.

  3. Polymorphous Computing Architecture (PCA) Kernel Benchmark Measurements on the MIT Raw Microprocessor

    Science.gov (United States)

    2006-06-14

    ...grammer. The term "static" applied to the networks implies that the programmer must set up the communication pattern used by the switch in advance of ... performance of Raw using the dynamic network in such a context. ... ing out pieces upon allocation requests and reassuming responsibility for pieces ... free system call that requires hundreds of cycles. Finally, the benchmark performance could also improve through the use of more sophisticated data

  4. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    Science.gov (United States)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  5. An informatics benchmarking statement.

    Science.gov (United States)

    Pigott, K; de Lusignan, S; Rapley, A; Robinson, J; Pritchard-Copley, A

    2007-01-01

    Benchmarking statements provide a mechanism for making academic standards explicit within a subject area. They allow comparisons between courses to be based on learning outcomes rather than by defining a curriculum. No such statement has been produced for informatics. In the absence of any established benchmarking statements for informatics, a new biomedical informatics course at St. George's has developed a first benchmarking statement, which defines the skills, knowledge and understanding a biomedical informatics student should acquire by the time they complete the course. We reviewed national biomedical science and computing subject benchmarking statements, academic educational objectives, and national occupational competencies in informatics. We have developed a twenty-item benchmarking statement and this is available on-line at: http://www.gpinformatics.org/benchmark2006/. This benchmarking statement includes a definition and justification for all twenty statements. We found international educational objectives and national informatics competencies useful and these are mapped to each one. National subject benchmarks for computing and biomedical science were less useful and have not been systematically mapped. Benchmarking the skills, knowledge and understanding that a student should acquire during their course of study may be more useful than setting a standard curriculum. This benchmarking statement is a first step towards defining the learning outcomes and competencies a student of this discipline should acquire. The international informatics community should consider moving from a standard curriculum to an agreed subject benchmarking statement for medical, health and biomedical informatics.

  6. Assembly of Ultra-Dense Nanowire-Based Computing Systems

    National Research Council Canada - National Science Library

    Lieber, Charles M

    2006-01-01

    ..., with highly reliable defect- and fault-tolerant architecture. We have fabricated and assembled molecular-scale logic elements based on overlapping semiconducting nanowire arrays using novel wafer-scale assembly techniques...

  7. Research on assembly reliability control technology for computer numerical control machine tools

    Directory of Open Access Journals (Sweden)

    Yan Ran

    2017-01-01

    Full Text Available Nowadays, although more and more companies focus on improving the quality of computer numerical control machine tools, their reliability control still remains an unsolved problem. Since assembly reliability control is very important for product reliability assurance in China, a new key assembly process extraction method for computer numerical control machine tools is proposed, based on the integration of quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory. Firstly, assembly faults and the assembly reliability control flow of computer numerical control machine tools are studied. Secondly, quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory are integrated to build a scientific extraction model, by which the key assembly processes meeting both customer functional demands and the failure data distribution can be extracted; an example is given to illustrate the correctness and effectiveness of the method. Finally, an assembly reliability monitoring system is established based on the key assembly processes to realize and simplify this method.
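
    A highly simplified sketch of the kind of scoring such an integration implies is given below: each assembly process receives a QFD-style importance weight and triangular fuzzy severity/occurrence/detection ratings, which are defuzzified and multiplied into a criticality score. The process names, weights, and ratings are invented for illustration, and the actual extraction model in the article is more elaborate.

```python
# Toy ranking of assembly processes by a QFD-weighted, fuzzy FMECA-style score.
# Triangular fuzzy ratings (low, mode, high) are defuzzified by their centroid.
def centroid(tfn):
    low, mode, high = tfn
    return (low + mode + high) / 3.0

# (process, QFD importance weight, fuzzy severity, fuzzy occurrence, fuzzy detection)
# All entries below are hypothetical.
processes = [
    ("spindle mounting",   0.35, (6, 7, 8), (4, 5, 6), (3, 4, 5)),
    ("guideway alignment", 0.25, (7, 8, 9), (2, 3, 4), (4, 5, 6)),
    ("hydraulic piping",   0.15, (4, 5, 6), (5, 6, 7), (2, 3, 4)),
]

scores = {name: w * centroid(s) * centroid(o) * centroid(d)
          for name, w, s, o, d in processes}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} criticality ~ {score:7.1f}")
```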

  8. Neutron and gamma spectra measurements and calculations in benchmark spherical iron assemblies with 252Cf neutron source in the centre

    CERN Document Server

    Jansky, B; Turzik, Z; Kyncl, J; Cvachovec, F; Trykov, L A; Volkov, V S

    2002-01-01

    The neutron and gamma spectra measurements have been made for benchmark iron spherical assemblies with diameters of 30, 50 and 100 cm. The 252Cf neutron sources with different emissions were placed in the centre of the iron spheres. In the first stage of the project, independent laboratories took part in the leakage spectra measurements. The proton recoil method was used with stilbene crystals and hydrogen proportional counters. The working range of the spectrometers is from 0.01 to 16 MeV for neutrons and from 0.40 to 12 MeV for gamma rays. Corresponding calculations have been carried out. It is proposed to carefully analyse the leakage mixed neutron and gamma spectrum from the iron sphere of diameter 50 cm and then adopt that field as a standard.

  9. Vibroacoustic benchmarking; Vibroakustisches Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kaesler, R.; Prinzler, H. [Freudenberg Dichtungs- und Schwingungstechnik KG, Weinheim (Germany). Technische Entwicklungszentrum

    2000-03-01

    Improved driving comfort is one of the main enhancements in the automotive industry today. It affects not only the constructive design of a car and its equipment but also the vibroacoustic adaptation of the entire vehicle. For several years now, Freudenberg Dichtungs- und Schwingungstechnik has been benchmarking automobiles with respect to their vibroacoustic performance, with the aim of optimising the vibrational layout of engine mounting systems, chassis mounting concepts and finally the overall adaptation of the vehicle. The results of this benchmarking programme are listed according to vehicle class (luxury, mid-range, compact or subcompact class) and constitute an impressive vibroacoustic assessment of current developments in this field. (orig.)

  10. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
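
    The idea of a code verification benchmark based on a manufactured solution can be illustrated with a short sketch: choose u(x) = sin(pi x), derive the forcing term analytically, solve the discrete problem on successively finer grids, and confirm that the observed order of accuracy matches the formal order of the scheme. The solver below is a deliberately simple 1-D Poisson example, not one of the benchmarks proposed in the report.

```python
import numpy as np

# Manufactured solution u(x) = sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# so the forcing term for -u'' = f is f(x) = pi^2 sin(pi x).
def solve_poisson(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)                  # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2      # central differences
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))    # discretization error

errors = [solve_poisson(n) for n in (20, 40, 80, 160)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(3)]
print(orders)   # should approach 2, the formal order of the scheme
```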

  11. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    Energy Technology Data Exchange (ETDEWEB)

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  12. Integrated Quality Control of Precision Assemblies using Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro

    coordinate measuring machines (CMMs) when working with complex and fragile parts. This Ph.D. project at DTU Mechanical Engineering concerns the applicability of CT for quality control of precision assemblies. Investigations to quantify the accuracy of CT measurements, reference artefacts to correct...

  13. Neutron spectra measurement and calculations using data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 in iron benchmark assemblies

    Science.gov (United States)

    Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena

    2017-09-01

    The leakage neutron spectra measurements have been done on benchmark spherical assemblies - iron spheres with diameters of 20, 30, 50 and 100 cm. The Cf-252 neutron source was placed in the centre of the iron sphere. The proton recoil method was used for the neutron spectra measurement, using spherical hydrogen proportional counters with a diameter of 4 cm and with pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is from 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons from the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. The corresponding MCNP neutron spectra calculations based on the data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 were done. Two calculations were done with the CIELO library: the first one used data for all Fe isotopes from CIELO, and the second one (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes taken from ENDF/B-VII.1. The energy structure used for calculations and measurements was 40 gpd (groups per decade) and 200 gpd. The 200 gpd structure corresponds to a lethargy step of about 1%. This relatively fine energy structure makes it possible to analyse the Fe resonance neutron energy structure. The evaluated cross section data of Fe were validated by comparisons between the calculated and experimental spectra.
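
    The quoted group widths follow directly from the groups-per-decade definition; a quick check (one decade spans ln 10 lethargy units) gives roughly a 1.15% lethargy step for the 200 gpd structure:

```python
import numpy as np

# Width of one energy group, in lethargy, for an N groups-per-decade structure:
# one decade spans ln(10) ~ 2.303 lethargy units.
for gpd in (40, 200):
    du = np.log(10.0) / gpd
    ratio = 10.0 ** (1.0 / gpd)        # ratio of adjacent group boundaries
    print(f"{gpd} gpd: lethargy step {du:.4f} (~{100*du:.1f}%), boundary ratio {ratio:.4f}")
# 200 gpd gives a lethargy step of about 0.0115, i.e. roughly 1%
```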

  14. Benchmarking selected computational gene network growing tools in context of virus-host interactions.

    Science.gov (United States)

    Taye, Biruhalem; Vaz, Candida; Tanavde, Vivek; Kuznetsov, Vladimir A; Eisenhaber, Frank; Sugrue, Richard J; Maurer-Stroh, Sebastian

    2017-07-19

    Several available online tools provide network-growing functions, where an algorithm utilizing different data sources suggests additional genes/proteins that should connect an input gene set into functionally meaningful networks. Using the well-studied system of influenza-host interactions, we compare the network-growing function of two free tools, GeneMANIA and STRING, and the commercial IPA for their performance in recovering known influenza A virus host factors previously identified from siRNA screens. The results showed that, given small (~30 genes) or medium (~150 genes) input sets, all three network-growing tools detect significantly more known host factors than random human genes, with STRING overall performing strongest. Extending the networks with all three tools significantly improved the detection of GO biological processes of known host factors compared to not growing the networks. Interestingly, the rate of identification of true host factors using computational network growing is equal to or better than that of performing another experimental siRNA screening study, which could also hold for other biological pathways/processes.
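
    A natural way to quantify "significantly more known host factors than random genes" is a hypergeometric enrichment test; the sketch below shows such a test with invented counts (the gene totals and overlaps are placeholders, not the study's numbers), and the paper itself may use a different statistic.

```python
from scipy.stats import hypergeom

# Toy enrichment test: does a grown network recover more known host factors
# than expected by chance?  All numbers below are illustrative.
genome_size   = 20000      # candidate human genes
known_factors = 300        # known influenza host factors among them
network_size  = 150        # genes returned by the network-growing tool
recovered     = 12         # known host factors found in that network

# P(X >= recovered) for a hypergeometric draw of `network_size` genes
p_value = hypergeom.sf(recovered - 1, genome_size, known_factors, network_size)
print(f"enrichment p-value = {p_value:.3g}")
```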

  15. A computer-oriented system for assembling and displaying land management information

    Science.gov (United States)

    Elliot L. Amidon

    1964-01-01

    Maps contain information basic to land management planning. By transforming conventional map symbols into numbers which are punched into cards, the land manager can have a computer assemble and display information required for a specific job. He can let a computer select information from several maps, combine it with such nonmap data as treatment cost or benefit per...

  16. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed in Indonesian holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: the process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  17. Transuranic Hybrid Materials: Crystallographic and Computational Metrics of Supramolecular Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Surbella, Robert G. [Department; Ducati, Lucas C. [Department; Pellegrini, Kristi L. [Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99354, United States; McNamara, Bruce K. [Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99354, United States; Autschbach, Jochen [Department; Schwantes, Jon M. [Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, Washington 99354, United States; Cahill, Christopher L. [Department

    2017-07-26

    A family of twelve supramolecular compounds containing [AnO2Cl4]2- (An = U, Np, Pu), assembled via hydrogen and halogen bonds donated by substituted 4-X-pyridinium cations (X = H, Cl, Br, I), is reported. These materials were prepared from a room-temperature synthesis wherein crystallization of unhydrolyzed and valence-pure [An(VI)O2Cl4]2- (An = U, Np, Pu) tectons is the norm. We present a hierarchy of assembly criteria based on crystallographic observations, and subsequently quantify the strengths of the non-covalent interactions using Kohn-Sham density functional calculations. We provide, for the first time, a detailed description of the electrostatic potentials (ESPs) of the actinyl tetrahalide dianions and reconcile crystallographically observed structural motifs with non-covalent interaction (NCI) acceptor-donor pairings. Our findings indicate that the average electrostatic potential across the halogen ligands (the acceptors) changes by only ~2 kJ mol-1 across the AnO22+ series, indicating that the magnitude of the potential is independent of the metal center. The role of the cation is therefore critical in directing structural motifs and dictating the resulting hydrogen and halogen bond strengths, the former being stronger due to the positive charge centralized on the pyridyl nitrogen N-H+. Subsequent analyses using the quantum theory of atoms in molecules (QTAIM) and natural bond orbital (NBO) approaches support this conclusion and highlight the structure-directing role of the cations. Whereas one can infer that the Coulombic attraction is the driver for assembly, the contribution of the non-covalent interactions is to direct the molecular-level arrangement (or disposition) of the tectons.

  18. Logic Gates and Computation from Assembled Nanowire Building Blocks

    National Research Council Canada - National Science Library

    Yu Huang; Xiangfeng Duan; Yi Cui; Lincoln J. Lauhon; Kyoung-Ha Kim; Charles M. Lieber

    2001-01-01

    ... the conducting channel and gate electrode. Nanowire junction arrays have been configured as key OR, AND, and NOR logic-gate structures with substantial gain and have been used to implement basic computation.

  19. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.

    Science.gov (United States)

    Li, Meng; Liu, Jun; Tsien, Joe Z

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly, a group of coherently or sequentially activated neurons, to represent a percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed randomness within the mature but naïve cell assembly, the Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as the functional connectivity motif (FCM). Principal cells within such an FCM are organized by a power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue for examining how genetic mutations and various drugs might impair or improve the computational logic of brain circuits.
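
    The power-of-two principle can be stated concretely: for n distinct inputs there are 2^n - 1 non-empty input combinations, each posited to map onto its own neuronal clique. A few lines of code enumerate them for a toy case:

```python
from itertools import combinations

# The power-of-two principle: for n distinct input types, the number of
# specific-to-general input combinations (non-empty subsets) is 2**n - 1,
# each hypothetically served by its own neuronal clique.
inputs = ["A", "B", "C", "D"]            # e.g. four distinct cues (toy example)
cliques = [set(c) for r in range(1, len(inputs) + 1)
           for c in combinations(inputs, r)]
print(len(cliques), "cliques for", len(inputs), "inputs")   # 15 == 2**4 - 1
```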

  20. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    Directory of Open Access Journals (Sweden)

    Meng eLi

    2016-04-01

    Full Text Available Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly – a group of coherently or sequentially activated neurons – to represent a percept, memory, or concept. Despite the rekindled interest in this age-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed local randomness within the mature but naïve cell assembly, the recent Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as the functional connectivity motif (FCM). Principal cells within such an FCM are organized by a power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue for examining how genetic mutations and various drugs might impair or enhance the computational logic of brain circuits.

  1. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  2. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional ... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  3. Modeling biological problems in computer science: a case study in genome assembly.

    Science.gov (United States)

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
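
    For readers who want a concrete sense of what "a solution in the computer science realm" can look like, one classic formulation (not necessarily the one developed in the tutorial) represents reads by their k-mers and builds a de Bruijn graph whose Eulerian-style traversals correspond to candidate assemblies:

```python
from collections import defaultdict

# One standard CS formulation of assembly: a de Bruijn graph built from
# k-mers of the reads; an Eulerian-path style traversal that uses every
# edge corresponds to a candidate reconstruction of the genome.
def de_bruijn(reads, k):
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # (k-1)-mer prefix -> suffix
    return graph

reads = ["ACGTC", "GTCGA", "TCGAC"]             # toy error-free reads
for node, succs in de_bruijn(reads, 3).items():
    print(node, "->", succs)
```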

  4. Benchmarking a computational design method for the incorporation of metal ion-binding sites at symmetric protein interfaces.

    Science.gov (United States)

    Hansen, William A; Khare, Sagar D

    2017-08-01

    The design of novel metal-ion binding sites along symmetric axes in protein oligomers could provide new avenues for metalloenzyme design, construction of protein-based nanomaterials and novel ion transport systems. Here, we describe a computational design method, symmetric protein recursive ion-cofactor sampling (SyPRIS), for locating constellations of backbone positions within oligomeric protein structures that are capable of supporting desired symmetrically coordinated metal ion(s) chelated by sidechains (chelant model). Using SyPRIS on a curated benchmark set of protein structures with symmetric metal binding sites, we found high recovery of native metal coordinating rotamers: in 65 of the 67 (97.0%) cases, native rotamers featured in the best scoring model while in the remaining cases native rotamers were found within the top three scoring models. In a second test, chelant models were crossmatched against protein structures with identical cyclic symmetry. In addition to recovering all native placements, 10.4% (8939/86013) of the non-native placements, had acceptable geometric compatibility scores. Discrimination between native and non-native metal site placements was further enhanced upon constrained energy minimization using the Rosetta energy function. Upon sequence design of the surrounding first-shell residues, we found further stabilization of native placements and a small but significant (1.7%) number of non-native placement-based sites with favorable Rosetta energies, indicating their designability in existing protein interfaces. The generality of the SyPRIS approach allows design of novel symmetric metal sites including with non-natural amino acid sidechains, and should enable the predictive incorporation of a variety of metal-containing cofactors at symmetric protein interfaces. © 2017 The Protein Society.

  5. DNA Self-Assembly and Computation Studied with a Coarse-grained Dynamic Bonded Model

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Fellermann, Harold; Rasmussen, Steen

    2012-01-01

    We utilize a coarse-grained directional dynamic bonding DNA model [C. Svaneborg, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005)] to study DNA self-assembly and DNA computation. In our DNA model, a single nucleotide is represented by a single interaction site, and complementary sites c...... tetrahedra at several temperatures, a DNA icosahedron, and also strand displacement operations used in DNA computation....

  6. Computer Aided Design of the Link-Fork Head-Piston Assembly of the Kaplan Turbine with Solidworks

    Directory of Open Access Journals (Sweden)

    Camelia Jianu

    2010-10-01

    Full Text Available The paper presents the steps for the 3D computer-aided design (CAD) of the link-fork head-piston assembly of the Kaplan turbine, made in SolidWorks. The present paper is a tutorial for the 3D geometry of a Kaplan turbine assembly, dedicated to assembly design, drawing geometry, and drawing annotation.

  7. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to covariance (ANCOVA) statistical analysis. Results indicate that the SE-plus-diagram…

  8. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
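
    For readers unfamiliar with DEA, the input-oriented CCR efficiency of a unit can be computed as a small linear program; the sketch below uses scipy and entirely invented data for four hypothetical network operators, and regulators' production models are of course far richer.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR efficiency (envelopment form) for unit `o`:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
#                    sum_j lam_j * y_j >= y_o,   lam >= 0
# Illustrative data: 4 operators, 2 inputs (opex, grid length), 1 output (energy delivered).
X = np.array([[100.0, 80.0], [120.0, 60.0], [90.0, 90.0], [110.0, 70.0]])   # inputs
Y = np.array([[500.0], [450.0], [520.0], [480.0]])                          # outputs

def ccr_efficiency(o):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_in  = np.c_[-X[o].reshape(m, 1), X.T]        # sum lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]          # -sum lam_j y_rj <= -y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(len(X))])
```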

  9. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  10. Financial benchmarking

    OpenAIRE

    Palanevich, Yana

    2014-01-01

    This bachelor thesis is focused on the financial benchmarking of KBJ VELKOOBCHODY s.r.o. The aim of this study is to evaluate the financial situation of the company and to compare the results with others in the same field, with the best companies in the branch, and with direct competitors. The purpose is to gain an overview of the financial health of the company and also to reveal its strengths and weaknesses through the INFA benchmarking diagnostic system of financial indicators. The theoretical pa...

  11. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  12. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  13. Self Assembled Semiconductor Quantum Dots for Spin Based All Optical and Electronic Quantum Computing

    Science.gov (United States)

    2008-04-17

    Bandyopadhyay, “Self Assembling Quantum Dots and Wires”, Encyclopedia of Nanoscience and Nanotechnology, Eds. James A. Schwartz, Cristian Contescu and Karol ... Mexico, October 29 - November 3, 2006. 10. M. Cahay, K. Garre, D. J. Lockwood, J. Frazer, B. Kanchibotla, S. Pramanik, S. Bandyopadhyay, V. Semet ... George Mason and Virginia Commonwealth University), Newport News, VA, June 12, 2006 (plenary). 10. S. Bandyopadhyay, Computing, Detecting, Storing

  14. 2-d and 1-d Nanomaterials Construction through Peptide Computational Design and Solution Assembly

    Science.gov (United States)

    Pochan, Darrin

    Self-assembly of molecules is an attractive materials construction strategy due to its simplicity in application. By considering peptidic molecules in the bottom-up materials self-assembly design process, one can take advantage of inherently biomolecular attributes; intramolecular folding events, secondary structure, and electrostatic/H-bonding/hydrophobic interactions to define hierarchical material structure and consequent properties. Importantly, while biomimicry has been a successful strategy for the design of new peptide molecules for intermolecular assembly, computational tools have been developed to de novo design peptide molecules required for construction of pre-determined, desired nanostructures and materials. A new system comprised of coiled coil bundle motifs theoretically designed to assemble into designed, one and two-dimensional nanostructures will be introduced. The strategy provides the opportunity for arbitrary nanostructure formation, i.e. structures not observed in nature, with peptide molecules. Importantly, the desired nanostructure was chosen first while the peptides needed for coiled coil formation and subsequent nanomaterial formation were determined computationally. Different interbundle, two-dimensional nanostructures are stabilized by differences in amino acid composition exposed on the exterior of the coiled coil bundles. Computation was able to determine molecules required for different interbundle symmetries within two-dimensional sheets stabilized by subtle differences in amino acid composition of the inherent peptides. Finally, polymers were also created through covalent interactions between bundles that allowed formation of architectures spanning flexible network forming chains to ultra-stiff polymers, all with the same building block peptides. The success of the computational design strategy is manifested in the nanomaterial results as characterized by electron microscopy, scattering methods, and biophysical techniques. Support

  15. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  16. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  17. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  18. Computer simulations predict that chromosome movements and rotations accelerate mitotic spindle assembly without compromising accuracy.

    Science.gov (United States)

    Paul, Raja; Wollman, Roy; Silkworth, William T; Nardi, Isaac K; Cimini, Daniela; Mogilner, Alex

    2009-09-15

    The mitotic spindle self-assembles in prometaphase by a combination of a centrosomal pathway, in which dynamically unstable microtubules search in space until chromosomes are captured, and a chromosomal pathway, in which microtubules grow from chromosomes and focus to the spindle poles. A quantitative mechanistic understanding of how spindle assembly can be both fast and accurate is lacking. Specifically, it is unclear how, if at all, chromosome movements and combining the centrosomal and chromosomal pathways affect the assembly speed and accuracy. We used computer simulations and high-resolution microscopy to test plausible pathways of spindle assembly in realistic geometry. Our results suggest that an optimal combination of centrosomal and chromosomal pathways, spatially biased microtubule growth, and chromosome movements and rotations is needed to complete prometaphase in 10-20 min while keeping erroneous merotelic attachments down to a few percent. The simulations also provide kinetic constraints for alternative error correction mechanisms, shed light on the dual role of chromosome arm volume, and compare well with experimental data for bipolar and multipolar HT-29 colorectal cancer cells.

  19. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-09-01

    Full Text Available Abstract Background: The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly “evidence-based”. Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. Main message: In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of “evidence-based” statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. Conclusion: We suggest that benchmark studies—a method of assessment of statistical methods using real-world datasets—might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  20. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    Science.gov (United States)

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies-a method of assessment of statistical methods using real-world datasets-might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  1. Quality characteristic association analysis of computer numerical control machine tool based on meta-action assembly unit

    Directory of Open Access Journals (Sweden)

    Yan Ran

    2016-01-01

    Full Text Available As is well known, assembly quality plays a very important role in final product quality. Since a computer numerical control machine tool is a large system with a complicated structure and function, and there are complex association relationships among quality characteristics in the assembly process, it is difficult and inaccurate to analyze the quality characteristic associations of the whole computer numerical control machine tool at one time. In this article, the meta-action assembly unit is proposed as the basic analysis unit, whose quality characteristic associations are studied to guarantee the assembly quality of the whole computer numerical control machine tool. First, based on the "Function-Motion-Action" decomposition structure, the definitions of meta-action and the meta-action assembly unit are introduced. Second, manufacturing process association and meta-action assembly unit quality characteristic association are discussed. Third, after introducing the definitions of information entropy and relative entropy, the concrete steps of meta-action assembly unit quality characteristic association analysis based on relative entropy are described in detail. Finally, the lifting piston translation assembly unit of an automatic pallet changer is taken as an example: the association degrees between internal leakage and the influencing factors, namely part quality characteristics and the mating relationships among them, are calculated to identify the most influential factors, showing the correctness and feasibility of this method.
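
    Since the method hinges on relative entropy, a compact reminder plus a toy computation may help; the binned distributions below are invented placeholders, and the article's actual association-degree procedure involves additional steps.

```python
import numpy as np

# Relative entropy (Kullback-Leibler divergence) between two discrete
# distributions P and Q: D(P || Q) = sum_i p_i * log(p_i / q_i).
# Toy use: compare binned failure (leakage) data against the binned
# distribution of a part quality characteristic; the numbers are illustrative.
def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p_leakage_events   = [0.10, 0.30, 0.40, 0.20]   # binned failure data (hypothetical)
q_piston_roundness = [0.15, 0.25, 0.35, 0.25]   # binned part characteristic (hypothetical)
print(f"D(P||Q) = {kl_divergence(p_leakage_events, q_piston_roundness):.4f} nats")
```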

  2. Applications of the theory of computation to nanoscale self-assembly

    Science.gov (United States)

    Doty, David Samuel

    This thesis applies the theory of computing to the theory of nanoscale self-assembly, to explore the ability -- and under certain conditions, the inability -- of molecules to automatically arrange themselves in computationally sophisticated ways. In particular, we investigate a model of molecular self-assembly known as the abstract Tile Assembly Model (aTAM), in which different types of square "tiles" represent molecules that, through the interaction of highly specific binding sites on their four sides, can automatically assemble into larger and more elaborate structures. We investigate the possibility of using the inherent randomness of sampling different tiles in a well-mixed solution to drive selection of random numbers from a finite set, and explore the tradeoff between the uniformity of the imposed distribution and the size of structures necessary to process the sampled tiles. We then show that the inherent randomness of the competition of different types of molecules for binding can be exploited in a different way. By adjusting the relative concentrations of tiles, the structure assembled by a tile set is shown to be programmable to a high precision, in the following sense. There is a single tile set that can be made to assemble a square of arbitrary width with high probability, by setting the concentrations of the tiles appropriately, so that all the information about the square's width is "learned" from the concentrations by sampling the tiles. Based on these constructions, and those of other researchers, which have been completely implemented in a simulated environment, we design a high-level domain-specific "visual language" for implementing complex constructions in the aTAM. This language frees the implementer of an aTAM construction from many low-level and tedious details of programming and, together with a visual software tool that directly implements the basic operations of the language, frees the implementer from almost any programming at all
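
    A toy simulation conveys the flavour of the concentration programming described here (it is not the construction from the thesis): a row keeps attaching "grow" tiles until a "stop" tile is sampled, so the expected width is set purely by the concentration ratio.

```python
import random

# Toy illustration of concentration-programmed assembly: tile types are sampled
# in proportion to their solution concentrations, and growth ends when a "stop"
# tile attaches, so the expected row width encodes the concentration ratio.
def grown_width(conc_grow, conc_stop, rng):
    p_stop = conc_stop / (conc_grow + conc_stop)
    width = 1
    while rng.random() >= p_stop:      # keep sampling growth tiles
        width += 1
    return width                       # expectation ~ 1 / p_stop

rng = random.Random(0)
widths = [grown_width(conc_grow=99.0, conc_stop=1.0, rng=rng) for _ in range(10_000)]
print(sum(widths) / len(widths), "expected ~", (99.0 + 1.0) / 1.0)
```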

  3. Computer-aided design of nano-filter construction using DNA self-assembly

    Directory of Open Access Journals (Sweden)

    Mohammadzadegan Reza

    2006-01-01

    Full Text Available AbstractComputer-aided design plays a fundamental role in both top-down and bottom-up nano-system fabrication. This paper presents a bottom-up nano-filter patterning process based on DNA self-assembly. In this study we designed a new method to construct fully designed nano-filters with the pores between 5 nm and 9 nm in diameter. Our calculations illustrated that by constructing such a nano-filter we would be able to separate many molecules.

  4. Benchmarking hypercube hardware and software

    Science.gov (United States)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
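
    A modern analogue of the message-transmission half of such a benchmark suite is a simple ping-pong test; the sketch below uses mpi4py with assumed message sizes and repetition counts, and is not the original hypercube suite.

```python
from mpi4py import MPI
import numpy as np

# Ping-pong latency test between ranks 0 and 1
# (run with: mpiexec -n 2 python pingpong.py)
comm, rank = MPI.COMM_WORLD, MPI.COMM_WORLD.Get_rank()
n_reps = 1000

for size in (8, 1024, 131072):                 # message sizes in bytes (assumed)
    buf = np.zeros(size, dtype='b')
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(n_reps):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    if rank == 0:
        latency = (MPI.Wtime() - t0) / (2 * n_reps)   # one-way time per message
        print(f"{size:7d} B  one-way time ~ {latency*1e6:8.2f} us")
```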

  5. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes applied to a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis is performed on neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factor (ADF) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.
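
    For readers unfamiliar with the acronym, the assembly discontinuity factor used in two-step methods is commonly defined, for energy group g and assembly surface s, as the ratio of the heterogeneous (lattice transport) surface-averaged flux to the homogeneous (nodal diffusion) surface-averaged flux; individual codes may adopt slightly different conventions:

```latex
\mathrm{ADF}_{g,s} \;=\; \frac{\bar{\phi}_{g,s}^{\,\mathrm{het}}}{\bar{\phi}_{g,s}^{\,\mathrm{hom}}}
```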

  6. Diversity of bilateral synaptic assemblies for binaural computation in midbrain single neurons.

    Science.gov (United States)

    He, Na; Kong, Lingzhi; Lin, Tao; Wang, Shaohui; Liu, Xiuping; Qi, Jiyao; Yan, Jun

    2017-11-01

    Binaural hearing confers many beneficial functions but our understanding of its underlying neural substrates is limited. This study examines the bilateral synaptic assemblies and binaural computation (or integration) in the central nucleus of the inferior colliculus (ICc) of the auditory midbrain, a key convergent center. Using in-vivo whole-cell patch-clamp, the excitatory and inhibitory postsynaptic potentials (EPSPs/IPSPs) of single ICc neurons to contralateral, ipsilateral and bilateral stimulation were recorded. According to the contralateral and ipsilateral EPSP/IPSP, 7 types of bilateral synaptic assemblies were identified. These include EPSP-EPSP (EE), E-IPSP (EI), E-no response (EO), II, IE, IO and complex-mode (CM) neurons. The CM neurons showed frequency- and/or amplitude-dependent EPSPs/IPSPs to contralateral or ipsilateral stimulation. Bilateral stimulation induced EPSPs/IPSPs that could be larger than (facilitation), similar to (ineffectiveness) or smaller than (suppression) those induced by contralateral stimulation. Our findings have allowed our group to characterize novel neural circuitry for binaural computation in the midbrain. Copyright © 2017 Elsevier B.V. All rights reserved.
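
    The labelling scheme can be summarised in a few lines; the mapping below simply concatenates the dominant contralateral and ipsilateral response types and flags frequency- or amplitude-dependent cells as CM, which is a simplification of the electrophysiological criteria used in the study.

```python
# Toy mapping from the dominant contralateral / ipsilateral synaptic response
# ('E' = EPSP, 'I' = IPSP, 'O' = no response) to the assembly labels used above.
def assembly_type(contra, ipsi, mixed=False):
    if mixed:                      # frequency- or amplitude-dependent responses
        return "CM"
    return contra + ipsi           # e.g. ('E', 'I') -> "EI", ('I', 'O') -> "IO"

recordings = [("E", "E"), ("E", "I"), ("E", "O"),
              ("I", "I"), ("I", "E"), ("I", "O")]
print([assembly_type(c, i) for c, i in recordings])   # EE EI EO II IE IO
```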

  7. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large library of linear solvers being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided with the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which successively smaller systems are solved.
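
    The benchmark itself is driven through hypre's C interface. Purely as an illustrative stand-in, the sketch below assembles a 27-point Laplace-type operator and solves it with an algebraic multigrid hierarchy using the open-source PyAMG package, which is an assumption here and not part of the benchmark.

        import numpy as np
        import pyamg
        from pyamg.gallery import stencil_grid

        # 27-point Laplace-type stencil on a 3-D grid
        stencil = -np.ones((3, 3, 3))
        stencil[1, 1, 1] = 26.0
        A = stencil_grid(stencil, (40, 40, 40), format='csr')

        b = np.random.rand(A.shape[0])
        ml = pyamg.smoothed_aggregation_solver(A)        # build the AMG hierarchy
        residuals = []
        x = ml.solve(b, tol=1e-8, residuals=residuals)   # V-cycles until convergence

        print(ml)                                        # per-level sizes and complexities
        print("residual reduction:", residuals[-1] / residuals[0])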

  8. Experimental studies and computational benchmark on heavy liquid metal natural circulation in a full height-scale test loop for small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Yong-Hoon, E-mail: chaotics@snu.ac.kr [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Jaehyun [Korea Atomic Energy Research Institute, 111 Daedeok-daero, 989 Beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jueun; Ju, Heejae; Sohn, Sungjune; Kim, Yeji; Noh, Hyunyub; Hwang, Il Soon [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of)

    2017-05-15

    Highlights: • Experimental studies on natural circulation of lead-bismuth eutectic were conducted. • Adiabatic wall boundary conditions were established by compensating heat loss. • A computational benchmark with a system thermal-hydraulics code was performed. • Numerical simulation and experiment showed good agreement in mass flow rate. • An empirical relation for mass flow rate was formulated from the experimental data. - Abstract: In order to test the enhanced safety of small lead-cooled fast reactors, lead-bismuth eutectic (LBE) natural circulation characteristics have been studied. We present results of experiments on non-isothermal LBE natural circulation in a full-height-scale test loop, HELIOS (heavy eutectic liquid metal loop for integral test of operability and safety of PEACER), and the validation of a system thermal-hydraulics code. The experimental studies on LBE were conducted at steady state over core power conditions from 9.8 kW to 33.6 kW. Local surface heaters on the main loop were activated and finely tuned by a trial-and-error approach to establish adiabatic wall boundary conditions. The thermal-hydraulic system code MARS-LBE was validated against the well-defined benchmark data. The predictions were mostly in good agreement with the experimental data, with mass flow rate and temperature difference both within 7%. From the experimental results, an empirical relation predicting the mass flow rate under non-isothermal, adiabatic conditions in HELIOS was derived.
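
    The record does not give the form of the empirical relation. Purely as an illustration of how such a correlation can be extracted, the sketch below fits a power-law mass-flow-rate-versus-power relation to hypothetical data points with SciPy; neither the functional form nor the numbers are taken from the HELIOS experiments.

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(q_kw, a, b):
            """Candidate natural-circulation correlation: mdot = a * Q**b."""
            return a * q_kw**b

        # Hypothetical steady-state points (core power in kW, mass flow rate in kg/s)
        q = np.array([9.8, 16.4, 24.1, 33.6])
        mdot = np.array([3.1, 3.7, 4.2, 4.7])

        (a, b), _ = curve_fit(power_law, q, mdot, p0=(1.0, 0.33))
        print(f"mdot ~ {a:.3f} * Q^{b:.3f}")
        print("fit residuals:", mdot - power_law(q, a, b))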

  9. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  10. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  11. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  12. OECD/NEA Sandia Fuel Project phase I: Benchmark of the ignition testing

    Energy Technology Data Exchange (ETDEWEB)

    Adorni, Martina, E-mail: martina_adorni@hotmail.it [UNIPI (Italy); Herranz, Luis E. [CIEMAT (Spain); Hollands, Thorsten [GRS (Germany); Ahn, Kwang-II [KAERI (Korea, Republic of); Bals, Christine [GRS (Germany); D' Auria, Francesco [UNIPI (Italy); Horvath, Gabor L. [NUBIKI (Hungary); Jaeckel, Bernd S. [PSI (Switzerland); Kim, Han-Chul; Lee, Jung-Jae [KINS (Korea, Republic of); Ogino, Masao [JNES (Japan); Techy, Zsolt [NUBIKI (Hungary); Velazquez-Lozad, Alexander; Zigh, Abdelghani [USNRC (United States); Rehacek, Radomir [OECD/NEA (France)

    2016-10-15

    Highlights: • A unique PWR spent fuel pool experimental project is analytically investigated. • Predictability of fuel clad ignition in case of a complete loss of coolant in SFPs is assessed. • Computer codes reasonably estimate peak cladding temperature and time of ignition. - Abstract: The OECD/NEA Sandia Fuel Project provided unique thermal-hydraulic experimental data associated with Spent Fuel Pool (SFP) complete drain down. The study conducted at Sandia National Laboratories (SNL) was successfully completed (July 2009 to February 2013). The accident conditions of interest for the SFP were simulated in a full scale prototypic fashion (electrically heated, prototypic assemblies in a prototypic SFP rack) so that the experimental results closely represent actual fuel assembly responses. A major impetus for this work was to facilitate severe accident code validation and to reduce modeling uncertainties within the codes. Phase I focused on axial heating and burn propagation in a single PWR 17 × 17 assembly (i.e. “hot neighbors” configuration). Phase II addressed axial and radial heating and zirconium fire propagation including effects of fuel rod ballooning in a 1 × 4 assembly configuration (i.e. single, hot center assembly and four, “cooler neighbors”). This paper summarizes the comparative analysis regarding the final destructive ignition test of the phase I of the project. The objective of the benchmark is to evaluate and compare the predictive capabilities of computer codes concerning the ignition testing of PWR fuel assemblies. Nine institutions from eight different countries were involved in the benchmark calculations. The time to ignition and the maximum temperature are adequately captured by the calculations. It is believed that the benchmark constitutes an enlargement of the validation range for the codes to the conditions tested, thus enhancing the code applicability to other fuel assembly designs and configurations. The comparison of

  13. Iraq: Government Formation and Benchmarks

    Science.gov (United States)

    2007-08-10

    Excerpt from a report on Iraqi government formation and legislative benchmarks: passage of a new de-Baathification law and approval of a flag and national anthem law; the de-Baathification reform law (benchmark #2) remains stalled. The excerpt also contains fragments of a party seat-count table (Iraqi Turkomen Front, Kirkuk-based, pro-Turkey; National Independent and Elites (Jan)/Risalyun (Message, Dec), pro-Sadr) and notes on the elections and constitutional referendum in 2005: the first election (January 30, 2005) was for a 275-seat transitional National Assembly, a provincial...

  14. Benchmarking Text Understanding Systems to Human Performance: An Exploration.

    Science.gov (United States)

    Butler, Frances A.; And Others

    This study, part of a larger effort to develop a methodology for evaluating intelligent computer systems (Artificial Intelligence Systems), explores the use of benchmarking as an evaluation technique. Benchmarking means comparing the performance of intelligent computer systems with human performance on the same task. Benchmarking in evaluation has…

  15. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    47 CFR § 69.108 (Computation of Charges), Transport rate benchmark: (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company...

  16. MC6800 cross-assembler for the PDP-8/E digital computer. [M68CA

    Energy Technology Data Exchange (ETDEWEB)

    Sand, R.J.

    1978-08-01

    A cross-assembler was developed to assemble Motorola MC6800 microprocessor programs on a Digital Equipment Corporation PDP-8/E minicomputer. This cross-assembler runs in 8K of core under the OS/8 operating system. The advantages of using the cross-assembler are the large user symbol table and the convenience and speed of program development. User's instructions for the cross-assembler are given. The design of the cross-assembler and examples of its use are described. 12 figures.

  17. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    Science.gov (United States)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  18. Computational studies on self-assembled paclitaxel structures: templates for hierarchical block copolymer assemblies and sustained drug release.

    Science.gov (United States)

    Guo, Xin D; Tan, Jeremy P K; Kim, Sung H; Zhang, Li J; Zhang, Ying; Hedrick, James L; Yang, Yi Y; Qian, Yu

    2009-11-01

    Paclitaxel-loaded poly(ethylene oxide)-b-poly(lactide) (PEO-b-PLA) systems have been observed to assemble into fiber structures with remarkably different properties depending on the chirality and molecular weight of the PLA segments. In this study, dissipative particle dynamics (DPD) simulations were carried out to elaborate the microstructures and properties of pure paclitaxel and paclitaxel-loaded PEO-b-PLA systems. Paclitaxel molecules formed ribbon- or fiber-like structures in water. With the addition of PEO-b-PDLA, PEO-b-PLLA and their stereocomplex, paclitaxel acted as a template and polymer molecules assembled around the paclitaxel structure to form core/shell structured fibers having a PEO shell. For PEO19-b-PDLA27 and PEO19-b-PLLA27 systems, PLA segments and paclitaxel molecules were distributed homogeneously in the core of the fibers based on hydrophobic interactions. In the stereocomplex formulation, paclitaxel molecules were more concentrated in the inner PLA stereocomplex core, which led to slower release of paclitaxel. By increasing the length of the PLA segments (e.g. 8, 16, 22 and 27), the crystalline structure of paclitaxel was gradually weakened and destroyed, which was further confirmed by X-ray diffraction studies. All the simulation results agreed well with experimental data, suggesting that DPD simulations may provide a powerful tool for designing drug delivery systems.
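
    For readers unfamiliar with the method, the sketch below implements the standard soft, pairwise conservative force used in dissipative particle dynamics; the interaction parameter is a generic placeholder, not one of the values used for the paclitaxel/PEO-b-PLA systems.

        import numpy as np

        def dpd_conservative_force(r_i, r_j, a_ij, r_c=1.0):
            """Soft repulsive DPD conservative force on bead i due to bead j:
            F_C = a_ij * (1 - r/r_c) * r_hat  for r < r_c, zero otherwise."""
            dr = r_i - r_j
            r = np.linalg.norm(dr)
            if r >= r_c or r == 0.0:
                return np.zeros(3)
            return a_ij * (1.0 - r / r_c) * dr / r

        # Placeholder repulsion parameter (DPD reduced units); beads 0.5 r_c apart along x
        f = dpd_conservative_force(np.zeros(3), np.array([0.5, 0.0, 0.0]), a_ij=25.0)
        print(f)   # force on bead i points away from bead j (repulsion)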

  19. BENCHOP - The BENCHmarking project in Option Pricing

    NARCIS (Netherlands)

    L. von Sydow; L.J. Höök; E. Larsson; E. Lindström; S. Molanovic; J. Persson; V. Shcherbakov; Y. Shpolyansky; V. Shcherbakov (Vadim); S. Sirén; J. Toivanen; J. Waldén; M. Wiktorsson; J. Levesley; J. Li; C.W. Oosterlee (Cornelis); M.J. Ruijter (Marjon); A. Toropov; Y. Zhao; J. Li (Jiayuan)

    2015-01-01

    The aim of the BENCHOP project is to provide the finance community with a common suite of benchmark problems for option pricing. We provide a detailed description of the six benchmark problems together with methods to compute reference solutions. We have implemented fifteen different

  20. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased

  1. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria

    2009-01-01

    Despite the rapidly increasing number of sequenced and re-sequenced genomes, many issues regarding the computational assembly of large-scale sequencing data remain unresolved. Computational assembly is crucial in large genome projects as well as for the evolving high-throughput technologies, and plays an important role in processing the information generated by these methods. Here, we provide a comprehensive overview of the currently publicly available sequence assembly programs. We describe the basic principles of computational assembly along with the main concerns, such as repetitive sequences in genomic DNA, highly expressed genes and alternative transcripts in EST sequences. We summarize existing comparisons of different assemblers and provide detailed descriptions of, and download directions for, assembly programs at: http://genome.ku.dk/resources/assembly/methods.html

  2. Computational insights and the observation of SiC nanograin assembly: towards 2D silicon carbide.

    Science.gov (United States)

    Susi, Toma; Skákalová, Viera; Mittelberger, Andreas; Kotrusz, Peter; Hulman, Martin; Pennycook, Timothy J; Mangler, Clemens; Kotakoski, Jani; Meyer, Jannik C

    2017-06-30

    While an increasing number of two-dimensional (2D) materials, including graphene and silicene, have already been realized, others have only been predicted. An interesting example is the two-dimensional form of silicon carbide (2D-SiC). Here, we present an observation of atomically thin and hexagonally bonded nanosized grains of SiC assembling temporarily in graphene oxide pores during an atomic-resolution scanning transmission electron microscopy experiment. Even though these small grains do not fully represent the bulk crystal, simulations indicate that their electronic structure already approaches that of 2D-SiC. This is predicted to be flat, but some doubts have remained regarding the preference of Si for sp3 hybridization. Exploring a number of corrugated morphologies, we find completely flat 2D-SiC to have the lowest energy. We further compute its phonon dispersion, with a Raman-active transverse optical mode, and estimate the core level binding energies. Finally, we study the chemical reactivity of 2D-SiC, suggesting that, like silicene, it is unstable against molecular adsorption or interlayer linking. Nonetheless, it can form stable van der Waals-bonded bilayers with either graphene or hexagonal boron nitride, promising to further enrich the family of two-dimensional materials once bulk synthesis is achieved.

  3. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  4. Benchmarking Distance Education.

    Science.gov (United States)

    Novak, Richard J.

    2002-01-01

    Identifies and discusses the myriad perspectives on measures of quality and benchmarking in distance education. Reviews the standards or benchmarks of quality that have been promulgated by various stakeholder groups. (EV)

  5. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    Science.gov (United States)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  6. LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.

    Science.gov (United States)

    El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher

    2016-11-01

    The deluge of current sequence data has outpaced Moore's Law, more than doubling every 2 years since next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data at high speed and fixed cost, but will lack the computational resources to store, process and analyze it. With error-prone, high-throughput NGS reads and genomic repeats, the assembly graph contains a massive number of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves assembly accuracy and contiguity comparable to other competing tools. Our method reduces memory usage by [Formula: see text] compared to resource-efficient assemblers on benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. Availability: https://github.com/SaraEl-Metwally/LightAssembler. Contact: sarah_almetwally4@mans.edu.eg. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
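
    LightAssembler's actual data structures are cache-oblivious Bloom filters over spaced k-mers; the sketch below is only a toy illustration, using a naive Bloom filter and a crude acceptance rule, of the general idea of one filter holding a sample of k-mers and a second holding k-mers accepted as likely correct. It is not the tool's algorithm or statistical test.

        import hashlib

        class Bloom:
            """Naive Bloom filter (illustrative only, not cache-oblivious)."""
            def __init__(self, size=1_000_003, hashes=3):
                self.size, self.hashes, self.bits = size, hashes, bytearray(size)
            def _positions(self, item):
                for i in range(self.hashes):
                    h = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
                    yield int(h, 16) % self.size
            def add(self, item):
                for p in self._positions(item):
                    self.bits[p] = 1
            def __contains__(self, item):
                return all(self.bits[p] for p in self._positions(item))

        def kmers(read, k):
            return (read[i:i + k] for i in range(len(read) - k + 1))

        sampled, trusted = Bloom(), Bloom()
        reads = ["ACGTACGTGGA", "ACGTACGTGGC", "ACGTACGTGGA"]   # toy reads
        k, g = 5, 2                                              # k-mer size, sampling gap

        # Pass 1: store a uniform sample of k-mers (every g-th starting position).
        for read in reads:
            for i, km in enumerate(kmers(read, k)):
                if i % g == 0:
                    sampled.add(km)

        # Pass 2: accept a k-mer as "likely correct" if it was seen in the sample
        # (a crude stand-in for a real statistical test).
        for read in reads:
            for km in kmers(read, k):
                if km in sampled:
                    trusted.add(km)

        print("ACGTA likely correct?", "ACGTA" in trusted)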

  7. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    Directory of Open Access Journals (Sweden)

    Kumar Parijat Tripathi

    Full Text Available RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large amount of data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and information related to protein-protein interactions. It clusters the transcripts based on functional annotations and generates a tabular report of the functional and Gene Ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (prediction of transcriptomic non-coding RNA by ab initio methods) helps to identify non-coding RNAs (ncRNAs) and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize de novo assembled reads obtained from NGS experiments on non-referenced organisms, while it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and microarray experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
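
    As a minimal, hypothetical example of the kind of quantitative model the review discusses, the sketch below integrates mass-action kinetics for a reversible dimerization step (2 M = D) with SciPy; the rate constants and concentrations are arbitrary placeholders, not values from any system covered by the review.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_on, k_off = 1.0e6, 0.1   # association (1/(M*s)) and dissociation (1/s) rates, arbitrary

        def rates(t, y):
            monomer, dimer = y
            assoc = k_on * monomer**2      # 2 M -> D
            dissoc = k_off * dimer         # D -> 2 M
            return [-2 * assoc + 2 * dissoc, assoc - dissoc]

        sol = solve_ivp(rates, (0.0, 60.0), y0=[1.0e-6, 0.0], rtol=1e-8, atol=1e-12)
        print("final monomer (M):", sol.y[0, -1])
        print("final dimer   (M):", sol.y[1, -1])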

  10. Quantitative computational models of molecular self-assembly in systems biology

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-06-01

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  12. Cluster computing as an assembly process: coordination with S-Net

    NARCIS (Netherlands)

    Grelck, C.; Julku, J.; Penczek, F.; Shafarenko, A.; Parashar, M.; Buyya, R.

    2010-01-01

    This poster will present a coordination language for distributed computing and will discuss its application to cluster computing. It will introduce a programming technique of cluster computing whereby application components are completely dissociated from the communication/coordination

  13. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  14. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the transonic...

  15. A microprocessor-based single board computer for high energy physics event pattern recognition

    CERN Document Server

    Bernstein, H; Imossi, R; Kopp, J K; Kramer, M A; Love, W A; Ozaki, S; Platner, E D

    1981-01-01

    A single-board MC 68000 based computer has been assembled and benchmarked against the CDC 7600 running portions of the pattern recognition code used at the MPS. This computer has a floating-point coprocessor to achieve throughputs equivalent to several percent that of the 7600. A major part of this work was the construction of a FORTR

  16. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  17. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  18. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  19. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  20. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the Danish National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  1. Computational characterization of DNA/peptide/nanotube self assembly for bioenergy applications

    Science.gov (United States)

    Ortiz, Vanessa; Araki, Ruriko; Collier, Galen

    2012-02-01

    Multi-enzyme pathways have become a subject of increasing interest for their role in the engineering of biomimetic systems for applications including biosensors, bioelectronics, and bioenergy. The efficiencies found in natural metabolic pathways partially arise from biomolecular self-assembly of the component enzymes in an effort to avoid transport limitations. The ultimate goal of this effort is to design and build biofuel cells with efficiencies similar to those of native systems by introducing biomimetic structures that immobilize multiple enzymes in specific orientations on a bioelectrode. To achieve site-specific immobilization, the specificity of DNA-binding domains is exploited with an approach that allows any redox enzyme to be modified to site-specifically bind to double stranded (ds) DNA while retaining activity. Because of its many desirable properties, the bioelectrode of choice is single-wall carbon nanotubes (SWNTs), but little is known about dsDNA/SWNT assembly and how this might affect the activity of the DNA-binding domains. Here we evaluate the feasibility of the proposed assembly by performing atomistic molecular dynamics simulations to look at the stability and conformations adopted by dsDNA when bound to a SWNT. We also evaluate the effects of the presence of a SWNT on the stability of the complex formed by a DNA-binding domain and DNA.

  2. A comparative study of methods to compute the free energy of an ordered assembly by molecular simulation.

    Science.gov (United States)

    Moustafa, Sabry G; Schultz, Andrew J; Kofke, David A

    2013-08-28

    We present a comparative study of methods to compute the absolute free energy of a crystalline assembly of hard particles by molecular simulation. We consider all combinations of three choices defining the methodology: (1) the reference system: Einstein crystal (EC), interacting harmonic (IH), or r(-12) soft spheres (SS); (2) the integration path: Frenkel-Ladd (FL) or penetrable ramp (PR); and (3) the free-energy method: overlap-sampling free-energy perturbation (OS) or thermodynamic integration (TI). We apply the methods to FCC hard spheres at the melting state. The study shows that, in the best cases, OS and TI are roughly equivalent in efficiency, with a slight advantage to TI. We also examine the multistate Bennett acceptance ratio method, and find that it offers no advantage for this particular application. The PR path shows advantage in general over FL, providing results of the same precision with 2-9 times less computation, depending on the choice of a common reference. The best combination for the FL path is TI+EC, which is how the FL method is usually implemented. For the PR path, the SS system (with either TI or OS) proves to be most effective; it gives equivalent precision to TI+FL+EC with about 6 times less computation (or 12 times less, if discounting the computational effort required to establish the SS reference free energy). Both the SS and IH references show great advantage in capturing finite-size effects, providing a variation in free-energy difference with system size that is about 10 times less than EC. This result further confirms previous work for soft-particle crystals, and suggests that free-energy calculations for a structured assembly be performed using a hybrid method, in which the finite-system free-energy difference is added to the extrapolated (1/N→0) absolute free energy of the reference system, to obtain a result that is nearly independent of system size.
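
    As a generic illustration of the thermodynamic integration (TI) route discussed above, the sketch below integrates the ensemble average of dU/dlambda over a switching parameter with the trapezoidal rule; the tabulated averages are made-up placeholders, not results from the hard-sphere study.

        import numpy as np

        # Hypothetical ensemble averages of dU/dlambda at a few quadrature points
        lam = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
        dU_dlam = np.array([12.4, 9.1, 6.3, 4.0, 2.2])   # in units of kT, placeholders

        # Free-energy difference between the reference (lambda=0) and target (lambda=1):
        #   Delta F = integral from 0 to 1 of <dU/dlambda> dlambda
        delta_F = np.trapz(dU_dlam, lam)
        print(f"Delta F / kT = {delta_F:.3f}")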

  3. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lombardo, N.J.; Marseille, T.J.; White, M.D.; Lowery, P.S.

    1990-06-01

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300 °F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium-metal, Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion ("bake-out") release mechanisms. Release of the volatile species of iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled.
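
    To make the parabolic-with-Arrhenius oxidation model concrete, the sketch below evaluates a generic rate law of that form; the coefficients are placeholders and are not the fitted correlations used in TRUMP-BD.

        import numpy as np

        def parabolic_oxide_growth(temp_k, time_s, A=1.0e-2, Q=1.5e5, R=8.314):
            """Oxide weight gain per unit area from a parabolic rate law with
            Arrhenius kinetics: w**2 = K(T) * t, with K(T) = A * exp(-Q / (R * T)).
            A and Q are placeholder coefficients, not TRUMP-BD's correlations."""
            K = A * np.exp(-Q / (R * temp_k))
            return np.sqrt(K * time_s)

        for T in (1200.0, 1400.0, 1600.0):   # cladding temperatures in kelvin
            print(T, parabolic_oxide_growth(T, time_s=600.0))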

  4. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
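
    The project's actual ontology and query set live in its benchmark corpora; the snippet below is only a schematic example, with a made-up namespace and predicates, of how a performance count might be computed over RDF annotations with a SPARQL query (here issued through the Python rdflib package).

        from rdflib import Graph

        g = Graph()
        g.parse("benchmark_annotations.ttl", format="turtle")   # hypothetical corpus file

        # Count predicted mutation mentions that match a gold-standard annotation
        # (schema and predicate names are illustrative, not the project's ontology).
        query = """
        PREFIX ex: <http://example.org/mutation-benchmark#>
        SELECT (COUNT(?mention) AS ?truePositives) WHERE {
            ?mention a ex:MutationMention ;
                     ex:predictedBy ex:SystemUnderTest ;
                     ex:matchesGold true .
        }
        """
        for row in g.query(query):
            print("true positives:", row.truePositives)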

  5. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  6. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    Energy Technology Data Exchange (ETDEWEB)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  7. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  8. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  9. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the associated dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  10. Benchmarking TENDL-2012

    Science.gov (United States)

    van der Marck, S. C.; Koning, A. J.; Rochman, D. A.

    2014-04-01

    The new release of the TENDL nuclear data library, TENDL-2012, was tested by performing many benchmark calculations. Close to 2000 criticality safety benchmark cases were used, as well as many shielding benchmark cases. All the runs could be compared with similar runs based on the nuclear data libraries ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1. Many of the criticality safety results obtained with TENDL-2012 are close to those for the other libraries. In particular, the results for the thermal-spectrum cases with LEU fuel are good. Nevertheless, there is a fair number of cases for which the TENDL-2012 results are not as good as those of the other libraries; in particular, a number of fast-spectrum cases with reflectors are not well described. The results for the shielding benchmarks are mostly similar to those for the other libraries. Some isolated cases with differences are identified.

  11. Benchmarking Academic Anatomic Pathologists

    National Research Council Canada - National Science Library

    Barbara S. Ducatman MD; Tristram Parslow MD, PhD

    2016-01-01

    .... We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data...

  12. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  13. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  14. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  15. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  16. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  17. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  18. Benchmarking Software Assurance Implementation

    Science.gov (United States)

    2011-05-18

    Presentation on benchmarking software assurance implementation using process-focused assessments, covering management systems (ISO 9001, ISO 27001, ISO 20000) and capability maturity models (CMMI). Presented by Michele Moss at the SSTC Conference, May 18, 2011.

  19. MFTF TOTAL benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base.

  20. Resolving Properties of Polymers and Nanoparticle Assembly through Coarse-Grained Computational Studies.

    Energy Technology Data Exchange (ETDEWEB)

    Grest, Gary S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve these properties over large time and length scales it is imperative to develop coarse-grained models which retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
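
    As a simple illustration of the mapping step such coarse-grained models start from, the sketch below lumps a fixed number of backbone carbons of a polyethylene chain into one bead placed at their geometric center; the chain coordinates are randomly generated placeholders, not configurations from the reported simulations.

        import numpy as np

        def coarse_grain(chain_xyz, atoms_per_bead):
            """Map an atomistic chain (N x 3 array) onto beads of `atoms_per_bead` sites each."""
            n_beads = len(chain_xyz) // atoms_per_bead
            trimmed = chain_xyz[: n_beads * atoms_per_bead]
            return trimmed.reshape(n_beads, atoms_per_bead, 3).mean(axis=1)

        rng = np.random.default_rng(0)
        # Placeholder "united-atom" polyethylene chain: 120 backbone carbons on a random walk
        chain = np.cumsum(rng.normal(scale=1.54, size=(120, 3)), axis=0)   # ~C-C bond length (angstrom)

        for lam in (2, 4, 6):          # methylene groups per coarse-grained bead
            beads = coarse_grain(chain, lam)
            print(f"lambda={lam}: {len(beads)} beads")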

  1. Computer-based construction of gene models using the GRAIL Gene Assembly Program

    Energy Technology Data Exchange (ETDEWEB)

    Einstein, J.R.; Mural, R.J.; Guan, X.; Uberbacher, E.C.

    1992-09-01

    The Gene Assembly Program (GAP), a module of GRAIL, assembles and scores gene models, given a DNA sequence and the outputs of other GRAIL modules for the sequence. The latter modules determine the positions of coding regions, the positions and scores of possible splice junctions, the positions of possible translation-initiation sites, the coding strand for the gene, and the probable-translation-frame function over the sequence. GAP tests combinations of those splice junctions which are within acceptable distances from the initial estimated edges of the coding regions. Every complete gene model, comprising translation-initiation site, splice junctions and stop codon, which agrees with GAP's set of rules is scored, and the ten highest-scoring models are saved. Each gene-model score depends on the input scores of splice junctions used in the model, their positions relative to the initial estimated edges of the included coding regions, and the degree of agreement of the entire model with the probable-translation-frame function. If error conditions are detected, the present version of GAP attempts to correct them by the insertion and/or deletion of one or more coding regions. These insertions and deletions have resulted in a net improvement of gene models, and a particularly large improvement in the recognition and characterization of very short coding regions. The results of GRAIL including the GAP module for 26 sequences from GenBank, each with a biochemically characterized single gene, are quite promising and demonstrate the feasibility of constructing largely accurate gene models strictly on the basis of sequence data.

  2. Computer-based construction of gene models using the GRAIL Gene Assembly Program

    Energy Technology Data Exchange (ETDEWEB)

    Einstein, J.R.; Mural, R.J.; Guan, X.; Uberbacher, E.C.

    1992-09-01

    The Gene Assembly Program (GAP), a module of GRAIL, assembles and scores gene models, given a DNA sequence and the outputs of other GRAIL modules for the sequence. The latter modules determine the positions of coding regions, the positions and scores of possible splice junctions, the positions of possible translation-initiation sites, the coding strand for the gene, and the probable-translation-frame function over the sequence. GAP tests combinations of those splice junctions which are within acceptable distances from the initial estimated edges of the coding regions. Every complete gene model, comprising translation-initiation site, splice junctions and stop codon, which agrees with GAP's set of rules is scored, and the ten highest-scoring models are saved. Each gene-model score depends on the input scores of splice junctions used in the model, their positions relative to the initial estimated edges of the included coding regions, and the degree of agreement of the entire model with the probable-translation-frame function. If error conditions are detected, the present version of GAP attempts to correct them by the insertion and/or deletion of one or more coding regions. These insertions and deletions have resulted in a net improvement of gene models, and a particularly large improvement in the recognition and characterization of very short coding regions. The results of GRAIL including the GAP module for 26 sequences from GenBank, each with a biochemically characterized single gene, are quite promising and demonstrate the feasibility of constructing largely accurate gene models strictly on the basis of sequence data.

  3. Computational 3D imaging to quantify structural components and assembly of protein networks.

    Science.gov (United States)

    Asgharzadeh, Pouyan; Özdemir, Bugra; Reski, Ralf; Röhrle, Oliver; Birkhold, Annette I

    2018-03-15

    Traditionally, protein structures have been described by the secondary structure architecture and fold arrangement. However, the relatively novel method of 3D confocal microscopy of fluorescent-protein-tagged networks in living cells allows resolving the detailed spatial organization of these networks. This provides new possibilities to predict network functionality, as structure and function seem to be linked at various scales. Here, we propose a quantitative approach using 3D confocal microscopy image data to describe protein networks based on their nano-structural characteristics. This analysis is constructed in four steps: (i) Segmentation of the microscopic raw data into a volume model and extraction of a spatial graph representing the protein network. (ii) Quantifying protein network gross morphology using the volume model. (iii) Quantifying protein network components using the spatial graph. (iv) Linking these two scales to obtain insights into network assembly. Here, we quantitatively describe the filamentous temperature sensitive Z protein network of the moss Physcomitrella patens and elucidate relations between network size and assembly details. Future applications will link network structure and functionality by tracking dynamic structural changes over time and comparing different states or types of networks, possibly allowing more precise identification of (mal)functions or the design of protein-engineered biomaterials for applications in regenerative medicine. Protein networks are highly complex and dynamic structures that play various roles in biological environments. Analyzing the detailed spatial structure of these networks may lead to new insight into biological functions and malfunctions. Here, we propose a tool set that extracts structural information at two scales of the protein network and therefore allows questions such as "how is the network built?" or "how do networks grow?" to be addressed. Copyright © 2018 Acta Materialia Inc. Published by

  4. Modeling the assembly order of multimeric heteroprotein complexes.

    Directory of Open Access Journals (Sweden)

    Lenna X Peterson

    2018-01-01

    Full Text Available Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure
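    For illustration, the simple buried-surface-area baseline mentioned in the abstract can be sketched as a greedy procedure; the subunit names and interface areas below are hypothetical, and this is not the Path-LZerD algorithm itself.

        # Sketch of the buried-surface-area baseline: greedily grow the complex by
        # always adding the subunit that buries the largest interface area with the
        # current partial assembly. Interface areas (Å^2) are invented for illustration.
        interface_area = {
            ("A", "B"): 1800.0,
            ("A", "C"): 600.0,
            ("B", "C"): 1200.0,
            ("C", "D"): 900.0,
            ("B", "D"): 300.0,
        }

        def area(x, y):
            return interface_area.get((x, y)) or interface_area.get((y, x), 0.0)

        def assembly_order(subunits):
            # Start from the pair with the largest interface, then add subunits greedily.
            pairs = [(area(x, y), x, y) for i, x in enumerate(subunits) for y in subunits[i + 1:]]
            _, x, y = max(pairs)
            order = [x, y]
            remaining = [s for s in subunits if s not in (x, y)]
            while remaining:
                best = max(remaining, key=lambda s: sum(area(s, t) for t in order))
                order.append(best)
                remaining.remove(best)
            return order

        print(assembly_order(["A", "B", "C", "D"]))   # e.g. ['A', 'B', 'C', 'D']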

  5. Modeling the assembly order of multimeric heteroprotein complexes.

    Science.gov (United States)

    Peterson, Lenna X; Togawa, Yoichiro; Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Roy, Amitava; Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be

  6. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.

  7. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
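    The level-set expansion kernel at the heart of the graph benchmark can be sketched as repeated frontier expansion from a seed vertex; the toy in-memory graph below is illustrative only, whereas the actual benchmark streamed a much larger scale-free graph from disk or Flash.

        from collections import defaultdict

        # Minimal in-memory sketch of level-set expansion: starting from a seed vertex,
        # repeatedly expand the frontier one hop at a time, recording each level.
        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 5)]   # toy graph
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def level_sets(seed, hops):
            visited, frontier, levels = {seed}, {seed}, [{seed}]
            for _ in range(hops):
                frontier = {w for v in frontier for w in adj[v]} - visited
                if not frontier:
                    break
                visited |= frontier
                levels.append(frontier)
            return levels

        for k, level in enumerate(level_sets(seed=0, hops=3)):
            print(f"level {k}: {sorted(level)}")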

  8. Benchmarking in Academic Pharmacy Departments

    OpenAIRE

    Bosso, John A.; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilit...

  9. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
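    The loading pattern that Pynamic emulates can be approximated, very roughly, by generating many small modules and timing their import; Pynamic itself builds and links real shared libraries, so the pure-Python sketch below (module count and layout invented) only illustrates the idea.

        import os, sys, tempfile, time

        # Rough sketch: create many small modules, then measure how long importing all
        # of them takes. Plain .py files stand in for the dynamically linked libraries
        # that Pynamic actually generates and loads.
        N_MODULES = 200
        workdir = tempfile.mkdtemp(prefix="pynamic_sketch_")
        for i in range(N_MODULES):
            with open(os.path.join(workdir, f"mod_{i}.py"), "w") as f:
                f.write(f"def entry():\n    return {i}\n")

        sys.path.insert(0, workdir)
        start = time.perf_counter()
        total = 0
        for i in range(N_MODULES):
            module = __import__(f"mod_{i}")
            total += module.entry()
        elapsed = time.perf_counter() - start
        print(f"imported {N_MODULES} modules in {elapsed:.3f} s (checksum {total})")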

  10. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6 km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using the least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...
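    The least squares adjustment of a leveling network can be illustrated with a toy example; the observed height differences and the two-benchmark network below are invented and much smaller than the ABU network described above.

        import numpy as np

        # Toy least-squares adjustment of a leveling network (heights of benchmarks B1, B2)
        # tied to a fixed reference pillar R with known height 100.000 m.
        H_R = 100.000
        # Invented observations: R->B1 = 1.502 m, B1->B2 = 0.751 m, R->B2 = 2.249 m.
        # Unknowns x = [H_B1, H_B2]; each row of A encodes the observed height difference.
        A = np.array([[ 1.0, 0.0],    # H_B1 - H_R
                      [-1.0, 1.0],    # H_B2 - H_B1
                      [ 0.0, 1.0]])   # H_B2 - H_R
        l = np.array([1.502, 0.751, 2.249]) + np.array([H_R, 0.0, H_R])
        x, residuals, *_ = np.linalg.lstsq(A, l, rcond=None)
        print(f"adjusted heights: B1 = {x[0]:.4f} m, B2 = {x[1]:.4f} m")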

  11. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  12. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  13. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  14. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  15. Xenopus laevis: an ideal experimental model for studying the developmental dynamics of neural network assembly and sensory-motor computations.

    Science.gov (United States)

    Straka, Hans; Simmers, John

    2012-04-01

    The amphibian Xenopus laevis represents a highly amenable model system for exploring the ontogeny of central neural networks, the functional establishment of sensory-motor transformations, and the generation of effective motor commands for complex behaviors. Specifically, the ability to employ a range of semi-intact and isolated preparations for in vitro morphophysiological experimentation has provided new insights into the developmental and integrative processes associated with the generation of locomotory behavior during changing lifestyles. In vitro electrophysiological studies have begun to explore the functional assembly, disassembly and dynamic plasticity of spinal pattern generating circuits as Xenopus undergoes the developmental switch from larval tail-based swimming to adult limb-based locomotion. Major advances have also been made in understanding the developmental onset of multisensory signal processing for reactive gaze and posture stabilizing reflexes during self-motion. Additionally, recent semi-intact animal and isolated CNS experiments have provided compelling evidence that in Xenopus tadpoles, predictive feed-forward signaling from the spinal locomotor pattern generator is engaged in minimizing visual disturbances during tail-based swimming. This new concept questions the traditional view of retinal image stabilization, which in vertebrates has been exclusively attributed to sensory-motor transformations of body/head motion-detecting signals. Moreover, changes in visuomotor demands associated with the developmental transition in propulsive strategy from tail- to limb-based locomotion during metamorphosis presumably necessitate corresponding adaptive alterations in the intrinsic spinoextraocular coupling mechanism. Consequently, Xenopus provides a unique opportunity to address basic questions on the developmental dynamics of neural network assembly and sensory-motor computations for vertebrate motor behavior in general. Copyright

  16. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.

    2012-10-29

    The preference of experimentally realistic sized 4-nm facetted nanocrystals (NCs), emulating Pb chalcogenide quantum dots, to spontaneously choose a crystal habit for NC superlattices (Face Centered Cubic (FCC) vs. Body Centered Cubic (BCC)) is investigated using molecular simulation approaches. Molecular dynamics simulations, using united atom force fields, are conducted to simulate systems comprised of cube-octahedral-shaped NCs covered by alkyl ligands, in the absence and presence of experimentally used solvents, toluene and hexane. System sizes in the 400,000-500,000-atom scale followed for nanoseconds are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice and its preference of symmetry, as we vary the ligand length of the chains, from 9 to 24 CH2 groups, and the choice of solvent. We find that hexane and toluene are "good" solvents for the NCs, which penetrate the ligand corona all the way to the NC surfaces. We compute the free energy difference between FCC and BCC NC superlattice symmetries to determine the system's preference for either geometry, as the ratio of the length of the ligand to the diameter of the NC is varied. We explain these preferences in terms of different mechanisms in play, whose relative strength determines the overall choice of geometry. © 2012 Wiley Periodicals, Inc.

  17. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  18. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  19. Benchmarking RDF Storage Engines

    NARCIS (Netherlands)

    Y. Zhang (Ying); M.-D. Pham (Minh-Duc); F.E. Groffen (Fabian); E. Liarou (Erietta); P.A. Boncz (Peter); M.L. Kersten (Martin); J.P. Calbimonte; O. Corcho

    2012-01-01

    In this deliverable, we present version V1.0 of SRBench, the first benchmark for Streaming RDF engines, designed in the context of Task 1.4 of PlanetData, completely based on real-world datasets. With the increasing problem of too much streaming data but not enough knowledge, researchers

  20. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  1. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    1998-06-19

    fifth generation was global benchmarking, which was "a global application where international trade, cultural, and business process distinctions...religious dietary needs of inmates. In the financial area of foodservice, usage of food cost percentage, labor cost percentage, supply cost percentage

  2. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  3. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept - the firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, work organization; some other factors can be a cause even if they are not directly observed by the researcher. The critical need for management individuals/groups to continuously improve their firm/company's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine what performance measures are most critical in determining their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm level performance are critical interdependent activities. Firm level variables, used to infer performance, are often interdependent due to operational reasons. Hence, the managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  4. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  5. Comparative Study of DIMPLE benchmark with Two-step and Direct Modelling Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Wonkyeong; Lee, Deokjung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Kozlowski, Tomasz [Univ. of Illinois, Urbana (United States)

    2014-10-15

    The DIMPLE benchmark problem has been analyzed using both a two-step approach with SERPENT/PARCS and direct Monte Carlo modeling with SERPENT and MCNP. Detailed computational models are developed in this paper and the calculation results of SERPENT/PARCS are compared against those of Monte Carlo codes SERPENT and MCNP. The SERPENT 1.1.19 code was employed to homogenize the fuel assembly and reflector domains for nodal calculation. Then, the PARCS 3.0 code was used to solve two group neutron diffusion equations, and the results were compared to the full-core heterogeneous solution calculated with SERPENT 1.1.19 and MCNP6. The results show that the reflector with baffle requires the use of assembly discontinuity factors (ADF). It is presumed that the homogeneous results would have been improved if ADFs were used.
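    The assembly discontinuity factor referred to above is conventionally defined, per energy group g and node face s, as the ratio of the heterogeneous surface-averaged flux to the surface flux reconstructed from the homogenized nodal solution (this is the standard definition, not a detail quoted from the paper):

        f_{g}^{s} = \frac{\phi_{g}^{\mathrm{het},\,s}}{\phi_{g}^{\mathrm{hom},\,s}}

    where the numerator comes from the heterogeneous lattice calculation and the denominator from the homogenized nodal solution on the same face.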

  6. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  7. Applied, theoretical modeling of space-based assembly, using expert system architecture for computer-aided engineering tool development

    Science.gov (United States)

    Jolly, Steven Douglas

    1992-01-01

    The challenges associated with constructing interplanetary spacecraft and space platforms in low earth orbit are such that it is imperative that comprehensive, preliminary process planning analyses be completed before committing funds for Phase B design (detail design, development). Phase A and 'pre-Phase A' design activities will commonly address engineering questions such as mission-design structural integrity, attitude control, thermal control, etc. But the questions of constructability, maintainability and reliability during the assembly phase usually go unaddressed until the more mature stages of design (or very often production) are reached. This is an unacceptable strategy for future space missions whether they be government or commercial ventures. After interviews with expert Aerospace and Construction industry planners, a new methodology was formulated and a Blackboard Metaphor Knowledge-based Expert System synthesis model has been successfully developed which can decompose interplanetary vehicles into deliverable orbital subassemblies. Constraint propagation, including launch vehicle payload shroud envelope, is accomplished with heuristic and numerical algorithms including a unique adaptation of a reasoning technique used by Stanford researchers in terrestrial automated process planning. The model is a hybrid combination of rule and frame-based representations, designed to integrate into a Computer-Aided Engineering (CAE) environment. Emphasis is placed on the actual joining, rendezvous, and refueling of the orbiting, dynamic spacecraft. Significant results of this new methodology applied to a large Mars interplanetary spacecraft (736,000 kg) designed by Boeing show high correlation to manual decomposition and planning analysis studies, but at a fraction of the time, and with little user interaction. Such Computer-Aided Engineering (CAE) tools would greatly leverage the designer's ability to assess constructability.

  8. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as the starting point. The regulation of utility companies is then addressed, after which...

  9. Piping systems physical benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Subudhi, M.

    1985-01-01

    Physical benchmark evaluations are used to assess the accuracy and adequacy of the analysis methods and assumptions used in typical piping qualification evaluations. To date physical benchmark evaluations have been completed for five systems involving both laboratory tested and in situ piping. In each evaluation elastic finite element methods are used to predict the time history response of a system for which physical test results are available. In the analytical simulations the measured support excitations and the measured damping properties are used as input and the acceleration and displacement response of piping interior points are predicted as output. Most evaluations were performed blind in that only the measured inputs are provided at the time of analysis. A summary of the overall results as well as predicted and measured time history traces for selected points are included.

  10. Benchmarking HIPAA compliance.

    Science.gov (United States)

    Wagner, James R; Thoman, Deborah J; Anumalasetty, Karthikeyan; Hardre, Pat; Ross-Lazarov, Tsvetomir

    2002-01-01

    One of the nation's largest academic medical centers is benchmarking its operations using internally developed software to improve privacy/confidentiality of protected health information (PHI) and to enhance data security to comply with HIPAA regulations. It is also coordinating the development of a web-based interactive product that can help hospitals, physician practices, and managed care organizations measure their compliance with HIPAA regulations.

  11. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing performance degradation to be identified promptly. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These benchmarks range from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.

  12. A stylized three dimensional PWR whole-core benchmark problem with Gadolinium

    Energy Technology Data Exchange (ETDEWEB)

    Douglass, Steven, E-mail: douglass.steven@gmail.co [Nuclear and Radiological Engineering/Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States); Rahnema, Farzad, E-mail: farzad@gatech.ed [Nuclear and Radiological Engineering/Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States); Margulies, Johann, E-mail: johann.margulies@me.gatech.ed [Nuclear and Radiological Engineering/Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States)

    2010-10-15

    Recent advancements in computer technology have led to an increased ability to solve complicated reactor problems, and as a result, there has been an interest in codes and methods that utilize whole-core transport in three-dimensional, large-scale, highly-heterogeneous configurations. Most current reactor benchmark problems are either small-scale or involve spatial homogenization (pin cell or full-assembly) for diffusion codes; therefore, new whole-core benchmark problems must be designed in order to validate whole-core transport methods. This study focuses on a PWR case, with a low leakage loading pattern and Gadolinium as a burnable absorber. In this paper, a detailed description of a new 3D whole-core numerical benchmark problem based on a 4-loop layout is presented with geometric, material and cross section specifications. A set of 2-group and 4-group region-wise neutron cross section libraries is generated using the lattice depletion code HELIOS, and an MCNP model is described for three configurations (all rods in, all rods out, power-shaping rods inserted). Eigenvalues are presented for the 2-group calculations of all three configurations, and detailed pin fission density results are presented for the power-shaping configuration.
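    The 2-group calculations mentioned above solve the standard two-group diffusion eigenvalue problem, which in its usual form (fission neutrons born in the fast group, no upscattering) reads:

        -\nabla\cdot D_1\nabla\phi_1 + (\Sigma_{a1}+\Sigma_{1\to 2})\,\phi_1 = \frac{1}{k_{\mathrm{eff}}}\left(\nu\Sigma_{f1}\,\phi_1 + \nu\Sigma_{f2}\,\phi_2\right)
        -\nabla\cdot D_2\nabla\phi_2 + \Sigma_{a2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1

    with k_eff the reported eigenvalue; the 4-group libraries generalize this to four coupled group equations.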

  13. Monte Carlo Benchmark Calculations for HTR-10 Initial Core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Chul; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    These days, pebble-bed and other high-temperature gas-cooled reactor (HTGR) designs are once again in vogue in connection with hydrogen production. In this study, as a part of establishing a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP code. As a benchmark model, the initial core of the 10 MW High Temperature Gas-cooled Reactor-Test Module (HTR-10) in China was selected. After detailed MCNP modeling of the whole facility, benchmark calculations were performed. This study deals with the core physics benchmark problems proposed for the HTR-10 reactor initial core. Results for the benchmark problems were obtained with the MCNP5 code.

  14. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    Energy Technology Data Exchange (ETDEWEB)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for {sup 238}U(n,f), {sup 238}U(n,{gamma}), {sup 239}Pu(n,f), and {sup 239}Pu({nu}). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined.
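    The energy-dependent sensitivities and induced uncertainties computed by such systems are conventionally expressed as relative sensitivity coefficients and a sandwich propagation rule (standard definitions, not notation quoted from the report):

        S_{R,\sigma}(E) = \frac{\sigma(E)}{R}\,\frac{\partial R}{\partial \sigma(E)}, \qquad \left(\frac{\Delta R}{R}\right)^{2} = \mathbf{S}^{\mathsf{T}}\,\mathbf{C}\,\mathbf{S}

    where R is an integral performance parameter, σ(E) a group cross section, and C the relative covariance matrix of the nuclear data.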

  16. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with the possibility of collisions between grids and which is subjected to a seismic loading.
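    The reduced-order model described above amounts to a Galerkin projection of the nonlinear equations of motion onto the basis of global displacements; schematically (a standard form, not the authors' own notation):

        u(t) \approx \Phi\, q(t), \qquad \Phi^{\mathsf{T}} M \Phi\, \ddot{q} + \Phi^{\mathsf{T}} D \Phi\, \dot{q} + \Phi^{\mathsf{T}} K \Phi\, q + \Phi^{\mathsf{T}} f_{\mathrm{NL}}(\Phi q) = \Phi^{\mathsf{T}} f_{\mathrm{ext}}(t)

    where Φ collects the global displacement basis vectors and q(t) is the small set of generalized coordinates.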

  17. Characterizing Computers And Predicting Computing Times

    Science.gov (United States)

    Saavedra-Barrera, Rafael H.

    1991-01-01

    Improved method for evaluation and comparison of computers running same or different FORTRAN programs devised. Enables one to predict time necessary to run given "benchmark" or other standard program on given computer, in scalar mode and without optimization of codes generated by compiler. Such "benchmark" running times are principal measures used to characterize performances of computers; of interest to designers, manufacturers, programmers, and users.
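    The underlying idea, predicting run time from machine-specific per-operation times and program-specific operation counts, can be sketched as a simple dot product; the operation categories and all numbers below are invented for illustration.

        # Toy sketch of the characterization idea: predicted time is the dot product of
        # per-operation counts (a property of the program) with per-operation execution
        # times (a property of the machine). All values are invented.
        op_counts = {            # dynamic operation counts for one benchmark program
            "fadd": 4.0e8,
            "fmul": 3.2e8,
            "load": 9.0e8,
            "branch": 1.1e8,
        }
        machine_times = {        # measured seconds per operation on a given computer
            "fadd": 4.0e-9,
            "fmul": 5.0e-9,
            "load": 6.0e-9,
            "branch": 2.0e-9,
        }
        predicted = sum(op_counts[op] * machine_times[op] for op in op_counts)
        print(f"predicted run time: {predicted:.2f} s")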

  18. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
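    In the standard RB analysis referred to above, the averaged survival probability is fit to an exponential in the sequence length m and the reported number r is derived from the decay parameter (standard expressions, for a d-dimensional system, e.g. d = 2^n for n qubits):

        P(m) = A\,p^{m} + B, \qquad r = \frac{(d-1)(1-p)}{d}

    The point of the paper is that this r need not coincide with the average gate infidelity of any particular representation of the imperfect gates.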

  19. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies of other research groups which focused on the reproducibility of 1D permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared by using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  20. Benchmarking and library quality maturity

    OpenAIRE

    Wilson, F.; Town, SJ

    2006-01-01

    Purpose - It remains unresolved from the literature whether benchmarking is a useful and appropriate tool for the library and information services sector. The aim of this research was to gather evidence to establish whether benchmarking provides a real and lasting benefit to library and information services. Design/methodology/approach - The study investigated the long term effects of a benchmarking exercise on the quality level of three UK academic libraries. However, an appropriate frame...

  1. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  2. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  3. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    Full Text Available The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  4. Benchmarking Academic Anatomic Pathologists

    Science.gov (United States)

    Parslow, Tristram

    2016-01-01

    The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative value unit

  5. 3D investigation on polystyrene colloidal crystals by floatage self-assembly with mixed solvent via synchrotron radiation x-ray phase-contrast computed tomography

    Science.gov (United States)

    Fu, Yanan; Xie, Honglan; Deng, Biao; Du, Guohao; Xiao, Tiqiao

    2017-06-01

    The floatage self-assembly method was introduced with a mixed solvent as the medium of the polystyrene sphere suspension to fabricate the colloidal crystal. The three-dimensional (3D) void system of the colloidal crystal was noninvasively characterized by synchrotron radiation phase-contrast computed tomography, and quantitative image analysis was applied to the polystyrene sphere colloidal crystal. Compared with the gravity sedimentation method, the three samples fabricated by floatage self-assembly with mixed solvents have the lowest porosity, and when ethylene glycol and water were mixed at a ratio of 1:1, the lowest porosity of 27.49% could be achieved, which is very close to the minimum porosity of an ordered 3D monodisperse sphere array (26%). In single slices, the porosities and fractal dimension of the voids were calculated. The results showed that two factors significantly influence the porosity of the whole colloidal crystal: the orderliness of the first deposited sphere layer and the sedimentation speed of the spheres. The floatage self-assembly could induce a stable close-packing process, resulting from the powerful nucleation force (the lateral capillary force) coupled with the mixed solvent, which regulates the upward floating speed so as to match the assembly rate.
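    The 26% figure quoted above is the void fraction of an ideal close-packed arrangement of equal spheres, since the packing fraction of an FCC (or HCP) lattice is

        \frac{\pi}{3\sqrt{2}} \approx 0.7405, \qquad \text{porosity} \approx 1 - 0.7405 = 0.2595 \approx 26\%

    so the measured 27.49% porosity is close to the geometric limit for an ordered monodisperse sphere array.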

  6. SDhaP: haplotype assembly for diploids and polyploids via semi-definite programming.

    Science.gov (United States)

    Das, Shreepriya; Vikalo, Haris

    2015-04-03

    The goal of haplotype assembly is to infer haplotypes of an individual from a mixture of sequenced chromosome fragments. Limited lengths of paired-end sequencing reads and inserts render haplotype assembly computationally challenging; in fact, most of the problem formulations are known to be NP-hard. Dimensions (and, therefore, difficulty) of the haplotype assembly problems keep increasing as the sequencing technology advances and the length of reads and inserts grow. The computational challenges are even more pronounced in the case of polyploid haplotypes, whose assembly is considerably more difficult than in the case of diploids. Fast, accurate, and scalable methods for haplotype assembly of diploid and polyploid organisms are needed. We develop a novel framework for diploid/polyploid haplotype assembly from high-throughput sequencing data. The method formulates the haplotype assembly problem as a semi-definite program and exploits its special structure - namely, the low rank of the underlying solution - to solve it rapidly and with high accuracy. The developed framework is applicable to both diploid and polyploid species. The code for SDhaP is freely available at https://sourceforge.net/projects/sdhap . Extensive benchmarking tests on both real and simulated data show that the proposed algorithms outperform several well-known haplotype assembly methods in terms of either accuracy or speed or both. Useful recommendations for coverages needed to achieve near-optimal solutions are also provided.
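    A common objective in haplotype assembly is the minimum error correction (MEC) score, the number of allele calls that must be flipped so that every fragment is consistent with one of the haplotypes; the toy diploid sketch below illustrates only that scoring and is not SDhaP's semidefinite-programming formulation.

        # Minimal sketch of the MEC (minimum error correction) objective used in many
        # haplotype assembly methods; toy diploid fragments with 0/1 alleles and '-'
        # marking positions a fragment does not cover.
        reads = ["01-0", "0110", "-101", "1011"]
        h1, h2 = "0110", "1001"        # candidate pair of complementary haplotypes

        def mismatches(read, hap):
            return sum(1 for r, h in zip(read, hap) if r != "-" and r != h)

        def mec(reads, h1, h2):
            # Each fragment is assigned to whichever haplotype it matches best.
            return sum(min(mismatches(r, h1), mismatches(r, h2)) for r in reads)

        print("MEC score:", mec(reads, h1, h2))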

  7. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
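
    The record is truncated, but the underlying benchmark dose idea can be illustrated independently of BMDS: fit a dose-response model to dichotomous data and solve for the dose at which extra risk equals a chosen benchmark response (BMR), commonly 10%. The sketch below uses made-up data and a simple log-logistic model, and omits the lower confidence limit (BMDL) that a full BMD analysis would also report; it is not BMDS itself.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical dichotomous dose-response data: dose, animals tested, responders.
dose  = np.array([0.0, 10.0, 30.0, 100.0])
n     = np.array([50,   50,   50,    50])
resp  = np.array([ 1,    4,   12,    31])
p_obs = resp / n

def log_logistic(d, g, slope, ed50):
    """P(d) = g + (1 - g) / (1 + (ed50 / d)**slope); a common dichotomous model."""
    d = np.maximum(np.asarray(d, dtype=float), 1e-12)   # avoid division by zero at d = 0
    return g + (1.0 - g) / (1.0 + (ed50 / d) ** slope)

params, _ = curve_fit(log_logistic, dose, p_obs, p0=[0.02, 1.5, 50.0],
                      bounds=([0.0, 0.1, 1e-3], [0.5, 10.0, 1e4]))
g, slope, ed50 = params

# Benchmark dose: dose where extra risk (P(d) - P(0)) / (1 - P(0)) equals the BMR.
bmr = 0.10
extra_risk = lambda d: (log_logistic(d, g, slope, ed50) - g) / (1.0 - g) - bmr
bmd = brentq(extra_risk, 1e-6, dose.max())
print(f"background={g:.3f}, slope={slope:.2f}, ED50={ed50:.1f}, BMD(10%) ~ {bmd:.1f}")
```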

  9. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  10. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  11. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth
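
    The abstract is cut off, but the Denton (1971) approach it refers to is conventionally stated as a movement-preservation problem; the proportional first-difference form below is the textbook formulation (not taken from this particular paper), in which a high-frequency indicator z_t is adjusted into a benchmarked series x_t that matches the low-frequency totals Y_m:

```latex
\min_{x_1,\dots,x_T}\;\sum_{t=2}^{T}\left(\frac{x_t}{z_t}-\frac{x_{t-1}}{z_{t-1}}\right)^{2}
\quad\text{subject to}\quad
\sum_{t\in m} x_t = Y_m \quad\text{for every benchmark period } m .
```

    As the abstract notes, the entropy-based approach is motivated by the observation that quadratic movement criteria of this kind need not preserve signs when the original series changes sign.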

  12. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulic (T-H) simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is challenging because of insufficient measured data; one practical alternative is a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  13. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
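
    For readers unfamiliar with the kernels mentioned, STREAM reports sustainable memory bandwidth from four simple vector operations; the official benchmark is written in C/Fortran, but a rough NumPy analogue of its "triad" kernel (only an approximation, since NumPy creates temporaries) looks like this:

```python
import time
import numpy as np

N = 50_000_000                       # array length; ~400 MB per double-precision array
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
q = 3.0

t0 = time.perf_counter()
a[:] = b + q * c                     # STREAM "triad": a(i) = b(i) + q * c(i)
dt = time.perf_counter() - t0

bytes_moved = 3 * N * 8              # read b and c, write a (8 bytes per double)
print(f"approximate triad bandwidth: {bytes_moved / dt / 1e9:.1f} GB/s")
```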

  14. Summary of ACCSIM and ORBIT Benchmarking Simulations

    CERN Document Server

    AIBA, M

    2009-01-01

    We have performed a benchmarking study of ORBIT and ACCSIM, which are accelerator tracking codes having routines to evaluate space charge effects. The study is motivated by the need to predict and understand beam behaviour in the CERN Proton Synchrotron Booster (PSB), in which direct space charge is expected to be the dominant performance limitation. Historically at CERN, ACCSIM has been employed for space charge simulation studies. A benchmark study using ORBIT has been started to confirm the results from ACCSIM and to profit from the advantages of ORBIT, such as the capability of parallel processing. We observed a fair agreement in emittance evolution in the horizontal plane but not in the vertical one. This may be partly due to the fact that the algorithm to compute the space charge field is different between the two codes.

  15. Benchmark Verordening, ook voor u relevant

    NARCIS (Netherlands)

    van Praag, E.J.

    2014-01-01

    The European Commission has put forward a proposal for a Benchmark Regulation. This regulation, which was expected to enter into force at the end of 2015, regulates the provision of benchmarks, the use of benchmarks for financial products, and the supply of input data for benchmarks. The Benchmark

  16. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  17. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Benchmarking for Bayesian Reinforcement Learning.

    Directory of Open Access Journals (Sweden)

    Michael Castronovo

    Full Text Available In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment while using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed.

  19. Benchmarking for Bayesian Reinforcement Learning.

    Science.gov (United States)

    Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment while using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed.
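
    As a minimal sketch of the comparison protocol described above (not the authors' library), the core loop draws MDPs from a prior, scores each agent by its mean return, and records the computation time; the toy agents and all names below are hypothetical.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def sample_mdp(n_states=5, n_actions=3):
    """Draw a random finite MDP (transitions and rewards) from a simple prior."""
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] sums to 1
    R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
    return P, R

def run_episode(P, R, policy, horizon=50):
    s, total = 0, 0.0
    for _ in range(horizon):
        a = policy(s)
        total += R[s, a]
        s = rng.choice(P.shape[0], p=P[s, a])
    return total

# Two toy "agents": a uniform-random policy and one that greedily uses the true rewards.
def random_agent(P, R):
    return lambda s: int(rng.integers(R.shape[1]))

def greedy_reward_agent(P, R):
    return lambda s: int(np.argmax(R[s]))

def compare(agents, n_mdps=200):
    """Mean return over MDPs drawn from the prior, plus total computation time."""
    results = {}
    for name, make_policy in agents.items():
        t0, returns = time.perf_counter(), []
        for _ in range(n_mdps):
            P, R = sample_mdp()
            returns.append(run_episode(P, R, make_policy(P, R)))
        results[name] = (float(np.mean(returns)), time.perf_counter() - t0)
    return results

print(compare({"random": random_agent, "greedy-on-reward": greedy_reward_agent}))
```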

  20. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management systems was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics systems yield steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rates. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at a time scale less than one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
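
    The mm²·K/W figures quoted above are area-normalized (specific) thermal resistances; a common definition, not taken from the report itself, is:

```latex
R''_{\mathrm{th},\,j\text{-}l} \;=\; \frac{\bigl(T_{j} - T_{\mathrm{liquid}}\bigr)\, A}{P_{\mathrm{loss}}}
\qquad \bigl[\mathrm{mm^{2}\,K/W}\bigr],
```

    where T_j is the junction temperature, T_liquid the coolant temperature, A the relevant device (die) area and P_loss the dissipated power; normalizing by area allows packages of different sizes to be compared.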

  1. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, hydrogen, and petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels can cause just as many, or even more, greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarised the current understanding of the sustainability of fossil fuels, biofuels and electric driving. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  2. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  3. Benchmarking in Academic Pharmacy Departments

    Science.gov (United States)

    Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O.; Ross, Leigh Ann

    2010-01-01

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation. PMID:21179251

  4. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  5. A Heterogeneous Medium Analytical Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  6. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’, in order to learn from the best. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable ... evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  7. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  8. Comet whole-core solution to a stylized 3-dimensional pressurized water reactor benchmark problem with UO{sub 2}and MOX fuel

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, D.; Rahnema, F. [Georgia Inst. of Technology, 770 State Street, Atlanta, GA 30332-0745 (United States)

    2012-07-01

    A stylized pressurized water reactor (PWR) benchmark problem with UO{sub 2} and MOX fuel was used to test the accuracy and efficiency of the coarse mesh radiation transport (COMET) code. The benchmark problem contains 125 fuel assemblies and 44,000 fuel pins. The COMET code was used to compute the core eigenvalue and assembly and pin power distributions for three core configurations. In these calculations, a set of tensor products of orthogonal polynomials were used to expand the neutron angular phase space distribution on the interfaces between coarse meshes. The COMET calculations were compared with reference solutions from the Monte Carlo code MCNP using a recently published 8-group material cross-section library. The comparison showed that both the core eigenvalues and the assembly and pin power distributions predicted by COMET agree very well with the MCNP reference solution when the orders of the angular flux expansion in the two spatial variables and the polar and azimuthal angles on the mesh boundaries are 4, 4, 2 and 2. The mean and maximum differences in the pin fission density distribution ranged from 0.28%-0.44% and 3.0%-5.5%, respectively, all within the 3-sigma uncertainty of the MCNP solution. These comparisons indicate that COMET can achieve accuracy comparable to Monte Carlo. It was also found that COMET's computational speed is 450 times faster than MCNP. (authors)
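
    The expansion orders (4, 4, 2, 2) quoted above refer to the two spatial variables and the polar and azimuthal angles on coarse-mesh interfaces; a generic tensor-product expansion of that form (an illustration of the idea, not COMET's exact basis) is:

```latex
\psi(x, y, \mu, \varphi)\;\approx\;
\sum_{i=0}^{4}\sum_{j=0}^{4}\sum_{k=0}^{2}\sum_{l=0}^{2}
c_{ijkl}\; P_i(x)\, P_j(y)\, P_k(\mu)\, P_l(\varphi),
```

    where the P_n are orthogonal polynomials defined on the interface and the coefficients c_ijkl are the angular-flux moments exchanged between neighbouring coarse meshes.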

  9. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  10. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  11. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  12. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  13. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  14. Methodology for Benchmarking IPsec Gateways

    National Research Council Canada - National Science Library

    Adam Tisovský; Ivan Baronák

    2012-01-01

    ... proposed equilibrium throughput. According to our observations equilibrium throughput might be the most universal parameter for benchmarking security gateways as the others may be dependent on the duration of test trials...

  15. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking – the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  16. Simplified two and three dimensional HTTR benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Zhan [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, 770 State St., Atlanta, GA 30332 (United States); Rahnema, Farzad, E-mail: Farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, 770 State St., Atlanta, GA 30332 (United States); Zhang Dingkang; Pounders, Justin M. [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, 770 State St., Atlanta, GA 30332 (United States); Ougouag, Abderrafi M. [Idaho National Laboratory, Ms-3860, PO Box 1625, Idaho Falls, ID 83415-3860 (United States)

    2011-05-15

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distribution for selected blocks. Also included are the solutions for the single cell and single block problems.

  17. Summary of Results for the VENUS-7 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, Winfried; Langenbuch, Siegfried [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany); Na, Byung-Chan [ITER Organization, Cadarache (France); Sartori, Enrico [OECD/NEA Data Bank, Issy-les-Moulineaux (France); Wehmann, Udo-Karl [Consultant, Hildesheim (Germany)

    2008-07-01

    Within the OECD/NEA Working Party on Scientific Issues in Reactor Systems, an international benchmark was specified concerning the VENUS-7 critical assemblies as part of a continuous LWR criticality and reactor physics benchmark activity, with the focus on MOX and mixed UO{sub 2}/MOX-cores. This contribution summarizes the results. In pin cell calculations, excellent agreement is obtained among all benchmark contributions. Concerning the critical assemblies, the agreement among the calculated multiplication factors is also very satisfactory, accepting a considerable underestimation of the experimental values by all calculations. The degree of underestimation in Monte Carlo solutions using continuous energy nuclear data is directly related to the basic data used. Calculations with ENDF/B-VI yield the lowest multiplication factors, while using JEFF-3.1 yields the highest values. This is in line with the results for other small LWR critical assemblies. The quality of multiplication factors and reactivity differences obtained with codes using multi-group data is comparable to Monte Carlo results. (authors)

  18. Self-organization of Dynamic Distributed Computational Systems Applying Principles of Integrative Activity of Brain Neuronal Assemblies

    Directory of Open Access Journals (Sweden)

    Eugene Burmakin

    2009-02-01

    Full Text Available This paper presents a method for self-organization of distributed systems operating in a dynamic context. We propose the use of a simple, biologically (cognitive neuroscience) inspired method for system configuration that allows allocating most of the computational load off-line in order to improve the scalability of the system. The method proposed has less computational burden at runtime than traditional system adaptation approaches.

  19. Performance of genetic programming optimised Bowtie2 on genome comparison and analytic testing (GCAT) benchmarks.

    Science.gov (United States)

    Langdon, W B

    2015-01-01

    Genetic studies are increasingly based on short, noisy reads from next-generation scanners. Typically complete DNA sequences are assembled by matching short NextGen sequences against reference genomes. Despite considerable algorithmic gains since the turn of the millennium, matching both single-ended and paired-end strings to a reference remains computationally demanding. Further tailoring Bioinformatics tools to each new task or scanner remains highly skilled and labour intensive. With this in mind, we recently demonstrated a genetic programming based automated technique which generated a version of the state-of-the-art alignment tool Bowtie2 which was considerably faster on short sequences produced by a scanner at the Broad Institute and released as part of The Thousand Genome Project. Bowtie2GP and the original Bowtie2 release were compared on bioplanet's GCAT synthetic benchmarks. The Bowtie2GP enhancements were also applied to the latest Bowtie2 release (2.2.3, 29 May 2014) and retained both the GP and the manually introduced improvements. On both single-ended and paired-end synthetic next generation DNA sequence GCAT benchmarks Bowtie2GP runs up to 45% faster than Bowtie2. The loss in accuracy can be as little as 0.2-0.5% but up to 2.5% for longer sequences.

  20. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Zwermann, Winfried; Pautz, Andreas [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Timm, Wolf [AREVA NP, Erlangen (Germany)

    2010-05-15

    For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but also additional quantities like reactivity differences or pin-wise fission rate distributions have been assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution with up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data, are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code which is available from the current SCALE-6 version. (orig.)

  1. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    Directory of Open Access Journals (Sweden)

    Mariusz Butkiewicz

    2013-01-01

    Full Text Available With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
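
    The enrichment values quoted (15 to 101 at a 25% TPR cutoff) are typically computed as the precision of the top-ranked compounds, taken at the threshold where 25% of the true actives have been recovered, divided by the library's overall active rate. A small sketch under that assumption follows (conventions differ between studies, and all names here are hypothetical):

```python
import numpy as np

def enrichment_at_tpr(scores, labels, tpr_cutoff=0.25):
    """Precision over the smallest ranked list recovering `tpr_cutoff` of the actives,
    divided by the library's active rate. One common convention; details vary."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranked = labels[np.argsort(-scores)]          # sort compounds by predicted activity
    n_actives = ranked.sum()
    cum_tp = np.cumsum(ranked)
    k = int(np.searchsorted(cum_tp, tpr_cutoff * n_actives)) + 1   # list size reaching the TPR
    precision_at_k = cum_tp[k - 1] / k
    return precision_at_k / (n_actives / len(ranked))

# Toy screen: 10,000 compounds, ~1% actives, mildly informative scores.
rng = np.random.default_rng(1)
labels = (rng.random(10_000) < 0.01).astype(int)
scores = labels * rng.normal(1.0, 1.0, 10_000) + rng.normal(0.0, 1.0, 10_000)
print(f"enrichment @ 25% TPR ~ {enrichment_at_tpr(scores, labels):.1f}")
```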

  2. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems.

  3. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss-Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.

  4. Benchmarking for best practice environmental management.

    Science.gov (United States)

    Jenkins, Bryan R; Hine, Philip T

    2003-06-01

    Benchmarking of environmental performance to demonstrate the achievement of best practice environmental management is a component of a new form of licensing of industrial discharges in Western Australia. The paper describes the approaches to benchmarking for the critical environmental issues for an alumina refinery and wastewater treatment plant. It also describes the lessons learnt from the benchmarking process on appropriate methods, the benefits and difficulties in the benchmarking process, and changes that would assist benchmarking for best practice environmental management.

  5. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins, Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  6. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or benchmark...

  7. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  8. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  9. POTENTIAL BENCHMARKS FOR ACTINIDE PRODUCTION IN HANFORD REACTORS

    Energy Technology Data Exchange (ETDEWEB)

    PUIGH RJ; TOFFER H

    2011-10-19

    A significant experimental program was conducted in the early Hanford reactors to understand the reactor production of actinides. These experiments were conducted with sufficient rigor, in some cases, to provide useful information that can be utilized today in development of benchmark experiments that may be used for the validation of present computer codes for the production of these actinides in low enriched uranium fuel.

  10. Assembly line balancing with resource constraints using new rank-based crossovers

    Science.gov (United States)

    Kamarudin, N. H.; Rashid, M. F. F. Ab.

    2017-10-01

    Assembly line balancing (ALB) is about distributing the assembly tasks into workstations with an almost equal workload. Recently, researchers have started to consider resource constraints in ALB, such as machines and workers, to make the assembly layout more efficient. This paper presents an ALB problem with resource constraints (ALB-RC) to minimize the number of workstations, machines and workers. For the optimization purpose, a genetic algorithm (GA) with two new crossovers is introduced. The crossovers are developed using a ranking approach and are known as rank-based crossover type I and type II (RBC-I and RBC-II). These crossovers are tested against popular combinatorial crossovers using 17 benchmark problems. The computational experiment results indicated that RBC-II has better overall performance because of the balance between divergence and guidance in the reproduction process. In future work, RBC-I and RBC-II will be tested on different variants of the ALB problem.
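
    The RBC-I and RBC-II operators are specific to the paper, but the general flavour of a rank-based crossover for task-sequence chromosomes can be sketched as follows: each task's positions (ranks) in the two parents are blended into a single priority, and the child is the task list sorted by that priority. This is an illustrative, hypothetical implementation; a real ALB version would additionally repair the child against the precedence graph.

```python
import numpy as np

rng = np.random.default_rng(42)

def rank_based_crossover(parent_a, parent_b, weight=0.5):
    """Blend each task's rank in the two parent sequences and sort by the result.

    Illustrative only; not the RBC-I/RBC-II operators from the paper, and with no
    precedence repair, which an assembly-line-balancing GA would still need.
    """
    n = len(parent_a)
    rank_a = np.empty(n)
    rank_b = np.empty(n)
    rank_a[list(parent_a)] = np.arange(n)      # rank_a[task] = position of task in parent_a
    rank_b[list(parent_b)] = np.arange(n)
    priority = weight * rank_a + (1.0 - weight) * rank_b
    priority += rng.random(n) * 1e-6           # tiny random noise breaks ties
    return list(np.argsort(priority))          # child: tasks ordered by blended rank

parent_a = [0, 2, 1, 4, 3, 5]
parent_b = [2, 0, 4, 1, 5, 3]
print(rank_based_crossover(parent_a, parent_b))   # a valid permutation of the six tasks
```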

  11. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking ... of the ‘inside’ costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitations. These are all aspects that can easily dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention...

  12. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, J.; de Vries, Sjoerd A.; van Schaik, P.; van Schaik, Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  13. Lessons learned for participation in recent OECD-NEA reactor physics and thermalhydraulic benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Novog, D.R.; Leung, K.H.; Ball, M. [McMaster Univ., Dept. of Engineering Physics, Hamilton, Ontario (Canada)

    2013-07-01

    Over the last 6 years the OECD-NEA has initiated a series of computational benchmarks in the fields of reactor physics and thermalhydraulics. Within this context McMaster University has been a key contributor and has applied several state-of-the-art tools including TSUNAMI, DRAGON, ASSERT, STAR-CCM+, RELAP and TRACE. Considering the tremendous amount of international participation in these benchmarks, there were many lessons, both technical and non-technical, that should be shared. This paper presents a summary of the benchmarks, the results and contributions from McMaster, and the authors' opinion on the overall conclusions gained from these extensive benchmarks. The benchmarks discussed in this paper include the Uncertainty Analysis in Modelling (UAM), the BWR fine mesh bundle test (BFBT), the PWR Subchannel Boiling Test (PSBT), the MATiS mixing experiment and the IAEA super critical water benchmarks on heat transfer and stability. (author)

  14. Mercaptopyridine and 4-aminothiophenol self-assembled layers in metal-molecule-metal contacts: a computational DFT study

    Energy Technology Data Exchange (ETDEWEB)

    Kucera, Jan; Gross, Axel [Institute fuer Theoretische Chemie, Universitaet Ulm (Germany)

    2008-07-01

    Recently it has become possible to deposit two-dimensional Pd layers on top of a 4-mercaptopyridine (Mpy) or a 4-aminothiophenol (4-ATP) self-assembled monolayer (SAM) on Au(111), leading to metal-molecule-metal contacts. We performed periodic density functional theory (DFT) calculations in order to contribute to the interpretation of the experimentally observed geometric and electronic structures. The calculations show that the adsorption structures of Mpy and 4-ATP on Au(111) are very similar. Both molecules prefer to bind to bridge-hollow sites at low as well as at high coverages. At low coverages, the molecules are significantly tilted from the Au(111) surface normal, whereas a denser packing leads to more upright configurations. The Pd/SAM interfaces correspond to metastable configurations in spite of the relatively strong Pd-Au interaction. The Pd-SAM contact is made through one-fold coordinated Pd-N bonds. In agreement with the experiment, the density of states (DOS) of the Pd layer shows a significant reduction close to the Fermi level with respect to bulk Pd due to the Pd-N interaction. Also in agreement with experiment, the calculations confirm that 4-ATP is able to form bilayer structures connected through hydrogen bonds between the sulfur head group and up to three hydrogen atoms of the amino group.

  15. Benchmarking ray-traced tropospheric delays

    Science.gov (United States)

    Nafisi, V.; Wijaya, D.; Boehm, J.; Schuh, H.; Hobiger, T.; Ichikawa, R.; Urquhart, L.; Santos, M. C.; Nievinski, F. G.; Zus, F.; Wickert, J.; Gegout, P.; Ardalan, A. A.

    2010-12-01

    Tropospheric propagation is a serious source of error in the analysis of space geodetic observations at radio wavelengths such as VLBI, GNSS, and DORIS. In recent years direct ray-tracing methods based on numerical weather models have been developed by different researchers in order to determine the true trajectory of a specific ray and its path delay in the troposphere. To evaluate and compare the results from different ray-tracing programs a benchmarking campaign was carried out under the umbrella of the International Association of Geodesy (IAG) Working Group 4.3.3 in the first half of 2010 with five institutions participating: German Research Centre for Geosciences (GFZ), Groupe de Recherche de Geodesie Spatiale (GRGS), National Institute of Information and Communication Technology (NICT), University of New Brunswick (UNB), and Institute of Geodesy and Geophysics (IGG). High-resolution ECMWF operational analysis pressure level data at the stations Tsukuba (Japan) and Wettzell (Germany) have been provided to the participants of the benchmarking campaign. The data consist of geopotential differences with respect to mean sea level, temperature, and specific humidity. Additionally, information about the geoid undulations was also provided and the participants were asked to compute the ray-traced total delays for various elevations (above 5 degrees) and azimuths. In general, we find good agreement with standard deviations below 1 cm between the ray-traced delays from the different solutions at 5 degrees elevation. Some small discrepancies are due to differences in the algorithm and the interpolation approaches. This benchmarking is very useful for the ray-tracers because it allows the validation of the results. Thus, these data sets and delays will be made available for the public, so that they can serve as reference for future ray-tracers.
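
    The ray-traced total delay being compared across the five solutions is conventionally the along-path integral of refractivity plus the geometric excess of the bent ray over the vacuum distance; a standard formulation (not specific to any of the participating ray-tracers) is:

```latex
\Delta L \;=\; 10^{-6}\!\int_{\mathrm{ray}} N(s)\,\mathrm{d}s \;+\; \bigl(S - G\bigr),
\qquad N = 10^{6}\,(n - 1),
```

    where n is the atmospheric refractive index, N the refractivity, S the length of the bent ray path and G the straight-line (vacuum) distance between the receiver and the point where the ray leaves the neutral atmosphere.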

  16. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  17. Benchmarking in de Europese Unie

    NARCIS (Netherlands)

    Groenendijk, Nico

    2004-01-01

    This article describes the application of the so-called open method of coordination in the European Union, and the role that benchmarking and best practices play in it. It discusses the background to the creation of the method, and some of its applications.

  18. Benchmarks for industrial energy efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Amarnath, K.R. [Electric Power Research Inst., Palo Alto, CA (United States); Kumana, J.D. [Linnhoff March, Inc., Houston, TX (United States); Shah, J.V. [Electric Power Research Inst., Pittsburgh, PA (United States). Chemicals and Petroleum Center

    1996-12-31

    What are the standards for improving energy efficiency in industries such as petroleum refining, chemicals, and glass manufacture? How can industries in emerging and developing markets accelerate the pace of improvement? This paper discusses several case studies and experiences relating to this subject, emphasizing the use of energy efficiency benchmarks. Two important benchmarks are discussed. The first is based on the track record of outstanding performers in the related industry segment; the second is based on site-specific factors. Using energy use reduction targets or benchmarks, projects have been implemented in Mexico, Poland, India, Venezuela, Brazil, China, Thailand, Malaysia, the Republic of South Africa and Russia. Improvements identified through these projects include a variety of recommendations: the use of oxy-fuel and electric furnaces in the glass industry in Poland; reconfiguration of process heat recovery systems for refineries in China, Malaysia, and Russia; recycling and reuse of process wastewater in the Republic of South Africa; and a cogeneration plant in Venezuela. The paper discusses three case studies of efforts undertaken in emerging market countries to improve energy efficiency.

  19. Benchmarking forensic mental health organizations.

    Science.gov (United States)

    Coombs, Tim; Taylor, Monica; Pirkis, Jane

    2011-04-01

    This paper describes the forensic mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). These forums encouraged participating organizations to compare their performance on a range of key performance indicators (KPIs) with that of their peers. Four forensic mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against previously agreed KPIs. They also undertook three special projects which explored some of the factors that might explain inter-organizational variation in performance. The inter-organizational range for many of the indicators was substantial. Observing this led participants to conduct the special projects to explore three factors which might help explain the variability - seclusion practices, delivery of community mental health services, and provision of court liaison services. The process of conducting the special projects gave participants insights into the practices and structures employed by their counterparts, and provided them with some important lessons for quality improvement. The forensic mental health benchmarking forums have demonstrated that benchmarking is feasible and likely to be useful in improving service performance and quality.

  20. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  1. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    Science.gov (United States)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.

  2. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine is nonlinear. We use an effective wind speed estimator to estimate the effective wind speed and then, using interval analysis and the monotonicity of the aerodynamic torque with respect to the effective wind speed, we can apply the method to the nonlinear system. The fault detection algorithm checks the consistency of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.
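
    The sketch below is a hedged, generic illustration of the consistency test described in this record, not the authors' implementation: a hypothetical one-dimensional model with assumed noise bounds is propagated with interval arithmetic, and a fault is flagged whenever the new measurement falls outside the predicted set.

        # Hypothetical interval-based consistency check for fault detection.
        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float

            def contains(self, x: float) -> bool:
                return self.lo <= x <= self.hi

        def predict(state: Interval, a: float, w: float) -> Interval:
            """Propagate x_{k+1} = a*x_k + w (|w| bounded, a > 0) in interval arithmetic."""
            return Interval(a * state.lo - w, a * state.hi + w)

        a, process_bound, meas_bound = 0.95, 0.2, 0.1   # assumed model and noise bounds
        state = Interval(0.0, 1.0)                      # initial state estimate

        for k, y in enumerate([0.8, 0.7, 0.65, 2.4]):   # last measurement is faulty
            state = predict(state, a, process_bound)
            if Interval(state.lo - meas_bound, state.hi + meas_bound).contains(y):
                # Consistent: intersect with the measurement to tighten the estimate.
                state = Interval(max(state.lo, y - meas_bound),
                                 min(state.hi, y + meas_bound))
            else:
                print(f"fault detected at sample {k}")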

  3. Benchmarking short sequence mapping tools.

    Science.gov (United States)

    Hatem, Ayat; Bozdağ, Doruk; Toland, Amanda E; Çatalyürek, Ümit V

    2013-06-07

    The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the currently proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify their needs in order to choose the tool that provides the best results.

  4. Development and validation of the computer program TNHXY; Desarrollo y validacion del programa TNHXY

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, V.; Valle G, E. del [Instituto Politecnico Nacional, Escuela Superior de Fisica y Matematicas, Dep. de Ingenieria Nuclear, Av. IPN s/n, Col. Lindavista, 07738 Mexico D.F. (Mexico); Alonso V, G. [ININ, 52045 Estado de Mexico (Mexico)

    2003-07-01

    This work describes the development and validation of the computer program TNHXY (Neutron Transport with Nodal Hybrid schemes in X Y geometry), which solves the discrete-ordinates neutron transport equations using a discontinuous Bi-Linear (DBiL) nodal hybrid method. One of the immediate applications of TNHXY is in the analysis of nuclear fuel assemblies, in particular those of BWRs. Its validation was carried out by reproducing results for test or benchmark problems that other authors have solved using other numerical techniques. This ensures that the program will provide results with similar accuracy for other problems of the same type. To accomplish this, two benchmark problems were solved. The first problem consists of a BWR fuel assembly in a 7x7 array, without and with a control rod. The results obtained with TNHXY are consistent with those reported for the TWOTRAN code. The second benchmark problem is a Mixed Oxide (MOX) fuel assembly in a 10x10 array. This last problem is known as the WPPR benchmark problem of the NEA Data Bank, and the results are compared with those obtained with commercial codes such as HELIOS, MCNP-4B and CPM-3. (Author)

  5. DFT computations on: Crystal structure, vibrational studies and optical investigations of a luminescent self-assembled material.

    Science.gov (United States)

    Kessentini, A; Ben Ahmed, A; Dammak, T; Belhouchet, M

    2018-02-15

    The current work concerns the growth and physicochemical properties of a novel green-yellow luminescent semi-organic material, 3-picolylammonium bromide (Pico-Br). In this paper, we report X-ray diffraction measurements which show that the crystal lattice consists of distinct 3-picolylammonium cations and free bromide anions connected via N-H⋯Br and N-H⋯N hydrogen bonds, forming a two-dimensional framework. Comparison of the molecular geometry with its optimized counterpart shows that the quantum chemical calculations carried out with the density functional theory (DFT) method reproduce well the structure of the studied material determined by X-ray diffraction. To provide further insight into the spectroscopic properties, additional characterization of this material has been performed with Raman and infrared spectroscopy at room temperature. Theoretical calculations were performed with the DFT method at the B3LYP/LanL2DZ level of theory, as implemented in the Gaussian 03 program, to study the vibrational spectra of the investigated molecule in the ground state. The UV-visible absorption spectrum reveals a sharp optical gap at 280 nm (4.42 eV), and a strong green photoluminescence emission at 550 nm (2.25 eV) is detected in the photoluminescence (PL) spectrum at room temperature. Using the TD-DFT method, the HOMO-LUMO energy gap and the Mulliken atomic charges were calculated in order to gain further insight into the material. Good agreement between the theoretical and experimental results was found. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. DFT computations on: Crystal structure, vibrational studies and optical investigations of a luminescent self-assembled material

    Science.gov (United States)

    Kessentini, A.; Ben Ahmed, A.; Dammak, T.; Belhouchet, M.

    2018-02-01

    The current work concerns the growth and physicochemical properties of a novel green-yellow luminescent semi-organic material, 3-picolylammonium bromide (Pico-Br). In this paper, we report X-ray diffraction measurements which show that the crystal lattice consists of distinct 3-picolylammonium cations and free bromide anions connected via N-H⋯Br and N-H⋯N hydrogen bonds, forming a two-dimensional framework. Comparison of the molecular geometry with its optimized counterpart shows that the quantum chemical calculations carried out with the density functional theory (DFT) method reproduce well the structure of the studied material determined by X-ray diffraction. To provide further insight into the spectroscopic properties, additional characterization of this material has been performed with Raman and infrared spectroscopy at room temperature. Theoretical calculations were performed with the DFT method at the B3LYP/LanL2DZ level of theory, as implemented in the Gaussian 03 program, to study the vibrational spectra of the investigated molecule in the ground state. The UV-visible absorption spectrum reveals a sharp optical gap at 280 nm (4.42 eV), and a strong green photoluminescence emission at 550 nm (2.25 eV) is detected in the photoluminescence (PL) spectrum at room temperature. Using the TD-DFT method, the HOMO-LUMO energy gap and the Mulliken atomic charges were calculated in order to gain further insight into the material. Good agreement between the theoretical and experimental results was found.

  7. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity - vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than that of a matrix-matrix multiplication. In fact, the FMM can reduce the required byte/flop ratio to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code "exaFMM" on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
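
    To make the byte/flop reasoning in this record concrete, the sketch below applies a simple roofline-style check: an algorithm whose operational intensity exceeds a machine's balance point (peak flop rate divided by memory bandwidth) is compute bound. The machine and kernel numbers are illustrative assumptions, not measurements from the study.

        # Roofline-style check with assumed machine and kernel numbers.
        def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flop_per_byte):
            return min(peak_gflops, bandwidth_gbs * intensity_flop_per_byte)

        machines = {                       # peak GF/s, memory bandwidth GB/s (assumed)
            "hypothetical accelerator": (1000.0, 200.0),
            "hypothetical CPU socket": (500.0, 100.0),
        }
        kernels = {                        # operational intensity in flop/byte (assumed)
            "stencil-like kernel": 0.5,
            "FMM-like kernel": 100.0,
        }
        for machine, (gf, bw) in machines.items():
            balance = gf / bw              # flop/byte needed to become compute bound
            for kernel, oi in kernels.items():
                bound = "compute" if oi > balance else "memory"
                print(f"{machine}: {kernel} is {bound}-bound, "
                      f"~{attainable_gflops(gf, bw, oi):.0f} GF/s attainable")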

  8. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently find themselves in a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on the key phases of the benchmarking process, which lie in the search for suitable benchmarking partners. The partners are then selected to meet general requirements that ensure the quality of the strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  9. Benchmarking set for domestic smart grid management

    OpenAIRE

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used as a guideline to create benchmark sets to compare domestic smart grid management methodologies. Secondly, an implementation of such a benchmark set is discussed in full detail, to give an example...

  10. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  11. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  12. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking... The findings show that benchmarking can enable learning in public settings, but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  13. Benchmarking Academic Standards in the UK.

    Science.gov (United States)

    Yorke, Mantz

    1999-01-01

    Reports on a pilot study of benchmarking of academic standards in the United Kingdom and offers empirical data that benchmarking in higher education needs to be approached differently than benchmarking in the industrial/commercial milieu. Argues that the complexity which underpins academic standards is inimical to generalizations applicable across…

  14. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  15. Modeling Viral Capsid Assembly

    Science.gov (United States)

    2014-01-01

    I present a review of the theoretical and computational methodologies that have been used to model the assembly of viral capsids. I discuss the capabilities and limitations of approaches ranging from equilibrium continuum theories to molecular dynamics simulations, and I give an overview of some of the important conclusions about virus assembly that have resulted from these modeling efforts. Topics include the assembly of empty viral shells, assembly around single-stranded nucleic acids to form viral particles, and assembly around synthetic polymers or charged nanoparticles for nanotechnology or biomedical applications. I present some examples in which modeling efforts have promoted experimental breakthroughs, as well as directions in which the connection between modeling and experiment can be strengthened. PMID:25663722

  16. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  17. Benchmarking Tokamak edge modelling codes

    Science.gov (United States)

    Coster, D. P.; Bonnin, X.; Corrigan, G.; Kirnev, G. S.; Matthews, G.; Spence, J.; Contributors to the EFDA-JET work programme

    2005-03-01

    Tokamak edge modelling codes are in widespread use to interpret and understand existing experiments, and to make predictions for future machines. Little direct benchmarking has been done between the codes, and the users of the codes have tended to concentrate on different experimental machines. An important validation step is to compare the codes for identical scenarios. In this paper, two of the major edge codes, SOLPS (B2.5-Eirene) and EDGE2D-NIMBUS are benchmarked against each other. A set of boundary conditions, transport coefficients, etc. for a JET plasma were chosen, and the two codes were run on the same grid. Initially, large differences were seen in the resulting plasmas. These differences were traced to differing physics assumptions with respect to the parallel heat flux limits. Once these were switched off in SOLPS, or implemented and switched on in EDGE2D-NIMBUS, the remaining differences were small.

  18. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  19. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  20. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  1. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
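
    As a hedged illustration of the kind of metric the report describes (the field names and thresholds below are assumptions, not the report's method), a multiunit operator could normalize each store's annual utility data into an energy use intensity and flag outliers against the portfolio median:

        # Hypothetical per-store utility data; EUI = annual energy / floor area.
        from statistics import median

        stores = [
            {"name": "Store A", "kwh_per_year": 410_000, "floor_area_sqft": 2_500},
            {"name": "Store B", "kwh_per_year": 520_000, "floor_area_sqft": 2_600},
            {"name": "Store C", "kwh_per_year": 300_000, "floor_area_sqft": 2_400},
        ]
        for s in stores:
            s["eui"] = s["kwh_per_year"] / s["floor_area_sqft"]

        benchmark = median(s["eui"] for s in stores)     # portfolio benchmark metric
        for s in stores:
            flag = "review" if s["eui"] > 1.2 * benchmark else "ok"
            print(f'{s["name"]}: EUI {s["eui"]:.0f} kWh/ft2-yr ({flag})')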

  2. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A& M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous-energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity, modeled using continuous-energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease of modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level; however, the blocks which construct the core remain strongly heterogeneous. A six-group multigroup (discrete energy) cross-section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross-section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross-section data, so that method developers may use these problems as a standard reference point.

  3. Benchmarking adult mental health organizations.

    Science.gov (United States)

    Coombs, Tim; Geyer, Tania; Pirkis, Jane

    2011-06-01

    This paper describes the adult mental health forums that were conducted as part of the National Mental Health Benchmarking Project (NMHBP). Eight adult mental health forums were attended by staff from eight adult mental health services from around the country. The forums provided an avenue for these participants to document their organizations' performances against previously agreed key performance indicators (KPIs), and to compare this performance with that of their peers. The forums also encouraged discussion about appropriate targets. Forum participants found that the inter-organizational range for many of the KPIs was substantial, and they used this to inform practice change within their own organizations. They also found that they could set "alert targets" and "good practice targets" for some KPIs but not others. The discussion that ensued informed participants' understanding of factors that were within the control of their organizations that could be modified to improve service quality. Benchmarking in adult mental health services is not only possible but also likely to be extremely worthwhile as an exercise in improving service quality. For benchmarking to realize its potential, it requires strong national and local leadership, and a spirit of openness on the part of participating organizations.

  4. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  5. Thermal Analysis of a TREAT Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Papadias, Dionissios [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, Arthur E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-07-09

    The objective of this study was to explore options for reducing peak cladding temperatures, even at the cost of an increase in peak fuel temperatures. A 3D thermal-hydraulic model for a single TREAT fuel assembly was benchmarked to reproduce results obtained with previous thermal models developed for a TREAT HEU fuel assembly. In exercising this model, and variants thereof depending on the scope of analysis, various options were explored to reduce the peak cladding temperatures.

  6. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers are set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes. They promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle-type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  9. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
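
    The sketch below illustrates the digital normalization idea in a simplified form; it is not the authors' implementation (which relies on probabilistic k-mer counting), and the k-mer size, coverage cutoff, and toy reads are assumptions.

        # Simplified digital normalization: drop a read when the median count of
        # its k-mers already exceeds a coverage cutoff. Real pipelines use
        # probabilistic counting; the tiny cutoff and reads here are toys.
        from collections import defaultdict
        from statistics import median

        K, CUTOFF = 20, 2                      # in practice the cutoff is closer to 20
        kmer_counts = defaultdict(int)

        def kmers(read):
            return [read[i:i + K] for i in range(len(read) - K + 1)]

        def keep_read(read):
            counts = [kmer_counts[km] for km in kmers(read)]
            if not counts or median(counts) >= CUTOFF:
                return False                   # read adds little new coverage
            for km in kmers(read):
                kmer_counts[km] += 1           # only retained reads update counts
            return True

        reads = ["ACGT" * 10] * 3 + ["TTGCAATGCA" * 4]
        retained = [r for r in reads if keep_read(r)]
        print(f"kept {len(retained)} of {len(reads)} reads")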

  10. A computational module assembled from different protease family motifs identifies PI PLC from Bacillus cereus as a putative prolyl peptidase with a serine protease scaffold.

    Directory of Open Access Journals (Sweden)

    Adela Rendón-Ramírez

    Full Text Available Proteolytic enzymes have evolved several mechanisms to cleave peptide bonds. These distinct types have been systematically categorized in the MEROPS database. While a BLAST search on these proteases identifies homologous proteins, sequence alignment methods often fail to identify relationships arising from convergent evolution, exon shuffling, and modular reuse of catalytic units. We have previously established a computational method to detect functions in proteins based on the spatial and electrostatic properties of the catalytic residues (CLASP). CLASP identified a promiscuous serine protease scaffold in alkaline phosphatases (AP) and a scaffold recognizing a β-lactam (imipenem) in a cold-active Vibrio AP. Subsequently, we defined a methodology to quantify promiscuous activities in a wide range of proteins. Here, we assemble a module which encapsulates the multifarious motifs used by protease families listed in the MEROPS database. Since APs and proteases are an integral component of outer membrane vesicles (OMV), we sought to query other OMV proteins, like phospholipase C (PLC), using this search module. Our analysis indicated that phosphoinositide-specific PLC from Bacillus cereus is a serine protease. This was validated by protease assays, mass spectrometry and by inhibition of the native phospholipase activity of PI-PLC by the well-known serine protease inhibitor AEBSF (IC50 = 0.018 mM). Edman degradation analysis linked the specificity of the protease activity to a proline in the amino terminal, suggesting that the PI-PLC is a prolyl peptidase. Thus, we propose a computational method of extending protein families based on the spatial and electrostatic congruence of active site residues.

  11. A computational module assembled from different protease family motifs identifies PI PLC from Bacillus cereus as a putative prolyl peptidase with a serine protease scaffold.

    Science.gov (United States)

    Rendón-Ramírez, Adela; Shukla, Manish; Oda, Masataka; Chakraborty, Sandeep; Minda, Renu; Dandekar, Abhaya M; Ásgeirsson, Bjarni; Goñi, Félix M; Rao, Basuthkar J

    2013-01-01

    Proteolytic enzymes have evolved several mechanisms to cleave peptide bonds. These distinct types have been systematically categorized in the MEROPS database. While a BLAST search on these proteases identifies homologous proteins, sequence alignment methods often fail to identify relationships arising from convergent evolution, exon shuffling, and modular reuse of catalytic units. We have previously established a computational method to detect functions in proteins based on the spatial and electrostatic properties of the catalytic residues (CLASP). CLASP identified a promiscuous serine protease scaffold in alkaline phosphatases (AP) and a scaffold recognizing a β-lactam (imipenem) in a cold-active Vibrio AP. Subsequently, we defined a methodology to quantify promiscuous activities in a wide range of proteins. Here, we assemble a module which encapsulates the multifarious motifs used by protease families listed in the MEROPS database. Since APs and proteases are an integral component of outer membrane vesicles (OMV), we sought to query other OMV proteins, like phospholipase C (PLC), using this search module. Our analysis indicated that phosphoinositide-specific PLC from Bacillus cereus is a serine protease. This was validated by protease assays, mass spectrometry and by inhibition of the native phospholipase activity of PI-PLC by the well-known serine protease inhibitor AEBSF (IC50 = 0.018 mM). Edman degradation analysis linked the specificity of the protease activity to a proline in the amino terminal, suggesting that the PI-PLC is a prolyl peptidase. Thus, we propose a computational method of extending protein families based on the spatial and electrostatic congruence of active site residues.

  12. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation ... assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data such as object positions and time stamps. We demonstrate the application of the benchmark to evaluate our latest developments in pose estimation, stereo reconstruction and action recognition, and publish the benchmark data for objective comparison of sensor setups and algorithms in industry.

  13. Benchmarks of Global Clean Energy Manufacturing: Summary of Findings

    Energy Technology Data Exchange (ETDEWEB)

    2017-01-01

    The Benchmarks of Global Clean Energy Manufacturing will help policymakers and industry gain a deeper understanding of global manufacturing of clean energy technologies. Increased knowledge of the product supply chains can inform decisions related to manufacturing facilities for extracting and processing raw materials, making the array of required subcomponents, and assembling and shipping the final product. This brochure summarizes key findings from the analysis and includes important figures from the report. The report was prepared by the Clean Energy Manufacturing Analysis Center (CEMAC) analysts at the U.S. Department of Energy's National Renewable Energy Laboratory.

  14. The BRaliBase dent-a tale of benchmark design and interpretation.

    Science.gov (United States)

    Löwes, Benedikt; Chauve, Cedric; Ponty, Yann; Giegerich, Robert

    2017-03-01

    BRaliBase is a widely used benchmark for assessing the accuracy of RNA secondary structure alignment methods. In most case studies based on the BRaliBase benchmark, one can observe a puzzling drop in accuracy in the 40-60% sequence identity range, the so-called 'BRaliBase Dent'. In this article, we show this dent is owing to a bias in the composition of the BRaliBase benchmark, namely the inclusion of a disproportionate number of transfer RNAs, which exhibit a conserved secondary structure. Our analysis, aside from its interest regarding the specific case of the BRaliBase benchmark, also raises important questions regarding the design and use of benchmarks in computational biology. © The Author 2016. Published by Oxford University Press.

  15. Robust Extraction of Tomographic Information via Randomized Benchmarking

    Directory of Open Access Journals (Sweden)

    Shelby Kimmel

    2014-03-01

    Full Text Available We describe how randomized benchmarking can be used to reconstruct the unital part of any trace-preserving quantum map, which in turn is sufficient for the full characterization of any unitary evolution or, more generally, any unital trace-preserving evolution. This approach inherits randomized benchmarking’s robustness to preparation, measurement, and gate imperfections, thereby avoiding systematic errors caused by these imperfections. We also extend these techniques to efficiently estimate the average fidelity of a quantum map to unitary maps outside of the Clifford group. The unitaries we consider correspond to large circuits commonly used as building blocks to achieve scalable, universal, and fault-tolerant quantum computation. Hence, we can efficiently verify all such subcomponents of a circuit-based universal quantum computer. In addition, we rigorously bound the time and sampling complexities of randomized benchmarking procedures, proving that the required nonlinear estimation problem can be solved efficiently.
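
    As a hedged, generic illustration of the standard randomized benchmarking primitive on which such protocols build (not the authors' extension to non-Clifford gates), the sketch below fits the usual exponential decay of survival probability versus sequence length to synthetic data and converts the decay parameter into an average error per gate.

        # Fit F(m) = A * p**m + B to synthetic survival data; the average error per
        # gate for a d-level system is r = (1 - p) * (d - 1) / d.
        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, p, B):
            return A * p**m + B

        lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
        rng = np.random.default_rng(0)
        survival = rb_decay(lengths, 0.5, 0.99, 0.5) + rng.normal(0, 0.005, lengths.size)

        (A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.95, 0.5])
        d = 2                                   # single qubit
        print(f"fitted p = {p:.4f}, average error per gate ~ {(1 - p) * (d - 1) / d:.2e}")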

  16. Quantifying the hydrophobic effect. 1. A computer simulation-molecular-thermodynamic model for the self-assembly of hydrophobic and amphiphilic solutes in aqueous solution.

    Science.gov (United States)

    Stephenson, Brian C; Goldsipe, Arthur; Beers, Kenneth J; Blankschtein, Daniel

    2007-02-08

    Surfactant micellization and micellar solubilization in aqueous solution can be modeled using a molecular-thermodynamic (MT) theoretical approach; however, the implementation of MT theory requires an accurate identification of the portions of solutes (surfactants and solubilizates) that are hydrated and unhydrated in the micellar state. For simple solutes, such identification is comparatively straightforward using simple rules of thumb or group-contribution methods, but for more complex solutes, the hydration states in the micellar environment are unclear. Recently, a hybrid method was reported by these authors in which hydrated and unhydrated states are identified by atomistic simulation, with the resulting information being used to make MT predictions of micellization and micellar solubilization behavior. Although this hybrid method improves the accuracy of the MT approach for complex solutes with a minimum of computational expense, the limitation remains that individual atoms are modeled as being in only one of two states (head or tail), whereas in reality, there is a continuous spectrum of hydration states between these two limits. In the case of hydrophobic or amphiphilic solutes possessing more complex chemical structures, a new modeling approach is needed to (i) obtain quantitative information about changes in hydration that occur upon aggregate formation, (ii) quantify the hydrophobic driving force for self-assembly, and (iii) make predictions of micellization and micellar solubilization behavior. This article is the first in a series of articles introducing a new computer simulation-molecular thermodynamic (CS-MT) model that accomplishes objectives (i)-(iii) and enables prediction of micellization and micellar solubilization behaviors, which are infeasible to model directly using atomistic simulation. In this article (article 1 of the series), the CS-MT model is introduced and implemented to model simple oil aggregates of various shapes and sizes, and its

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  18. HyspIRI Low Latency Concept and Benchmarks

    Science.gov (United States)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  19. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available ... program from ev3dev-c (https://github.com/in4lio/ev3dev-c). This allows the EV3 to listen for UDP commands that tell it to set motor values and read sensor values. Communication with a PC was over a USB link (although the system also...

  20. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  1. Benchmarking the state comparison amplifier

    Science.gov (United States)

    Kleczkowska, Klaudia; Puthoor, Ittoop Vergheese; Bain, Lauren; Andersson, Erika

    2017-10-01

    The state comparison amplifier is a recently proposed probabilistic quantum amplifier, intended especially for amplifying coherent states. Its realization is simple and uses only linear optics and photodetectors, and the preparation of a "guess" state, typically a coherent state. Fidelity and success probability can be high compared with other probabilistic amplifiers. State comparison amplification does, however, extract information about the amplified state, which means that it is especially important to benchmark it against a simple measure-and-resend procedure. We compare state comparison quantum amplifiers to measure-and-resend strategies, and identify parameter regimes and scenarios where these can and where they cannot provide an advantage.

  2. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of this type of model for different
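
    As a hedged example of the simplest kind of subproblem mentioned in this record (inert one-dimensional solute transport), the sketch below integrates an advection-dispersion equation with an explicit upwind finite-difference scheme; the grid, velocity, and dispersion coefficient are illustrative assumptions, not values from any SeSBench benchmark.

        # Explicit upwind finite-difference solution of dC/dt = D d2C/dx2 - v dC/dx
        # on a 1 m column with a constant-concentration inlet; values are illustrative.
        import numpy as np

        L, nx = 1.0, 101
        v, D = 1e-5, 1e-7                        # pore velocity (m/s), dispersion (m2/s)
        dx = L / (nx - 1)
        dt = 0.4 * min(dx / v, dx**2 / (2 * D))  # stable explicit time step

        c = np.zeros(nx)
        c[0] = 1.0                               # inlet boundary condition

        for _ in range(300):
            adv = -v * (c[1:-1] - c[:-2]) / dx                     # upwind advection
            disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2      # dispersion
            c[1:-1] += dt * (adv + disp)
            c[0], c[-1] = 1.0, c[-2]             # Dirichlet inlet, free-outflow outlet
        print("concentration at every 10th node:", np.round(c[::10], 3))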

  3. Uracil Excision for Assembly of Complex Pathways

    DEFF Research Database (Denmark)

    Cavaleiro, Mafalda; Nielsen, Morten Thrane; Kim, Se Hyeuk

    2015-01-01

    Despite decreasing prices on synthetic DNA constructs, higher-order assembly of PCR-generated DNA continues to be an important exercise in molecular and synthetic biology. Simplicity and robustness are attractive features met by the uracil excision DNA assembly method, which is one of the most...... inexpensive technologies available. Here, we describe four different protocols for uracil excision-based DNA editing: one for simple manipulations such as site-directed mutagenesis, one for plasmid-based multigene assembly in Escherichia coli, one for one-step assembly and integration of single or multiple...... genes into the genome, and a standardized assembly pipeline using benchmarked oligonucleotides for pathway assembly and multigene expression optimization....

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  7. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: It executes 20 times faster than the Gilbreaths' widely-published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
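    For readers unfamiliar with the algorithm being timed: the widely published Gilbreath FORTH benchmark counts "primes" by striking out multiples in an 8190-element flag array. The following is only a minimal Python rendering of that flag-based sieve, given as an illustration; the report's own versions are in FORTH and are not reproduced here.

      # Flag-based sieve as used in the classic benchmark; illustrative Python only.
      SIZE = 8190                      # flag-array size of the widely published benchmark

      def sieve_pass(size=SIZE):
          """One pass of the sieve; returns the number of primes found."""
          flags = [True] * (size + 1)
          count = 0
          for i in range(size + 1):
              if flags[i]:
                  prime = i + i + 3    # flag i stands for the odd number 2*i + 3
                  for k in range(i + prime, size + 1, prime):
                      flags[k] = False # strike out multiples of this prime
                  count += 1
          return count

      if __name__ == "__main__":
          import time
          start = time.perf_counter()
          for _ in range(10):          # benchmark runs typically repeat the pass
              primes = sieve_pass()
          print(primes, "primes;", time.perf_counter() - start, "s for 10 passes")

    The FORTH-specific optimizations discussed in the report (such as recoding the inner loop) have no direct Python analogue; the point of the sketch is only the algorithm whose execution speed is being compared.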

  8. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bock, M.; Stuke, M.; Behler, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany)

    2013-07-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, these correlations can arise when different cases of a benchmark experiment share the same components, such as fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k{sub eff}). The analysis presented here is based on this
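    To make the notion of correlated benchmark cases concrete, the following is an illustrative sketch only (it is not the GRS methodology or data): two cases of a benchmark experiment share the same fuel pellets, so the pellet-diameter tolerance is sampled once per Monte Carlo trial and propagated to both cases, which induces a correlation between the resulting k-eff values. All numbers and the linear response model below are assumptions made for the illustration.

      # Illustrative only: a shared manufacturing tolerance sampled once per trial
      # induces a correlation between the k_eff values of two benchmark cases.
      import random, statistics

      def keff_case(diameter_mm, case_noise, sensitivity):
          """Toy linear k_eff response to the shared pellet diameter plus case-specific noise."""
          return 1.000 + sensitivity * (diameter_mm - 8.0) + case_noise

      random.seed(1)
      k1, k2 = [], []
      for _ in range(10000):
          d = random.gauss(8.0, 0.01)                   # shared pellet diameter, sampled once
          k1.append(keff_case(d, random.gauss(0, 5e-4), 0.05))
          k2.append(keff_case(d, random.gauss(0, 5e-4), 0.04))

      print("correlation:", round(statistics.correlation(k1, k2), 2))  # requires Python 3.10+

    Treating the two cases as independent would misstate the information they jointly provide for estimating the computational bias, which is the effect the UACSA benchmark proposal sets out to quantify.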

  9. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes are benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculation effort and presents new findings on the thermal hydraulics of research reactors that have not been reported before. • LOFA results: discrepancies from 7% to 20% for coolant and peak clad temperatures, which are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, to validate the applied codes and to apply the validated codes to perform comprehensive safety analysis of RRs. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and a loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  10. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    Science.gov (United States)

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum. The purpose of this project is to provide guidance for the Agency and the outside community on the application of the benchmark dose (BMD) appr
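    As a rough illustration of the BMD idea (not the models, data, or software recommended in the guidance): the BMD is the dose at which a fitted dose-response model reaches a predefined benchmark response, for example 10% extra risk over background. The sketch below fits a simple log-logistic model to hypothetical incidence data and solves for that dose; the guidance additionally covers model selection, the BMDL confidence limit, and other issues not shown here.

      # Hypothetical data and a simple log-logistic model; illustration of the BMD
      # concept only, not the EPA-recommended procedure or software.
      import numpy as np
      from scipy.optimize import curve_fit, brentq

      dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])           # hypothetical doses
      incidence = np.array([0.02, 0.05, 0.10, 0.30, 0.60])    # hypothetical response fractions

      def loglogistic(d, background, slope, ed50):
          d = np.maximum(d, 1e-12)
          extra = 1.0 / (1.0 + (ed50 / d) ** slope)           # extra risk over background
          return background + (1.0 - background) * extra

      params, _ = curve_fit(loglogistic, dose, incidence, p0=[0.02, 1.0, 10.0], maxfev=10000)
      background = params[0]
      bmr = 0.10                                              # benchmark response: 10% extra risk
      target = background + (1.0 - background) * bmr
      bmd = brentq(lambda d: loglogistic(d, *params) - target, 1e-6, dose.max())
      print("hypothetical BMD at 10% extra risk:", round(float(bmd), 2))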

  11. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  12. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process. Copyright © 2011. Published by Elsevier GmbH.

  13. THE ISPRS BENCHMARK ON INDOOR MODELLING

    Directory of Open Access Journals (Sweden)

    K. Khoshelham

    2017-09-01

    Full Text Available Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  14. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the auditing area. The final section of the chapter summarises...... the considerations on benchmarking in connection with both areas....

  15. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data). In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  17. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  18. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    2008-01-01

    The article focuses on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project......, before getting started. The difference between results benchmarking and process benchmarking is discussed, after which the use of internal and external benchmarking, respectively, is considered. Finally, the use of benchmarking in budgeting and budget follow-up is introduced....

  19. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  20. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  1. Report on the parallelization of the MLfit benchmark using OpenMP and MPI

    CERN Document Server

    Lazzaro, Alfio; Leduc, Julien; Nowak, Andrzej; Valsan, Liviu

    2012-01-01

    This report describes the development of an MPI parallelization support on top of the existing OpenMP parallel version of the MLfit benchmark for a hybrid evaluation on multicore and distributed computational hosts. The MLfit benchmark is used at CERN openlab as a representative of data analysis applications used in the high energy physics community. The report includes the results of scalability runs obtained with several configurations and systems.
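    As a rough, hedged illustration of the data-parallel idea behind such a hybrid parallelization (the MLfit benchmark itself is not reproduced here): each MPI rank evaluates the negative log-likelihood over its own slice of the events and the partial sums are combined with an allreduce, while threads (OpenMP in the real benchmark) parallelise the loop inside each rank. The model, data and names below are made up, and mpi4py stands in for the C++/MPI layer.

      # Sketch of the MPI part of a hybrid likelihood evaluation using mpi4py;
      # the intra-rank OpenMP threading of the real benchmark is not shown.
      import math
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Hypothetical event sample, generated identically on every rank for simplicity.
      events = [(0.1 * i) % 5.0 + 0.01 for i in range(100000)]
      local_events = events[rank::size]                 # round-robin split across ranks

      def negative_log_likelihood(tau):
          """NLL of an exponential decay model with mean lifetime tau."""
          local = sum(x / tau + math.log(tau) for x in local_events)
          return comm.allreduce(local, op=MPI.SUM)      # combine partial sums from all ranks

      nll = negative_log_likelihood(1.5)                # every rank takes part in the reduction
      if rank == 0:
          print("NLL(tau=1.5) =", nll)

    A minimiser would repeatedly request such evaluations with new trial parameters to perform the fit; a run might look like "mpirun -n 4 python sketch.py" (invocation shown only as an example).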

  2. Benchmark testing the flow and solidification modeling of Al castings

    Science.gov (United States)

    Sirrell, B.; Holliday, M.; Campbell, J.

    1996-03-01

    Although the heat flow aspects of the simulation of castings now appear to be tolerably well advanced, a recent exercise has revealed that computed predictions can, in fact, be widely different from experimentally observed values. The modeling of flow, where turbulence is properly taken into account, appears to be good in its macroscopic ability. However, better resolution and the possible general incorporation of surface tension will be required to simulate the damaging effect of air entrainment common in most metal castings. It is envisaged that the results of this exercise will constitute a useful benchmark test for computer models of flow and solidification for the foreseeable future.

  3. On Constraints in Assembly Planning

    Energy Technology Data Exchange (ETDEWEB)

    Calton, T.L.; Jones, R.E.; Wilson, R.H.

    1998-12-17

    Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
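    To make the accept/reject idea concrete, here is a minimal, hypothetical sketch (the class and constraint names are illustrative and are not the actual planner's API): each constraint is a procedure over a proposed part-mating operation, and the planner keeps only operations that every constraint accepts.

      # Constraints as accept/reject procedures over proposed assembly operations
      # (hypothetical names; not the actual system's interface).
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Operation:
          parts_mated: frozenset         # parts joined by this operation
          already_assembled: frozenset   # subassembly present before the operation

      def base_before_cover(op):
          """Example user constraint: the base plate must precede any cover mating."""
          return "base_plate" in op.already_assembled if "cover" in op.parts_mated else True

      def at_most_two_parts(op):
          """Example user constraint: mate at most two parts per operation."""
          return len(op.parts_mated) <= 2

      CONSTRAINTS = [base_before_cover, at_most_two_parts]

      def acceptable(op):
          """The planner keeps a proposed operation only if every constraint accepts it."""
          return all(constraint(op) for constraint in CONSTRAINTS)

      candidate = Operation(frozenset({"cover"}), frozenset({"base_plate", "housing"}))
      print(acceptable(candidate))       # True: both example constraints accept this operation

    Expressing constraints as independent predicates keeps them decoupled from the planner's search strategy, which is consistent with the article's point that some constraints additionally receive special-purpose support for efficiency.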

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  6. Updates to the Integrated Protein-Protein Interaction Benchmarks: Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna|info:eu-repo/dai/nl/370549694; Pierce, Brian G.; Kastritis, Panagiotis L.|info:eu-repo/dai/nl/315886668; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J|info:eu-repo/dai/nl/113691238; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were

  7. Benchmarking of flux-driven full-F gyrokinetic simulations

    Science.gov (United States)

    Asahi, Yuuichi; Grandgirard, Virginie; Idomura, Yasuhiro; Garbet, Xavier; Latu, Guillaume; Sarazin, Yanick; Dif-Pradalier, Guilhem; Donnel, Peter; Ehrlacher, Charles

    2017-10-01

    Two full-F global gyrokinetic codes are benchmarked to compute flux-driven ion temperature gradient (ITG) turbulence in tokamak plasmas. For this purpose, the Semi-Lagrangian code GYrokinetic SEmi-LAgrangian and the Eulerian code GT5D are employed, which solve the full-F gyrokinetic equation with a realistic fixed-flux condition. The equilibrium poloidal flow profile formation processes are benchmarked and compared against local neoclassical theory. These simulations are carried out without turbulence and agree well with each other and with the theoretical estimates. Here, particular attention has been paid to the boundary conditions, which have a strong impact on the global shape of the radial electric field. The behaviors of micro-instabilities are benchmarked for linear and nonlinear cases without a heat source, where we found good agreement in the linear growth rates and the nonlinear critical gradient level. In the nonlinear case, initial conditions are chosen to be identical since they dominate the transient turbulence behavior. Using the appropriate settings for the boundary and initial conditions obtained in the benchmarks above, a flux-driven ITG turbulence simulation is carried out. The avalanche-like transport is assessed with a focus on its spatio-temporal properties. A statistical analysis is performed to discuss these self-organized criticality (SOC)-like behaviors, where we found a 1/f spectrum and a transition to a 1/f^3 spectrum on the high-frequency side in both codes. Based on these benchmarks, it is verified that the SOC-like behavior is robust and does not depend on numerics.

  8. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered with Headquarters providing facilitatory and integrative functions on an ``as needed`` basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work - must take ownership of the studies. This avoids the ``check the box`` mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This ``introduction to benchmarking`` is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  9. Two h-index benchmarks for evaluating the publication performance of medical informatics researchers.

    Science.gov (United States)

    El Emam, Khaled; Arbuckle, Luk; Jonker, Elizabeth; Anderson, Kevin

    2012-10-18

    The h-index is a commonly used metric for evaluating the publication performance of researchers. However, in a multidisciplinary field such as medical informatics, interpreting the h-index is a challenge because researchers tend to have diverse home disciplines, ranging from clinical areas to computer science, basic science, and the social sciences, each with different publication performance profiles. The objective was to construct a reference standard for interpreting the h-index of medical informatics researchers based on the performance of their peers. Using a sample of authors with articles published over the 5-year period 2006-2011 in the 2 top journals in medical informatics (as determined by impact factor), we computed their h-index using the Scopus database. Percentiles were computed to create a 6-level benchmark, similar in scheme to one used by the US National Science Foundation, and a 10-level benchmark. The 2 benchmarks can be used to place medical informatics researchers in an ordered category based on the performance of their peers. A validation exercise mapped the benchmark levels to the ranks of medical informatics academic faculty in the United States. The 10-level benchmark tracked academic rank better (with no ties) and is therefore more suitable for practical use. Our 10-level benchmark provides an objective basis to evaluate and compare the publication performance of medical informatics researchers with that of their peers using the h-index.
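    For readers unfamiliar with the metric itself: an author's h-index is the largest h such that h of their papers have at least h citations each, and the benchmarks in this study are percentile-based bands of such values. The sketch below shows the h-index computation and a toy percentile banding; the peer data and cut-offs are made up and are not the study's values.

      # h-index from a list of citation counts, plus a toy percentile banding.
      def h_index(citations):
          """Largest h such that h papers have at least h citations each."""
          h = 0
          for rank, c in enumerate(sorted(citations, reverse=True), start=1):
              if c >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([10, 8, 5, 4, 3]))          # -> 4

      # Hypothetical peer h-index values; the study's benchmark levels are not reproduced.
      peer_h = sorted([1, 2, 2, 3, 4, 5, 6, 8, 11, 15])

      def benchmark_level(h, peers=peer_h, n_levels=6):
          """Map an h-index to one of n_levels bands based on the peer distribution."""
          fraction_at_or_below = sum(1 for p in peers if p <= h) / len(peers)
          return min(int(fraction_at_or_below * n_levels) + 1, n_levels)

      print(benchmark_level(7))                 # band of a researcher with h = 7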

  10. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  11. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    Full Text Available The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.

  12. First benchmark of the Unstructured Grid Adaptation Working Group

    Science.gov (United States)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfies a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.

  13. Performance benchmarking of liver CT image segmentation and volume estimation

    Science.gov (United States)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary tools for performance benchmarking of liver segmentation and volume estimation algorithms used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of liver, two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
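    The abstract does not list the performance measures themselves; overlap and volume statistics such as the Dice coefficient and the relative volume error are common choices for this kind of segmentation and volume benchmarking, and are used below purely as an assumed illustration.

      # Common segmentation/volume measures (assumed illustration, not necessarily
      # the platform's protocol): Dice overlap and relative volume error.
      import numpy as np

      def dice(seg, ref):
          """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
          seg, ref = seg.astype(bool), ref.astype(bool)
          denom = seg.sum() + ref.sum()
          return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

      def relative_volume_error(seg, ref):
          """Signed volume difference of the segmentation relative to the reference."""
          return (int(seg.sum()) - int(ref.sum())) / int(ref.sum())

      # Two hypothetical 3D masks: a reference liver region and a slightly smaller segmentation.
      ref = np.zeros((64, 64, 64), dtype=bool); ref[16:48, 16:48, 16:48] = True
      seg = np.zeros_like(ref);                 seg[18:48, 16:48, 16:48] = True
      print(round(dice(seg, ref), 3), round(relative_volume_error(seg, ref), 3))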

  14. Solution of the stationary state of the PWR MOX/UO-2 core transient benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Seubert, A.; Langenbuch, S.; Zwermann, W. [Gesellschaft fuer Anlagen- und Reaktorsicherheit GRS mbH, Forschungsinstitute, D-85748 Garching (Germany)

    2006-07-01

    The multi-group Discrete Ordinates transport code DORT is applied to solve the stationary state of the OECD/NEA PWR MOX/UO-2 Core Transient Benchmark. Pin cell homogenised cross sections in 16 energy groups and P{sub 1} scattering order have been obtained by fuel assembly burn-up calculations using HELIOS. In this paper, we report on the details of our calculations for this benchmark problem and show our results to be in good agreement with an MCNP Monte Carlo solution with nuclear point data and a multi-group DeCART Method of Characteristics solution. (authors)

  15. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tape.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  3. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price
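    For orientation, two textbook examples of such benchmark cases (standard results, not taken from the paper itself): a quasilinear utility gives one good zero income elasticity, while a Cobb-Douglas utility gives unit income elasticity, unit own-price elasticity and zero cross-price elasticity.

      U(x_1, x_2) = v(x_1) + x_2
          (quasilinear: at an interior optimum, v'(x_1) = p_1 / p_2, so the demand for
           good 1 does not depend on income, i.e. zero income elasticity)

      U(x_1, x_2) = x_1^{\alpha} x_2^{1-\alpha},  0 < \alpha < 1
          (Cobb-Douglas: x_1^{*} = \alpha m / p_1, so unit income elasticity,
           own-price elasticity of -1 and zero cross-price elasticity)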

  4. International Benchmarking of Vocational Education and Training

    Science.gov (United States)

    Wyatt, Tim

    2004-01-01

    This report critically examines two approaches to benchmarking vocational education and training (VET) - benchmarking through performance indicators and comparative case studies. The author finds both approaches provide useful information, although the case study approach enables a more thorough analysis of particular issues and can take greater…

  5. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. The aim was to apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  6. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  7. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  8. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  9. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce data sets for generating benchmark data sets.

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...

  11. Protein Structure Determination by Assembling Super-Secondary Structure Motifs Using Pseudocontact Shifts.

    Science.gov (United States)

    Pilla, Kala Bharath; Otting, Gottfried; Huber, Thomas

    2017-03-07

    Computational and nuclear magnetic resonance hybrid approaches provide efficient tools for 3D structure determination of small proteins, but currently available algorithms struggle to perform with larger proteins. Here we demonstrate a new computational algorithm that assembles the 3D structure of a protein from its constituent super-secondary structural motifs (Smotifs) with the help of pseudocontact shift (PCS) restraints for backbone amide protons, where the PCSs are produced from different metal centers. The algorithm, DINGO-PCS (3D assembly of Individual Smotifs to Near-native Geometry as Orchestrated by PCSs), employs the PCSs to recognize, orient, and assemble the constituent Smotifs of the target protein without any other experimental data or computational force fields. Using a universal Smotif database, the DINGO-PCS algorithm exhaustively enumerates any given Smotif. We benchmarked the program against ten different protein targets ranging from 100 to 220 residues with different topologies. For nine of these targets, the method was able to identify near-native Smotifs. Copyright © 2017 Elsevier Ltd. All rights reserved.
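    The abstract does not reproduce the pseudocontact shift expression; for background, the standard form commonly used for such restraints (with the metal centre at the origin of the susceptibility tensor frame) is

      \Delta\delta^{PCS} = \frac{1}{12\pi r^{3}} \left[ \Delta\chi_{ax} (3\cos^{2}\theta - 1)
                           + \tfrac{3}{2} \Delta\chi_{rh} \sin^{2}\theta \cos 2\varphi \right]

    where r, \theta and \varphi are the spherical coordinates of the nuclear spin in that frame and \Delta\chi_{ax}, \Delta\chi_{rh} are the axial and rhombic anisotropies of the magnetic susceptibility tensor. Restraints of this form, measured from several metal centres, are what allow the algorithm to recognize and orient the Smotifs.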

  12. Phylogenetic Comparative Assembly

    Science.gov (United States)

    Husemann, Peter; Stoye, Jens

    Recent high throughput sequencing technologies are capable of generating a huge amount of data for bacterial genome sequencing projects. Although current sequence assemblers successfully merge the overlapping reads, often several contigs remain which cannot be assembled any further. It is still costly and time consuming to close all the gaps in order to acquire the whole genomic sequence. Here we propose an algorithm that takes several related genomes and their phylogenetic relationships into account to create a contig adjacency graph. From this a layout graph can be computed which indicates putative adjacencies of the contigs in order to aid biologists in finishing the complete genomic sequence.
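    The abstract only sketches the approach; as a hedged illustration of the central data structure, the contig adjacency graph can be thought of as counting how often two contigs map next to each other on related reference genomes (the published method additionally weights references by their phylogenetic distance, which this toy version omits).

      # Toy contig adjacency graph: count adjacent placements of contigs on related
      # reference genomes (phylogenetic weighting of the references is omitted).
      from collections import defaultdict

      # Hypothetical contig orders obtained by mapping the contigs onto three references.
      contig_orders = [
          ["c1", "c3", "c2", "c4"],
          ["c1", "c3", "c4", "c2"],
          ["c1", "c2", "c3", "c4"],
      ]

      adjacency_support = defaultdict(int)
      for order in contig_orders:
          for left, right in zip(order, order[1:]):
              adjacency_support[(left, right)] += 1

      # Strongest putative adjacencies first, as hints for closing the remaining gaps.
      for (left, right), support in sorted(adjacency_support.items(), key=lambda kv: -kv[1]):
          print(f"{left} -> {right}: supported by {support} reference(s)")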

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing were more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users; we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  15. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  16. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and of regular computing shifts, which monitor the services and infrastructure and interface to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  18. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  19. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  20. NUCLEAR DATA UNCERTAINTY AND SENSITIVITY ANALYSIS WITH XSUSA FOR FUEL ASSEMBLY DEPLETION CALCULATIONS

    National Research Council Canada - National Science Library

    Zwermann, W; Aures, A; Gallner, L; Hannstein, V; Krzykacz-Hausmann, B; Velkov, K; Martinez, J.S

    2014-01-01

    Uncertainty and sensitivity analyses with respect to nuclear data are performed with depletion calculations for BWR and PWR fuel assemblies specified in the framework of the UAM-LWR Benchmark Phase II...

  1. Simultaneous assembly of multiple test forms

    NARCIS (Netherlands)

    van der Linden, Willem J.; Adema, J.J.; Adema, Jos J.

    1998-01-01

    An algorithm for the assembly of multiple test forms is proposed in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. At each step, one form is assembled to its true specifications; the other form is a dummy assembled only to maintain a

  2. Simultaneous assembly of multiple test forms

    NARCIS (Netherlands)

    van der Linden, Willem J.; Adema, Jos J.

    1997-01-01

    An algorithm for the assembly of multiple test forms is proposed in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. At each step one form is assembled to its true specifications; the other form is a dummy assembled only to maintain a

  3. Benchmarking for Major Producers of Limestone in the Czech Republic

    National Research Council Canada - National Science Library

    Vanék, Michal; Mikoláš, Milan; Bora, Petr

    2013-01-01

    .... Benchmarking is a method which can yield quality information. The importance of benchmarking is strengthened by the fact that many authors consider benchmarking to be an integral part of strategic management...

  4. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.
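
    The toy sketch below illustrates the knockout idea behind the TKO score using a deterministic susceptible-infected spread over a sequence of edge snapshots; the paper's stochastic SIR/SIS dynamics and averaging over realizations are deliberately omitted, and all names and the spread rule are illustrative.

    # Toy temporal-knockout (TKO) sketch: deterministic SI spread over time-stamped edges.
    def spread(snapshots, seed, removed=None):
        """Return the final number infected; `removed` is an optional (node, time) knockout."""
        infected = {seed}
        for t, edges in enumerate(snapshots):
            newly = set()
            for u, v in edges:
                if removed in ((u, t), (v, t)):
                    continue  # the knocked-out node cannot transmit or receive at time t
                if u in infected:
                    newly.add(v)
                if v in infected:
                    newly.add(u)
            infected |= newly
        return len(infected)

    def tko_scores(snapshots, seed):
        """Marginal reduction in spread from removing each node at each time step."""
        base = spread(snapshots, seed)
        nodes = {n for edges in snapshots for e in edges for n in e}
        return {(n, t): base - spread(snapshots, seed, removed=(n, t))
                for t in range(len(snapshots)) for n in nodes}

    if __name__ == "__main__":
        snaps = [[("a", "b")], [("b", "c")], [("c", "d")]]
        print(tko_scores(snaps, seed="a"))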

  5. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

    This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions. © The Author(s) 2015.
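
    For reference, the h-index used above as the scholarly impact measure is the largest h such that an author has at least h papers with at least h citations each; a minimal computation (illustrative only) is:

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 4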

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and its components are now also deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  7. Benchmark solutions for transport in $d$-dimensional Markov binary mixtures

    CERN Document Server

    Larmier, Coline; Malvagi, Fausto; Mazzolo, Alain; Zoia, Andrea

    2016-01-01

    Linear particle transport in stochastic media is key to such relevant applications as neutron diffusion in randomly mixed immiscible materials, light propagation through engineered optical materials, and inertial confinement fusion, only to name a few. We extend the pioneering work by Adams, Larsen and Pomraning (recently revisited by Brantley) by considering a series of benchmark configurations for mono-energetic and isotropic transport through Markov binary mixtures in dimension d. The stochastic media are generated by resorting to Poisson random tessellations in 1d slab, 2d extruded, and full 3d geometry. For each realization, particle transport is performed by Monte Carlo simulation. The distributions of the transmission and reflection coefficients on the free surfaces of the geometry are subsequently estimated, and the average values over the ensemble of realizations are computed. Reference solutions for the benchmark have never be...

  8. Characterizing a benchmark scenario for heavy Higgs boson searches in the Georgi-Machacek model

    Science.gov (United States)

    Logan, Heather E.; Reimer, Mark B.

    2017-11-01

    The Georgi-Machacek model is used to motivate and interpret LHC searches for doubly- and singly-charged Higgs bosons decaying into vector boson pairs. In this paper we study the constraints on and phenomenology of the "H5plane" benchmark scenario in the Georgi-Machacek model, which has been proposed for use in these searches. We show that the entire H5plane benchmark is compatible with the LHC measurements of the 125 GeV Higgs boson couplings. We also point out that, over much of the H5plane benchmark, the line shapes of the two CP-even neutral heavy Higgs bosons H and H5^0 will overlap and interfere when produced in vector boson fusion with decays to W+W- or ZZ. Finally we compute the decay branching ratios of the additional heavy Higgs bosons within the H5plane benchmark to facilitate the development of search strategies for these additional particles.

  9. Levermore-Pomraning Model Results for an Interior Source Binary Stochastic Medium Benchmark Problem

    Energy Technology Data Exchange (ETDEWEB)

    Brantley, P S; Palmer, T S

    2009-02-24

    The accuracy of the Levermore-Pomraning model for particle transport through a binary stochastic medium is investigated using an interior source benchmark problem. As in previous comparisons of the model for incident angular flux benchmark problems, the model accurately computes the leakage and the scalar flux distributions for optically thin slabs. The model is less accurate for more optically thick slabs but has a maximum relative error in the leakage of approximately 10% for the problems examined. The maximum root-mean-squared relative errors for the total and material scalar flux distributions approach 65% for the more optically thick slabs. Consistent with previous benchmark comparisons, the results of these interior source benchmark comparisons demonstrate that the Levermore-Pomraning model produces qualitatively correct and semi-quantitatively correct results for both leakage values and scalar flux distributions.

  10. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome information in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  11. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report for AOARD Grant FA2386-12-1-4025, "Benchmarking attosecond physics with atomic hydrogen", covering the period 12 March 2012 to 11 March 2015.

  12. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador

    2016-01-01

    ...recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  13. Exploiting sparseness in de novo genome assembly.

    Science.gov (United States)

    Ye, Chengxi; Ma, Zhanshan Sam; Cannon, Charles H; Pop, Mihai; Yu, Douglas W

    2012-04-19

    The very large memory requirements for the construction of assembly graphs for de novo genome assembly limit current algorithms to super-computing environments. In this paper, we demonstrate that constructing a sparse assembly graph which stores only a small fraction of the observed k-mers as nodes and the links between these nodes allows the de novo assembly of even moderately-sized genomes (~500 M) on a typical laptop computer. We implement this sparse graph concept in a proof-of-principle software package, SparseAssembler, utilizing a new sparse k-mer graph structure evolved from the de Bruijn graph. We test our SparseAssembler with both simulated and real data, achieving ~90% memory savings and retaining high assembly accuracy, without sacrificing speed in comparison to existing de novo assemblers.
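
    A minimal sketch of the sparse k-mer idea, not the SparseAssembler implementation itself: only every g-th k-mer along a read is stored as a node, and consecutively stored k-mers are linked, so memory use drops roughly by the sparseness factor g. Parameter names and input format are illustrative.

    # Sketch of a sparse k-mer graph: store every g-th k-mer, link consecutive stored k-mers.
    from collections import defaultdict

    def sparse_kmer_graph(reads, k=5, g=3):
        """Nodes: every g-th k-mer of each read. Edges: consecutive stored k-mers."""
        edges = defaultdict(set)
        for read in reads:
            kmers = [read[i:i + k] for i in range(0, len(read) - k + 1, g)]
            for a, b in zip(kmers, kmers[1:]):
                edges[a].add(b)
        return edges

    if __name__ == "__main__":
        graph = sparse_kmer_graph(["ACGTACGTGGACGT", "CGTGGACGTTT"], k=5, g=3)
        for node, successors in graph.items():
            print(node, "->", sorted(successors))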

  14. Benchmark Study of Density Cumulant Functional Theory: Thermochemistry and Kinetics.

    Science.gov (United States)

    Copan, Andreas V; Sokolov, Alexander Yu; Schaefer, Henry F

    2014-06-10

    We present an extensive benchmark study of density cumulant functional theory (DCFT) for thermochemistry and kinetics of closed- and open-shell molecules. The performance of DCFT methods (DC-06, DC-12, ODC-06, and ODC-12) is compared to that of coupled-electron pair methods (CEPA0 and OCEPA0) and coupled-cluster theory (CCSD and CCSD(T)) for the description of noncovalent interactions (A24 database), barrier heights of hydrogen-transfer reactions (HTBH38), radical stabilization energies (RSE30), adiabatic ionization energies (AIE), and covalent bond stretching in diatomic molecules. Our results indicate that out of four DCFT methods the ODC-12 method is the most reliable and accurate DCFT formulation to date. Compared to CCSD, ODC-12 shows superior results for all benchmark tests employed in our study. With respect to coupled-pair theories, ODC-12 outperforms CEPA0 and shows similar accuracy to the orbital-optimized CEPA0 variant (OCEPA0) for systems at equilibrium geometries. For covalent bond stretching, ODC-12 is found to be more reliable than OCEPA0. For the RSE30 and AIE data sets, ODC-12 shows competitive performance with CCSD(T). In addition to benchmark results, we report new reference values for the RSE30 data set computed using coupled cluster theory with up to perturbative quadruple excitations.

  15. Analytical Radiation Transport Benchmarks for The Next Century

    Energy Technology Data Exchange (ETDEWEB)

    B.D. Ganapol

    2005-01-19

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function: in slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is 'mined' to bring out hidden high quality solutions. For this case multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  17. QUAST: quality assessment tool for genome assemblies.

    Science.gov (United States)

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-04-15

    Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST, a quality assessment tool for evaluating and comparing genome assemblies. This tool improves on leading assembly comparison software with new ideas and quality metrics. QUAST can evaluate assemblies both with and without a reference genome. QUAST produces many reports, summary tables and plots to help scientists in their research and in their publications. In this study, we used QUAST to compare several genome assemblers on three datasets. QUAST tables and plots for all of them are available in the Supplementary Material, and interactive versions of these reports are on the QUAST website. http://bioinf.spbau.ru/quast. Supplementary data are available at Bioinformatics online.
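
    As an example of the kind of metric such a tool reports, N50 (the contig length at which contigs of that length or longer cover at least half of the total assembly) can be computed as follows; this is a generic illustration rather than QUAST code.

    def n50(contig_lengths):
        """Smallest length L such that contigs of length >= L cover half the assembly."""
        lengths = sorted(contig_lengths, reverse=True)
        half = sum(lengths) / 2
        running = 0
        for length in lengths:
            running += length
            if running >= half:
                return length
        return 0

    print(n50([100, 80, 60, 40, 20]))  # total 300, half 150; 100 + 80 >= 150, so N50 = 80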

  18. Benchmarking inappropriate empirical antibiotic treatment.

    Science.gov (United States)

    Kariv, G; Paul, M; Shani, V; Muchtar, E; Leibovici, L

    2013-07-01

    Inappropriate empirical antibiotic treatment for severe infections is associated with increased mortality. Superfluous treatment is associated with resistance induction. We aimed to define acceptable rates of inappropriate empirical antibiotic treatment. We included all prospective cohort studies published between 1975 and 2009 reporting the proportion of appropriate and inappropriate empirical antibiotic treatment of microbiologically documented infections. Studies were identified in PubMed and in reference lists of included studies. Funnel plots were drawn using the proportion of inappropriate empirical treatment as the effect size. A pooled estimate of inappropriate empirical antibiotic treatment was calculated using a β-binomial model. Control limits were calculated with the overdispersion factor technique and 20% winsorized data. Heterogeneity was assessed through subgroup analysis for categorical moderators and meta-regression for continuous variables. Eighty-seven studies, comprising 92 study groups, with 27 628 patients met inclusion criteria. The pooled rate of inappropriate empirical antibiotic treatment was 28.6% (95% CI 25.4-31.8). Funnel plot analysis yielded a dispersed graph with only 37 (40%) studies falling within the control limits. Using the overdispersion factor technique with 20% winsorizing, 79 (86%) studies fell within the control limits. None of the clinical or methodological factors could explain the large heterogeneity observed. The funnel plot presented can be used to benchmark rates of inappropriate empirical antibiotic treatment. Based on the control limits found, at least 500 patients should be evaluated before establishing a local rate. Lower and higher than expected rates might indicate overly aggressive treatment or poor performance, respectively. © 2012 The Authors. Clinical Microbiology and Infection © 2012 European Society of Clinical Microbiology and Infectious Diseases.
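
    A much-simplified illustration of funnel-plot control limits for proportions is sketched below; it uses plain binomial limits around the pooled rate and omits the beta-binomial overdispersion correction and winsorizing applied in the study.

    # Simplified funnel-plot limits for a proportion: pooled rate +/- z * sqrt(p(1-p)/n).
    # The beta-binomial overdispersion handling used in the study is omitted here.
    from math import sqrt

    def funnel_limits(pooled_rate, sample_sizes, z=1.96):
        limits = {}
        for n in sample_sizes:
            se = sqrt(pooled_rate * (1 - pooled_rate) / n)
            limits[n] = (max(0.0, pooled_rate - z * se), min(1.0, pooled_rate + z * se))
        return limits

    # Pooled rate of 28.6% as reported in the abstract, for three illustrative study sizes.
    for n, (low, high) in funnel_limits(0.286, [50, 200, 800]).items():
        print(f"n={n}: {low:.3f} - {high:.3f}")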

  19. Planeación asistida por computadora del proceso tecnológico de ensamble. //Computer-aided gliding of the assembles technological process.

    Directory of Open Access Journals (Sweden)

    L. L. Tomás García

    2008-01-01

    This work is devoted to the multi-criteria optimization of mechanical assembly process planning starting from the three-dimensional geometric model of the assembly. It rests on an approach that integrates both geometric information and technological constraints of the assembly process. The work demonstrates that, once the three-dimensional geometric model of an assembly is known, applying technological and geometric criteria to the inverse disassembly process and then treating it with evolutionary methods generates mechanical assembly plans close to the optimum according to the decision maker's system of preferences. Integrating this information reduces the number of sequences to evaluate and of elements to process, avoiding the generation and evaluation of all possible sequences and thereby reducing processing time. As a result of applying the proposed integrated model, a mechanical assembly process plan is obtained with reduced assembly time, because the resulting sequences reduce the number of changes of assembly direction, of tools, and of workstations, and minimize the distance travelled due to workstation changes. This is achieved through a multi-objective optimization model based on genetic algorithms. Keywords: mechanical assembly, genetic algorithms, multi-objective optimization. Abstract: This work deals with the combinatorial problem of generating and optimizing technologically feasible assembly sequences and process planning involving tools and work places. The assembly sequences and related technological decisions are obtained from a 3D model of the assembled parts based on mating conditions along with a set of technological criteria

  20. X-Ray computed tomography as supporting technology in the failure analysis of press-in connections for electronic assemblies; Roentgen-Computertomographie als Hilfsmittel zur Schadensanalyse an Einpressverbindungen fuer elektronische Baugruppen

    Energy Technology Data Exchange (ETDEWEB)

    Rauer, Miriam; Schreck, Timo; Kaloudis, Michael [Hochschule Aschaffenburg (Germany). Labor fuer Aufbau- und Verbindungstechnik

    2013-03-01

    Besides its original field of application - the characterization of microstructures - the metallography is often referred to when analyzing failures of electronic components and assemblies. In this context, the cause of the failure is analyzed by means of a metallographic section. At this point, the precondition and difficulty is the right choice of the section plane. If it is not optimally selected, there is a risk of overlooking the defect and therefore of not recognizing important correlations. The X-ray computed tomography is an efficient method which helps making the right choice and facilitates a targeted metallographic preparation. Conspicuous features inside the damaged component can be detected in advance by means of a three-dimensional volume model established by computed tomography. Based on this information, an appropriate cutting plane can be chosen in the volume model and a targeted metallographic section can be prepared. In the following, it is possible to view the conspicuous spot with high resolution and to analyze its microstructure in selected areas. In order to demonstrate the use of computed tomography as a supporting technology in the target preparation, the following article establishes a link between both testing technologies and their application to the press-fit technology which serves as joining technology when assembling electronic components. (orig.)

  1. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3] and an analysis of absorber rod worth measurements for Cores 9 and 10 has been performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  2. The FLUKA Code: Description And Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, Giuseppe; Muraro, S.; Sala, Paola R.; /INFN, Milan; Cerutti, Fabio; Ferrari, A.; Roesler, Stefan; /CERN; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2007-09-18

    The physics models implemented inside the FLUKA code are briefly described, with emphasis on hadronic interactions. Examples of the capabilities of the code are presented including basic (thin target) and complex benchmarks.

  3. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  4. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  5. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  6. Benchmarking by State Higher Education Boards.

    Science.gov (United States)

    Barak, Robert J.; Kniker, Charles R.

    2002-01-01

    Describes how state higher education governing boards can use benchmarking to provide direction for colleges and universities. Provides an in-depth example and indicators used by selected state higher education boards. (EV)

  7. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic co...

  8. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  9. Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software.

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C

    2017-11-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

  10. Integrated Approach to Construction of Benchmarking Network in DEA-Based Stepwise Benchmark Target Selection

    Directory of Open Access Journals (Sweden)

    Jaehun Park

    2016-06-01

    Stepwise benchmark target selection in data envelopment analysis (DEA) is a realistic and effective method by which inefficient decision-making units (DMUs) can choose benchmarks in a stepwise manner. We propose, for the construction of a benchmarking network (i.e., a network structure consisting of an alternative sequence of benchmark targets), an approach that integrates the cross-efficiency DEA, K-means clustering and context-dependent DEA methods to minimize resource improvement pattern inconsistency in the selection of the intermediate benchmark targets (IBTs) of an inefficient DMU. The specific advantages and overall effectiveness of the proposed method were demonstrated by application to a case study of 34 actual container terminal ports and the successful determination of the stepwise benchmarking path of an inefficient DMU.

  11. NERSC-6 Workload Analysis and Benchmark Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Antypas, Katie; Shalf, John; Wasserman, Harvey

    2008-08-29

    This report describes efforts carried out during early 2008 to determine some of the science drivers for the "NERSC-6" next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and descriptions of the key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.

  12. Benchmarking comparison and validation of MCNP photon interaction data

    Science.gov (United States)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  13. Reference Solutions for Benchmark Turbulent Flows in Three Dimensions

    Science.gov (United States)

    Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.

    2016-01-01

    A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration and a supersonic internal flow through a square duct. Reference solutions are computed for Reynolds Averaged Navier Stokes equations with the Spalart-Allmaras turbulence model using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely-used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
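
    For context, grid-convergence studies of this kind commonly report an observed order of accuracy from solutions on three consistently refined grids; the generic Richardson-type estimate below is illustrative and not necessarily the exact procedure used for the TMR reference solutions, and the sample values are made up.

    # Observed order of accuracy from three grids with a constant refinement ratio r.
    from math import log

    def observed_order(f_coarse, f_medium, f_fine, r=2.0):
        """p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)."""
        return log((f_coarse - f_medium) / (f_medium - f_fine)) / log(r)

    # Illustrative functional values (e.g., a drag coefficient) on coarse/medium/fine grids.
    print(observed_order(0.02840, 0.02810, 0.02802))  # ~1.9, i.e. close to second order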

  14. FDA Benchmark Medical Device Flow Models for CFD Validation.

    Science.gov (United States)

    Malinauskas, Richard A; Hariharan, Prasanna; Day, Steven W; Herbertson, Luke H; Buesen, Martin; Steinseifer, Ulrich; Aycock, Kenneth I; Good, Bryan C; Deutsch, Steven; Manning, Keefe B; Craven, Brent A

    Computational fluid dynamics (CFD) is increasingly being used to develop blood-contacting medical devices. However, the lack of standardized methods for validating CFD simulations and blood damage predictions limits its use in the safety evaluation of devices. Through a U.S. Food and Drug Administration (FDA) initiative, two benchmark models of typical device flow geometries (nozzle and centrifugal blood pump) were tested in multiple laboratories to provide experimental velocities, pressures, and hemolysis data to support CFD validation. In addition, computational simulations were performed by more than 20 independent groups to assess current CFD techniques. The primary goal of this article is to summarize the FDA initiative and to report recent findings from the benchmark blood pump model study. Discrepancies between CFD predicted velocities and those measured using particle image velocimetry most often occurred in regions of flow separation (e.g., downstream of the nozzle throat, and in the pump exit diffuser). For the six pump test conditions, 57% of the CFD predictions of pressure head were within one standard deviation of the mean measured values. Notably, only 37% of all CFD submissions contained hemolysis predictions. This project aided in the development of an FDA Guidance Document on factors to consider when reporting computational studies in medical device regulatory submissions. There is an accompanying podcast available for this article. Please visit the journal's Web site (www.asaiojournal.com) to listen.

  15. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady state simulation approaches such as steady state Reynolds-averaged Navier-Stokes (RANS) models [1]. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental

  16. National Energy Software Center: benchmark problem book. Revision

    Energy Technology Data Exchange (ETDEWEB)

    None

    1985-12-01

    Computational benchmarks are given for the following problems: (1) Finite-difference, diffusion theory calculation of a highly nonseparable reactor, (2) Iterative solutions for multigroup two-dimensional neutron diffusion HTGR problem, (3) Reference solution to the two-group diffusion equation, (4) One-dimensional neutron transport transient solutions, (5) To provide a test of the capabilities of multi-group multidimensional kinetics codes in a heavy water reactor, (6) Test of capabilities of multigroup neutron diffusion in LMFBR, and (7) Two-dimensional PWR models.

  17. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  18. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
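
    One very simple way to realize the scoring idea mentioned above is to normalize each data-model mismatch by the variability of the observations and average the resulting skills across variables; the sketch below is illustrative only and is not the scoring system proposed in the paper.

    # Illustrative benchmark score: mean of per-variable skills based on normalized RMSE.
    from math import sqrt

    def nrmse(model, obs):
        """Root-mean-square error normalized by the standard deviation of the observations."""
        n = len(obs)
        mean_obs = sum(obs) / n
        rmse = sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
        sd = sqrt(sum((o - mean_obs) ** 2 for o in obs) / n)
        return rmse / sd

    def benchmark_score(pairs):
        """pairs: {variable: (model_series, obs_series)}; per-variable skill = 1/(1+NRMSE), in (0, 1]."""
        skills = {v: 1.0 / (1.0 + nrmse(m, o)) for v, (m, o) in pairs.items()}
        return skills, sum(skills.values()) / len(skills)

    # Made-up model/observation series for two variables, for illustration only.
    data = {
        "gpp": ([2.1, 2.4, 2.0], [2.0, 2.5, 1.9]),
        "lh":  ([80.0, 95.0, 70.0], [85.0, 90.0, 75.0]),
    }
    print(benchmark_score(data))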

  19. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. This specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  20. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    The first benchmark focuses on the 1D transient diffusion of HNO3 (pH = 4) in a NaCl solution into a fixed concentration reservoir, also containing NaCl but with lower HNO3 concentrations (pH = 6). The second benchmark describes the 1D steady-state migration of the sodium isotope 22Na triggered by sodium...
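
    The record is truncated, but benchmarks of multicomponent diffusion with electrochemical migration are commonly formulated around the Nernst-Planck flux, in which migration in the electrostatic potential gradient adds to Fickian diffusion; a standard form (an assumption here, since the governing equations are not quoted in the record) is:

    \mathbf{J}_i = -D_i \nabla c_i - \frac{z_i F D_i c_i}{R T} \nabla \phi

    where J_i, D_i, c_i and z_i are the flux, diffusion coefficient, concentration and charge number of species i, phi is the electrostatic potential, and F, R and T are the Faraday constant, the gas constant and the temperature; the potential is typically closed by a nil-current or electroneutrality condition.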

  1. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article (Part 1 of a two-part series) we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  2. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  3. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking enters into the management practice of both private and public organisations in many ways. In management accounting, benchmark-based indicators (or key figures) are used, for example, when setting targets in performance contracts or to specify the desired level of certain key figures in a Balanced ... Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different uses of benchmarking in order to show the breadth of the concept and the importance of clarifying the purpose of a ... benchmarking project. The difference between results benchmarking and process benchmarking is then treated, followed by the use of internal versus external benchmarking, and the use of benchmarking in budgeting and budget follow-up....

  4. Planificación y optimización asistida por computadora de secuencias de ensamble mecánico // Computer aided Planning and optimization for mechanical assembly.

    Directory of Open Access Journals (Sweden)

    L. L. Tomás-García

    2009-01-01

    Full Text Available This work deals with the generation, planning and optimisation of mechanical assembly sequences starting from the three-dimensional geometric model of the assembly. It rests on an approach that integrates geometric information with technological constraints of the assembly process. The work shows that, once the three-dimensional geometric model of an assembly is known, applying technological and geometric criteria to the inverse disassembly process and then treating it with evolutionary algorithms yields an optimised plan for the mechanical assembly process. Integrating this information reduces the number of sequences to evaluate and the number of elements to process, avoiding the generation and evaluation of every possible sequence and thus reducing processing time. As a result of applying the proposed integrated model, an assembly process plan is obtained with a shorter assembly time, because the resulting sequences reduce the number of changes of assembly direction, tool changes and workstation changes, and minimise the distance travelled due to workstation changes. This is achieved with a multi-objective optimisation model based on evolutionary algorithms. Keywords: mechanical assembly, genetic algorithms, multi-objective optimisation. Abstract: This work deals with the combinatorial problem of generating and optimizing feasible assembly sequences and doing the process planning involving tools and work places. The assembly sequences are obtained from a 3D model of the assembled parts based on mating conditions along with a set of technological criteria, which allows automatically analyzing and generating the sequences. The generated

  5. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  6. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

  7. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  8. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  9. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  10. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  11. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  12. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method is the best performing method depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
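
    To make two of the metrics mentioned above concrete, the following sketch computes per-residue sensitivity and specificity for a membrane-helix prediction, assuming helices are given as inclusive residue ranges. This is illustrative only and is not the scoring code used by the benchmark server.

        def residues(segments):
            """Expand [(start, end), ...] inclusive ranges into a set of residue indices."""
            return {i for start, end in segments for i in range(start, end + 1)}

        def helix_metrics(observed, predicted, seq_length):
            obs, pred = residues(observed), residues(predicted)
            tp = len(obs & pred)                  # residues correctly called helical
            fp = len(pred - obs)                  # over-predicted residues
            fn = len(obs - pred)                  # missed helical residues
            tn = seq_length - tp - fp - fn
            sensitivity = tp / (tp + fn) if tp + fn else 0.0
            specificity = tn / (tn + fp) if tn + fp else 0.0
            return sensitivity, specificity

        # Toy example: one observed helix, one slightly shifted prediction.
        print(helix_metrics([(10, 30)], [(12, 33)], seq_length=100))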

  13. Benchmarking child and adolescent mental health organizations.

    Science.gov (United States)

    Brann, Peter; Walter, Garry; Coombs, Tim

    2011-04-01

    This paper describes aspects of the child and adolescent benchmarking forums that were part of the National Mental Health Benchmarking Project (NMHBP). These forums enabled participating child and adolescent mental health organizations to benchmark themselves against each other, with a view to understanding variability in performance against a range of key performance indicators (KPIs). Six child and adolescent mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant KPIs. They also undertook two special projects designed to help them understand the variation in performance on given KPIs. There was considerable inter-organization variability on many of the KPIs. Even within organizations, there was often substantial variability over time. The variability in indicator data raised many questions for participants. This challenged participants to better understand and describe their local processes, prompted them to collect additional data, and stimulated them to make organizational comparisons. These activities fed into a process of reflection about their performance. Benchmarking has the potential to illuminate intra- and inter-organizational performance in the child and adolescent context.

  14. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure is not considered in this report.
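
    The first-tier screening comparison described above reduces to a hazard quotient: measured media concentration divided by the benchmark. The sketch below illustrates this; all concentration and benchmark values are made up for the example and are not from the report.

        # Tier-1 screening sketch: hazard quotient (HQ) = concentration / benchmark;
        # HQ > 1 flags the chemical for the baseline assessment.
        measured_mg_per_kg  = {"cadmium": 4.2, "zinc": 110.0, "mercury": 0.05}
        benchmark_mg_per_kg = {"cadmium": 1.0, "zinc": 200.0, "mercury": 0.1}

        for chemical, conc in measured_mg_per_kg.items():
            hq = conc / benchmark_mg_per_kg[chemical]
            status = "retain for baseline assessment" if hq > 1.0 else "screen out"
            print(f"{chemical:>8s}: HQ = {hq:5.2f} -> {status}")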

  15. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    5th April, 2016 – Ordinary General Assembly of the Staff Association! In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA). This year the OGA will be held on Tuesday, April 5th 2016 from 11:00 to 12:00 in BE Auditorium, Meyrin (6-2-024). During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted for approval to the members. This is the occasion to get a global view on the activities of the SA, its financial management, and an opportunity to express one’s opinion, including taking part in the votes. Other points are listed on the agenda, as proposed by the Staff Council. Who can vote? Only “ordinary” members (MPE) of the SA can vote. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them. Who can give his/her opinion? The Ordinary General Asse...

  16. AutoAssemblyD: a graphical user interface system for several genome assemblers.

    Science.gov (United States)

    Veras, Adonney Allan de Oliveira; de Sá, Pablo Henrique Caracciolo Gomes; Azevedo, Vasco; Silva, Artur; Ramos, Rommel Thiago Jucá

    2013-01-01

    Next-generation sequencing technologies have increased the amount of biological data generated. Thus, bioinformatics has become important because new methods and algorithms are necessary to manipulate and process such data. However, certain challenges have emerged, such as genome assembly using short reads and high-throughput platforms. In this context, several algorithms have been developed, such as Velvet, Abyss, Euler-SR, Mira, Edna, Maq, SHRiMP, Newbler, ALLPATHS, Bowtie and BWA. However, most such assemblers do not have a graphical interface, which makes their use difficult for users without computing experience given the complexity of the assembler syntax. Thus, to make the operation of such assemblers accessible to users without a computing background, we developed AutoAssemblyD, which is a graphical tool for genome assembly submission and remote management by multiple assemblers through XML templates. AutoAssemblyD is freely available at https://sourceforge.net/projects/autoassemblyd. It requires Sun jdk 6 or higher.
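
    The core idea of such a front end is to fill a per-assembler command template so the user never types the assembler's own syntax. The sketch below shows that idea with made-up template strings and parameters; it is not AutoAssemblyD's actual XML template format or code, and the Velvet/ABySS command lines are only rough approximations.

        # Sketch: pick an assembler, fill its command template, and launch it.
        import subprocess

        TEMPLATES = {
            "velvet": "velveth {outdir} {kmer} -shortPaired -fastq {reads} "
                      "&& velvetg {outdir} -exp_cov auto",
            "abyss":  "abyss-pe k={kmer} name={outdir}/asm in='{reads}'",
        }

        def run_assembler(name, reads, outdir, kmer=31, dry_run=True):
            cmd = TEMPLATES[name].format(reads=reads, outdir=outdir, kmer=kmer)
            if dry_run:                   # show the command instead of running it
                print(cmd)
                return None
            return subprocess.run(cmd, shell=True, check=True)

        run_assembler("velvet", reads="sample.fastq", outdir="asm_out")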

  17. CFD Modeling of Thermal Manikin Heat Loss in a Comfort Evaluation Benchmark Test

    DEFF Research Database (Denmark)

    Nilsson, Håkan O.; Brohus, Henrik; Nielsen, Peter V.

    2007-01-01

    and companies still use several in-house codes for their calculations. The validation and association with human perception and heat losses in reality is consequently very difficult to make. This paper is providing requirements for the design and development of computer manikins and CFD benchmark tests...

  18. Benchmarking ontologies: bigger or better?

    Directory of Open Access Journals (Sweden)

    Lixia Yao

    2011-01-01

    Full Text Available A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
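
    In the spirit of relating an ontology to a corpus, the toy sketch below measures what fraction of corpus word types an ontology's labels cover and what fraction of labels ever appear. This is not the authors' metric family, only a minimal illustration with invented labels and text.

        import re
        from collections import Counter

        ontology_labels = {"fever", "cough", "headache", "influenza", "aspirin"}
        corpus = "Patient reports fever and persistent cough. Fever resolved after aspirin."

        tokens = re.findall(r"[a-z]+", corpus.lower())
        vocabulary = Counter(tokens)

        covered_types = {w for w in vocabulary if w in ontology_labels}
        breadth = len(covered_types) / len(vocabulary)        # corpus coverage
        usage = len(covered_types) / len(ontology_labels)     # ontology utilisation

        print(f"corpus coverage:      {breadth:.2f}")
        print(f"ontology utilisation: {usage:.2f}")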

  19. Benchmarking Ontologies: Bigger or Better?

    Science.gov (United States)

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them. PMID:21249231

  20. Chloroplast genome assembly approaches from NGS data

    Directory of Open Access Journals (Sweden)

    Zdravka Ivanova

    2016-12-01

    Full Text Available The advent of Next Generation Sequencing platforms led to an increase of research in whole genome assembly algorithms and software. The Illumina Genome Analyzer produces a large amount of sequencing data, with a shorter read length, higher coverage and different errors in comparison to Sanger sequencing. In response to this, several new assemblers were developed specifically for de novo assembly of next generation sequencing data. This study compares the assembly software packages Edena, SPAdes and ABySS, and analyzes the results delivered by de novo assembly experiments. We show that the assembly of a small genome can be completed in a short time on a 32-bit Linux OS with 4 GB RAM, indicating that de novo assembly can be executed and millions of very short reads assembled on a desktop computer.
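
    One statistic commonly reported when comparing assemblers such as Edena, SPAdes and ABySS is the N50 of the resulting contigs. The sketch below computes it; the contig lengths are made-up numbers, not results from the study.

        def n50(contig_lengths):
            """Smallest contig length such that contigs at least this long
            cover >= 50% of the total assembled bases."""
            lengths = sorted(contig_lengths, reverse=True)
            half = sum(lengths) / 2.0
            running = 0
            for length in lengths:
                running += length
                if running >= half:
                    return length

        assemblies = {
            "Edena":  [12000, 9500, 7000, 3000, 1200, 800],
            "SPAdes": [25000, 14000, 6000, 2000],
            "ABySS":  [18000, 11000, 9000, 4000, 1500],
        }
        for name, contigs in assemblies.items():
            print(f"{name:>7s}: {len(contigs)} contigs, N50 = {n50(contigs)} bp")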

  1. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [French Atomic Energy Commission (CEA), Saclay (France); Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Piot, Jerome [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Jacquet, Xavier [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  2. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [ORNL; Isbell, Kimberly McMahan [ORNL; Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Reynolds, Kevin H. [Y-12 National Security Complex

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  3. Benchmark test of accelerated multi-slice simulation by GPGPU.

    Science.gov (United States)

    Hosokawa, Fumio; Shinkawa, Takao; Arai, Yoshihiro; Sannomiya, Takumi

    2015-11-01

    A fast multi-slice image simulation by parallelized computation using a graphics processing unit (GPU) has been developed. The image simulation contains multiple sets of computing steps, such as Fourier transforms and pixel-to-pixel operations. The efficiency of the GPU varies depending on the type of calculation. In cases where the GPU is used effectively, the calculation runs hundreds of times faster than on a central processing unit (CPU). The benchmark test of the parallelized multi-slice code was performed, and results for representative calculations, such as TEM imaging, STEM imaging and CBD calculation, are reported. Some features of the simulation software are also introduced. Copyright © 2015 Elsevier B.V. All rights reserved.
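
    For orientation, a single multi-slice step consists of transmission through a projected potential followed by Fresnel propagation to the next slice, which is where the FFTs (and hence most of the GPU benefit) come from. The numpy sketch below shows one such step with placeholder constants and a fake potential; it is not the benchmarked software, and a GPU version would replace the numpy FFTs with GPU FFTs (e.g. cuFFT).

        import numpy as np

        n, px = 256, 0.05          # grid size, pixel size in nm (assumed values)
        wavelength = 2.51e-3       # nm, roughly 200 kV electrons
        dz, sigma = 2.0, 0.01      # slice thickness (nm), interaction constant (assumed)

        potential = np.random.default_rng(0).random((n, n))      # fake projected potential
        psi = np.ones((n, n), dtype=complex)                      # incident plane wave

        fx = np.fft.fftfreq(n, d=px)
        k2 = fx[:, None] ** 2 + fx[None, :] ** 2
        propagator = np.exp(-1j * np.pi * wavelength * dz * k2)   # Fresnel propagator

        psi *= np.exp(1j * sigma * potential)                     # transmission function
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)         # propagate one slice
        print("mean intensity after one slice:", np.abs(psi).mean())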

  4. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  5. Benchmark Dataset for Whole Genome Sequence Compression.

    Science.gov (United States)

    C L, Biji; S Nair, Achuthsankar

    2017-01-01

    The research in DNA data compression lacks a standard dataset to test out compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of such a scientifically compiled whole genome sequence dataset, and proposes a benchmark dataset using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only with a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.
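
    Scoring a compression tool against such a dataset typically means running it over each file and reporting compression ratio and wall-clock time. The sketch below uses gzip as a stand-in command and hypothetical file names; the DNA-specific tools compared in the paper would be substituted the same way.

        import os
        import subprocess
        import time

        def benchmark_tool(files, command="gzip -k -f {path}", suffix=".gz"):
            for path in files:
                start = time.perf_counter()
                subprocess.run(command.format(path=path), shell=True, check=True)
                elapsed = time.perf_counter() - start
                ratio = os.path.getsize(path) / os.path.getsize(path + suffix)
                print(f"{path}: ratio = {ratio:.2f}, time = {elapsed:.1f} s")

        benchmark_tool(["genome1.fasta", "genome2.fasta"])   # hypothetical file names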

  6. New Test Set for Video Quality Benchmarking

    Science.gov (United States)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

  7. Benchmarking with Spine Tango: potentials and pitfalls

    Science.gov (United States)

    Staub, L.; Dietrich, D.; Zweig, T.; Melloh, M.; Aebi, M.

    2009-01-01

    The newly released online statistics function of Spine Tango allows comparison of own data against the aggregated results of the data pool that all other participants generate. This comparison can be considered a very simple way of benchmarking, which means that the quality of what one organization does is compared with other similar organizations. The goal is to make changes towards better practice if benchmarking shows inferior results compared with the pool. There are, however, pitfalls in this simplified way of comparing data that can result in confounding. This means that important influential factors can make results appear better or worse than they are in reality and these factors can only be identified and neutralized in a multiple regression analysis performed by a statistical expert. Comparing input variables, confounding is less of a problem than comparing outcome variables. Therefore, the potentials and limitations of automated online comparisons need to be considered when interpreting the results of the benchmarking procedure. PMID:19337759

  8. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost- performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine- grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
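
    The characterisation of kernels by computational intensity and memory bandwidth pressure mentioned above is often summarised with a roofline-style bound: attainable performance is limited by the smaller of peak throughput and intensity times bandwidth. The sketch below illustrates that relation; the peak numbers and kernel intensities are invented, not measurements of WRF kernels.

        PEAK_GFLOPS = 100.0      # hypothetical processor peak
        BANDWIDTH_GBS = 25.0     # hypothetical sustained memory bandwidth

        kernels = {               # flops per byte moved (assumed values)
            "advection":    0.5,
            "microphysics": 2.0,
            "radiation":    6.0,
        }

        for name, intensity in kernels.items():
            attainable = min(PEAK_GFLOPS, intensity * BANDWIDTH_GBS)
            bound = "memory-bound" if attainable < PEAK_GFLOPS else "compute-bound"
            print(f"{name:>12s}: {attainable:6.1f} GFLOP/s attainable ({bound})")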

  9. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2) and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
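
    A band-rating step of the kind described above can be sketched as a simple mapping from per-person daily consumption to a letter band, with harvested or recycled supply offsetting mains demand. The thresholds and the offset rule below are invented for illustration and are not the paper's benchmark bands.

        BANDS = [(80, "A"), (100, "B"), (120, "C"), (150, "D")]   # litres/person/day (assumed)

        def water_band(total_l_per_person_day, local_supply_l=0.0):
            # RWH/GW is assumed here to offset mains demand litre for litre.
            mains = max(total_l_per_person_day - local_supply_l, 0.0)
            for threshold, band in BANDS:
                if mains <= threshold:
                    return band
            return "E"

        print(water_band(135))                       # no local supply
        print(water_band(135, local_supply_l=40))    # with harvested supply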

  10. Hydrogen bonding interactions and supramolecular assemblies in 2-amino guanidinium 4-methyl benzene sulphonate crystal structure: Hirshfeld surfaces and computational calculations

    Science.gov (United States)

    Muthuraja, P.; Joselin Beaula, T.; Balachandar, S.; Bena Jothy, V.; Dhandapani, M.

    2017-10-01

    2-aminoguanidinium 4-methyl benzene sulphonate (AGMS), an organic compound with an extensive assembly of hydrogen bonding interactions, was crystallized at room temperature. The structure of the compound was confirmed by FT-IR, NMR and single crystal X-ray diffraction analysis. Numerous hydrogen bonded interactions were found to form supramolecular assemblies in the molecular structure. Fingerprint plots of the Hirshfeld surface analysis spell out the interactions in various directions. The molecular structure of AGMS was optimised by HF, MP2 and DFT (B3LYP and CAM-B3LYP) methods at the 6-311G (d,p) basis set and the geometrical parameters were compared. Electrostatic potential calculations of the reactants and product confirm the transfer of a proton. Optical properties of AGMS were ascertained by UV-Vis absorbance and reflectance spectra. The band gap of AGMS is found to be 2.689 eV. Due to numerous hydrogen bonds, the crystal is thermally stable up to 200 °C. Hyperconjugative interactions, which are responsible for the second hyperpolarizabilities, were accounted for by NBO analysis. Static and frequency-dependent optical properties were calculated at the HF and DFT levels. The hyperpolarizabilities of AGMS increase rapidly at frequencies of 0.0428 and 0.08 a.u. compared to the static case. The compound exhibits violet and blue emission.

  11. Computational benchmark for calculation of silane and siloxane thermochemistry.

    Science.gov (United States)

    Cypryk, Marek; Gostyński, Bartłomiej

    2016-01-01

    Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained.
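
    The reaction enthalpies benchmarked above are assembled from computed species enthalpies, e.g. for the chlorosilane hydrolysis R3SiCl + H2O -> R3SiOH + HCl. The sketch below shows that bookkeeping; the enthalpy values are placeholders in hartree, not results from the paper.

        HARTREE_TO_KJ = 2625.4996          # kJ/mol per hartree

        h298 = {                            # H(298 K) in hartree (illustrative numbers only)
            "Me3SiCl": -868.90,
            "H2O":      -76.42,
            "Me3SiOH": -484.75,
            "HCl":     -460.59,
        }

        def reaction_enthalpy(reactants, products, enthalpies):
            """Sum(products) - Sum(reactants), converted to kJ/mol."""
            dh = sum(enthalpies[s] for s in products) - sum(enthalpies[s] for s in reactants)
            return dh * HARTREE_TO_KJ

        dh = reaction_enthalpy(["Me3SiCl", "H2O"], ["Me3SiOH", "HCl"], h298)
        print(f"dH(hydrolysis) = {dh:.1f} kJ/mol")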

  12. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  13. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision...... and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods....

  14. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year the Danish National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results from a benchmarking model are published, in which the number of awards in each municipality is compared with the expected number of awards the municipality would have had ... with the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  15. Benchmarking of Monte Carlo based shutdown dose rate calculations for applications to JET.

    Science.gov (United States)

    Petrizzi, L; Batistoni, P; Fischer, U; Loughlin, M; Pereslavtsev, P; Villari, R

    2005-01-01

    The calculation of dose rates after shutdown is an important issue for operating nuclear reactors. A validated computational tool is needed for reliable dose rate calculations. In fusion reactors neutrons induce high levels of radioactivity and presumably high doses. The complex geometries of the devices require the use of sophisticated geometry modelling and computational tools for transport calculations. Simple rule-of-thumb laws do not always apply well. Two computational procedures have been developed recently and applied to fusion machines. Comparisons between the two methods showed some inherent discrepancies when applied to calculations for ITER, while good agreement was found for a 14 MeV point source neutron benchmark experiment. Further benchmarks were considered necessary to investigate in more detail the reasons for the different results in different cases. In this frame the application to the Joint European Torus (JET) machine has been considered as a useful benchmark exercise. In a first calculational benchmark with a representative D-T irradiation history of JET the two methods differed by no more than 25%. In another, more realistic benchmark exercise, which is the subject of this paper, the real irradiation histories of the D-T and D-D campaigns conducted at JET in 1997-98 were used to calculate the shutdown doses at different locations, irradiation times and decay times. Experimental dose data recorded at JET for the same conditions offer the possibility to check the prediction capability of the calculations and thus show the applicability (and the constraints) of the procedures and data to the rather complex shutdown dose rate analysis of real fusion devices. Calculation results obtained by the two methods are reported below; comparison with the experimental results gives discrepancies ranging between 2 and 10. The reasons for this can be ascribed to the high uncertainty of the experimental data and the unsatisfactory JET model used in the calculation. A new
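
    The comparison against measurements described above is typically reported as calculation-to-experiment (C/E) ratios per measurement position. The sketch below shows that bookkeeping with invented dose-rate values; it is not JET data.

        # C/E ratio sketch for shutdown dose rates at a few hypothetical positions.
        measured_uSv_h   = {"position A": 12.0, "position B": 8.5, "position C": 30.0}
        calculated_uSv_h = {"position A": 55.0, "position B": 20.0, "position C": 95.0}

        for location, measured in measured_uSv_h.items():
            ce = calculated_uSv_h[location] / measured
            print(f"{location:>10s}: C/E = {ce:4.1f}")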

  16. Integrated assembly and motion planning using regrasp graphs.

    Science.gov (United States)

    Wan, Weiwei; Harada, Kensuke

    2016-01-01

    This paper presents an integrated assembly and motion planning system to recursively find the assembly sequence and motions to assemble two objects with the help of a horizontal surface as the supporting fixture. The system is implemented in both assembly level and motion level. In the assembly level, the system checks all combinations of the assembly sequences and gets a set of candidates. Then, for each candidate assembly sequence, the system incrementally builds regrasp graphs and performs recursive search to find a pick-and-place motion in the motion level to manipulate the base object as well as to assemble the other object to the base. The system integrates the candidate assembly sequences computed in the assembly level incrementally and recursively with graph searching and motion planning in the motion level and plans the assembly sequences and motions integratedly for assembly tasks. Both simulation and real-world experiments are performed to demonstrate the efficacy of the integrated planning system.
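
    The motion-level search over a regrasp graph can be pictured as graph search where nodes are (grasp, placement) states of the manipulated object and edges are feasible pick-and-place or regrasp actions. The toy sketch below runs a breadth-first search over an invented graph; a real planner, as in the paper, would build the graph from grasp and stability analysis of the object models, and this is not the authors' implementation.

        from collections import deque

        regrasp_graph = {
            ("g1", "on_table_faceA"): [("g2", "on_table_faceA"), ("g1", "on_table_faceB")],
            ("g2", "on_table_faceA"): [("g2", "assembled")],
            ("g1", "on_table_faceB"): [("g3", "on_table_faceB")],
            ("g3", "on_table_faceB"): [("g3", "assembled")],
        }

        def plan(start, goal):
            """Breadth-first search returning a shortest state sequence, or None."""
            frontier, parents = deque([start]), {start: None}
            while frontier:
                state = frontier.popleft()
                if state == goal:
                    path = []
                    while state is not None:
                        path.append(state)
                        state = parents[state]
                    return path[::-1]
                for nxt in regrasp_graph.get(state, []):
                    if nxt not in parents:
                        parents[nxt] = state
                        frontier.append(nxt)
            return None

        print(plan(("g1", "on_table_faceA"), ("g2", "assembled")))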

  17. Assembling consumption

    DEFF Research Database (Denmark)

    Assembling Consumption marks a definitive step in the institutionalisation of qualitative business research. By gathering leading scholars and educators who study markets, marketing and consumption through the lenses of philosophy, sociology and anthropology, this book clarifies and applies...... the investigative tools offered by assemblage theory, actor-network theory and non-representational theory. Clear theoretical explanation and methodological innovation, alongside empirical applications of these emerging frameworks will offer readers new and refreshing perspectives on consumer culture and market...... societies. This is an essential reading for both seasoned scholars and advanced students of markets, economies and social forms of consumption....

  18. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    Directory of Open Access Journals (Sweden)

    Marcelo S. Reis

    2017-01-01

    Full Text Available In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
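
    The search space such a framework benchmarks is the Boolean lattice of all feature subsets, scored by a pluggable cost function. The brute-force toy below only makes sense for a handful of features and uses an invented cost function; it is not featsel's C++ implementation.

        from itertools import combinations

        features = ["f1", "f2", "f3", "f4"]

        def cost(subset):
            """Hypothetical cost: pretend f2 and f4 carry the signal, and penalise size."""
            relevant = {"f2", "f4"}
            missed = len(relevant - set(subset))
            return missed + 0.1 * len(subset)

        best = min(
            (frozenset(c) for r in range(len(features) + 1)
             for c in combinations(features, r)),
            key=cost,
        )
        print(sorted(best), cost(best))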

  19. Benchmark Tests on the New IBM RISC System/6000 590 Workstation

    Directory of Open Access Journals (Sweden)

    Harvey J. Wasserman

    1995-01-01

    Full Text Available The results of benchmark tests on the superscalar IBM RISC System/6000 Model 590 are presented. A set of well-characterized Fortran benchmarks spanning a range of computational characteristics was used for the study. The data from the 590 system are compared with those from a single-processor CRAY C90 system as well as with other microprocessor-based systems, such as the Digital Equipment Corporation AXP 3000/500X and the Hewlett-Packard HP/735.

  20. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  1. A screwing device for handling and assembly of micro screws

    DEFF Research Database (Denmark)

    Gegeckaite, Asta; Hansen, Hans Nørgaard; Eriksson, Torbjörn Gerhard

    2007-01-01

    specific requirements for the torque and displacement regarding precision and repeatability. Micro screws are used as critical mechanical components in micro assemblies such as watches, dials, computers and hearing aids. These miniature parts normally require manual assembly processes under magnification...

  2. Katome: de novo DNA assembler implemented in rust

    Science.gov (United States)

    Neumann, Łukasz; Nowak, Robert M.; Kuśmirek, Wiktor

    2017-08-01

    Katome is a new de novo sequence assembler written in the Rust programming language, designed with respect to future parallelization of the algorithms, run time and memory usage optimization. The application uses new algorithms for the correct assembly of repetitive sequences. Performance and quality tests were performed on various data, comparing the new application to `dnaasm', `ABySS' and `Velvet' genome assemblers. Quality tests indicate that the new assembler creates more contigs than well-established solutions, but the contigs have better quality with regard to mismatches per 100kbp and indels per 100kbp. Additionally, benchmarks indicate that the Rust-based implementation outperforms `dnaasm', `ABySS' and `Velvet' assemblers, written in C++, in terms of assembly time. Lower memory usage in comparison to `dnaasm' is observed.

  3. A fast mathematical programming procedure for simultaneous fitting of assembly components into cryoEM density maps.

    Science.gov (United States)

    Zhang, Shihua; Vasishtan, Daven; Xu, Min; Topf, Maya; Alber, Frank

    2010-06-15

    Single-particle cryo electron microscopy (cryoEM) typically produces density maps of macromolecular assemblies at intermediate to low resolution (approximately 5-30 A). By fitting high-resolution structures of assembly components into these maps, pseudo-atomic models can be obtained. Optimizing the quality-of-fit of all components simultaneously is challenging due to the large search space that makes the exhaustive search over all possible component configurations computationally unfeasible. We developed an efficient mathematical programming algorithm that simultaneously fits all component structures into an assembly density map. The fitting is formulated as a point set matching problem involving several point sets that represent component and assembly densities at a reduced complexity level. In contrast to other point matching algorithms, our algorithm is able to match multiple point sets simultaneously and not only based on their geometrical equivalence, but also based on the similarity of the density in the immediate point neighborhood. In addition, we present an efficient refinement method based on the Iterative Closest Point registration algorithm. The integer quadratic programming method generates an assembly configuration in a few seconds. This efficiency allows the generation of an ensemble of candidate solutions that can be assessed by an independent scoring function. We benchmarked the method using simulated density maps of 11 protein assemblies at 20 A, and an experimental cryoEM map at 23.5 A resolution. Our method was able to generate assembly structures with root-mean-square errors … The method is available as a Matlab code package. Supplementary data are available at Bioinformatics Online.
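
    The Iterative Closest Point refinement mentioned above repeatedly re-estimates a rigid transform between matched point sets. The sketch below shows only that superposition step (the Kabsch algorithm via SVD), assuming correspondences are already known; it is not the authors' integer-programming matcher.

        import numpy as np

        def kabsch(moving, fixed):
            """Return (R, t) such that moving @ R.T + t best fits fixed (least-squares)."""
            mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
            H = (moving - mc).T @ (fixed - fc)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            D = np.diag([1.0, 1.0, d])          # guard against reflections
            R = Vt.T @ D @ U.T
            t = fc - R @ mc
            return R, t

        rng = np.random.default_rng(1)
        fixed = rng.random((20, 3)) * 10.0                     # fake component points
        angle = np.pi / 6
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
        moving = fixed @ Rz.T + np.array([3.0, -1.0, 2.0])     # rotated + shifted copy

        R, t = kabsch(moving, fixed)
        rmsd = np.sqrt(((moving @ R.T + t - fixed) ** 2).sum(axis=1).mean())
        print(f"RMSD after superposition: {rmsd:.2e}")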

  4. General Assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11:00, Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organised once a year (article IV.2.1). Draft agenda: 1- Adoption of the agenda. 2- Approval of the minutes of the Ordinary General Assembly of 22 May 2014. 3- Presentation and approval of the 2014 activity report. 4- Presentation and approval of the 2014 financial report. 5- Presentation and approval of the auditors' report for 2014. 6- Programme for 2015. 7- Presentation and approval of the draft budget and the membership fee for 2015. 8- No amendments to the Statutes of the Staff Association proposed. 9- Election of the members of the Electoral Commission...

  5. General assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11:00, Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organised once a year (article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 22 May 2014. Presentation and approval of the 2014 activity report. Presentation and approval of the 2014 financial report. Presentation and approval of the auditors' report for 2014. Programme for 2015. Presentation and approval of the draft budget and the membership fee for 2015. No amendments to the Statutes of the Staff Association proposed. Election of the members of the Electoral Commission...

  6. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    Tuesday 5 April at 11:00, BE Auditorium Meyrin (6-2-024). In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organised once a year (article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 5 May 2015. Presentation and approval of the 2015 activity report. Presentation and approval of the 2015 financial report. Presentation and approval of the auditors' report for 2015. Work programme for 2016. Presentation and approval of the draft budget for 2016. Approval of the membership fee for 2017. Proposed amendments to the Statutes of the Staff Association. Election of the members of the Electoral Commission...

  7. General Assembly

    CERN Multimedia

    Staff Association

    2017-01-01

    In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organised once a year (article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 5 April 2016. Presentation and approval of the 2016 activity report. Presentation and approval of the 2016 financial report. Presentation and approval of the auditors' report for 2016. Work programme for 2017. Presentation and approval of the draft budget for 2017. Approval of the membership fee for 2018. Proposed amendments to the Statutes of the Staff Association. Election of the members of the Electoral Commission. Election of the audit...

  8. An IBM 370 assembly language program verifier

    Science.gov (United States)

    Maurer, W. D.

    1977-01-01

    The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.

  9. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.
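
    The paper defines its own evaluation protocol; purely as a generic illustration of how tracking accuracy is commonly scored on such benchmarks, the sketch below computes bounding-box overlap (intersection-over-union) and a thresholded success rate. The [x, y, w, h] box format, the 0.5 threshold and the toy boxes are assumptions, not values taken from the benchmark.

        def iou(box_a, box_b):
            """Intersection-over-union of two [x, y, w, h] bounding boxes."""
            ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
            bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
            iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
            ih = max(0.0, min(ay2, by2) - max(ay1, by1))
            inter = iw * ih
            union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
            return inter / union if union > 0 else 0.0

        def success_rate(pred_boxes, gt_boxes, threshold=0.5):
            """Fraction of frames whose predicted box overlaps ground truth above the threshold."""
            overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
            return sum(o > threshold for o in overlaps) / len(overlaps)

        # toy usage on two frames with made-up boxes
        print(success_rate([[10, 10, 50, 50], [12, 14, 48, 52]],
                           [[12, 12, 50, 50], [40, 40, 30, 30]]))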

  10. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  11. Benchmarking image fusion system design parameters

    Science.gov (United States)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
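
    As a loose illustration of the optimization setup described above (and not the authors' NVThermIP-based implementation), the sketch below runs a small integer-coded genetic algorithm that minimizes the gap between a modeled task-difficulty value and a benchmark value. The objective function, parameter count, integer ranges and GA settings are all hypothetical stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)

        def task_difficulty(params):
            """Hypothetical stand-in for a task-difficulty model of the fused system."""
            return float(np.sum((params - np.array([4, 7, 2, 9])) ** 2))

        BENCHMARK_DIFFICULTY = 0.0     # assumed benchmark task-difficulty value
        LOW, HIGH = 0, 10              # assumed integer range of each fused-system parameter
        N_PARAMS, POP, GENERATIONS = 4, 40, 60

        def fitness(p):
            # Gap between the modeled and the benchmark task difficulty (to be minimized).
            return abs(task_difficulty(p) - BENCHMARK_DIFFICULTY)

        pop = rng.integers(LOW, HIGH + 1, size=(POP, N_PARAMS))
        for _ in range(GENERATIONS):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[:POP // 2]]            # truncation selection
            cuts = rng.integers(1, N_PARAMS, size=POP // 2)
            children = np.array([np.concatenate([a[:c], b[c:]])     # one-point crossover
                                 for a, b, c in zip(parents, parents[::-1], cuts)])
            mutate = rng.random(children.shape) < 0.1               # random integer mutation
            children[mutate] = rng.integers(LOW, HIGH + 1, size=int(mutate.sum()))
            pop = np.vstack([parents, children])

        best = min(pop, key=fitness)
        print("best integer parameters:", best, "difficulty gap:", fitness(best))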

  12. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...

  13. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  14. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  15. Benchmark graphs for testing community detection algorithms

    Science.gov (United States)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e., the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
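
    Benchmark graphs of this kind are available in common libraries; for instance, assuming networkx and scikit-learn are installed, a minimal test of a modularity-based community detection method against the planted partition could look like the sketch below. The parameter values follow the networkx documentation example for LFR_benchmark_graph and are not taken from the paper.

        import networkx as nx
        from networkx.algorithms import community
        from sklearn.metrics import normalized_mutual_info_score

        # Generate an LFR benchmark graph: power-law degree (tau1) and community-size (tau2)
        # distributions, mixing parameter mu (fraction of inter-community edges per node).
        G = nx.LFR_benchmark_graph(n=250, tau1=3, tau2=1.5, mu=0.1,
                                   average_degree=5, min_community=20, seed=10)

        # The planted ground-truth community of each node is stored as a node attribute.
        truth = {frozenset(G.nodes[v]["community"]) for v in G}
        true_labels = [next(i for i, c in enumerate(truth) if v in c) for v in G]

        # Detect communities by greedy modularity optimization and compare to the ground truth.
        found = community.greedy_modularity_communities(G)
        found_labels = [next(i for i, c in enumerate(found) if v in c) for v in G]
        print("NMI vs planted partition:",
              normalized_mutual_info_score(true_labels, found_labels))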

  16. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark the reading abilities of Year Five students in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  17. What is the impact of subject benchmarking?

    OpenAIRE

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For this study, semi-structured interviews were ...

  18. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  19. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  20. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  1. Benchmarking 2008: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2008

    2008-01-01

    Through Grantmakers for Education's (GFE's) "Benchmarking 2008" research report, the researchers sought to make the act of sharing among GFE members as easy and worthwhile as possible. The researchers started with an online survey, which was completed by education grantmakers from more than 150 organizations. They analyzed their responses for…

  2. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  3. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  4. Benchmarking older persons mental health organizations.

    Science.gov (United States)

    McKay, Rod; McDonald, Regina; Coombs, Tim

    2011-02-01

    This paper offers a reflection about the outcomes of the older persons benchmarking forums that formed part of the National Mental Health Benchmarking Project (NMHBP). Seven older persons mental health organizations took part in the NMHBP. Representatives from these organizations attended eight benchmarking forums at which they documented their performance against relevant key performance indicators (KPIs). In order to better understand the differential performance of organizations on particular KPIs, participants gathered additional contextual data. This included organization-level data, data on the local catchment area, and data with which to profile the consumers accessing services through the given organization. Participants' average performance on some indicators was stable over time, while the average performance on others demonstrated fluctuations. Perhaps more importantly, the inter-organization range for almost all of the indicators was substantial. The older persons benchmarking forums provided an opportunity for participants to gauge the performance of their own organizations on a range of KPIs, come to understand some of the reasons for their own organization's performance and that of their counterparts, consider which of these reasons may be within their control, and reflect upon opportunities for quality improvement within their own organizations.

  5. Assembly and annotation of a non-model gastropod (Nerita melanotragus) transcriptome: a comparison of de novo assemblers.

    Science.gov (United States)

    Amin, Shorash; Prentis, Peter J; Gilding, Edward K; Pavasovic, Ana

    2014-08-01

    The sequencing, de novo assembly and annotation of transcriptome datasets generated with next generation sequencing (NGS) has enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation of transcriptomes, however, is a critically important step for transcriptome assemblies generated from short read sequences. Typical benchmarks for assembly and annotation reliability have been performed with model species. To address the reliability and accuracy of de novo transcriptome assembly in non-model species, we generated an RNAseq dataset for an intertidal gastropod mollusc species, Nerita melanotragus, and compared the assemblies produced by four different de novo transcriptome assemblers (Velvet, Oases, Geneious and Trinity) on a number of quality metrics and redundancy. Transcriptome sequencing on the Ion Torrent PGM™ produced 1,883,624 raw reads with a mean length of 133 base pairs (bp). Both the Trinity and Oases de novo assemblers produced the best assemblies based on all quality metrics, including fewer contigs, increased N50 and average contig length, and contigs of greater length. Overall, the BLAST and annotation success of our assemblies was not high, with only 15-19% of contigs assigned a putative function. We believe that any improvement in annotation success of gastropod species will require more gastropod genome sequences, but in particular an increase in mollusc protein sequences in public databases. Overall, this paper demonstrates that reliable and accurate de novo transcriptome assemblies can be generated from short read sequencers with the right assembly algorithms.
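
    The contig-count, mean-length and N50 statistics used to compare the assemblers are straightforward to compute from a list of contig lengths; a minimal sketch (with made-up toy lengths) is shown below.

        def assembly_stats(contig_lengths):
            """Contig count, mean contig length and N50 from a list of contig lengths."""
            lengths = sorted(contig_lengths, reverse=True)
            total = sum(lengths)
            running, n50 = 0, 0
            for length in lengths:
                running += length
                if running >= total / 2:      # N50: shortest contig in the set covering half the bases
                    n50 = length
                    break
            return {"contigs": len(lengths),
                    "mean_length": total / len(lengths),
                    "N50": n50}

        # toy example with made-up contig lengths (base pairs)
        print(assembly_stats([1200, 900, 850, 400, 300, 150]))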

  6. ORCODE. 77: a computer routine to control a nuclear physics experiment by a PDP-15 + CAMAC system, written in assembler language and including many new routines of general interest

    Energy Technology Data Exchange (ETDEWEB)

    Dickens, J.K.; McConnell, J.W.

    1977-03-29

    ORCODE.77 is a versatile data-handling computer routine written in MACRO (assembler) language for a PDP-15 computer with EAE (extended arithmetic capability) connected to a CAMAC interface. The Interrupt feature of the computer is utilized. Although the code is oriented for a specific experimental problem, there are many routines of general interest, including a CAMAC Scaler handler, an executive routine to interpret and act upon three-character teletype commands, concise routines to type out double-precision integers (both octal and decimal) and floating-point numbers and to read in integers and floating-point numbers, a routine to convert to and from PDP-15 FORTRAN-IV floating-point format, a routine to handle clock interrupts, and our own DECTAPE handling routine. Routines having specific applications which are applicable to other very similar applications include a display routine using CAMAC instructions, control of external mechanical equipment using CAMAC instructions, storage of data from an Analog-to-digital Converter, analysis of stored data into time-dependent pulse-height spectra, and a routine to read the contents of a Nuclear Data 5050 Analyzer and to prepare DECTAPE output of these data for subsequent analysis by a code written in PDP-15-compiled FORTRAN-IV.

  7. Large-scale 16S gene assembly using metagenomics shotgun sequences.

    Science.gov (United States)

    Zeng, Feng; Wang, Zicheng; Wang, Ying; Zhou, Jizhong; Chen, Ting

    2017-05-15

    Combining a 16S rRNA (16S) gene database with metagenomic shotgun sequences promises unbiased identification of known and novel microbes. To achieve this, we herein report reference-based ribosome assembly (RAMBL), a computational pipeline that integrates taxonomic tree search and Dirichlet process clustering to reconstruct full-length 16S gene sequences from metagenomic sequencing data with high accuracy. By benchmarking against synthetic and real shotgun sequences, we demonstrated that the full-length 16S gene assemblies from RAMBL were a good proxy for known and putative microbes, including the Candidate Phyla Radiation. We found that 30-40% of bacterial genera in the terrestrial and intestinal biomes have no closely related genome sequences. We also observed that RAMBL was able to generate a more accurate determination of environmental microbial diversity and yield better disease classification, suggesting that full-length 16S gene assemblies are a powerful alternative to marker gene sets and 16S short reads. RAMBL is the first to provide access to full-length 16S gene sequences in near-terabase-scale metagenomic shotgun sequences, which markedly improves metagenomic data analysis and interpretation. RAMBL is available at https://github.com/homopolymer/RAMBL for academic use. zengfeng@xmu.edu.cn. Supplementary data are available at Bioinformatics online.

  8. A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Alvarez, Fernando; Plum, Christian Edinger Munk

    2014-01-01

    The potential for making cost-effective and energy-efficient liner-shipping networks using operations research (OR) is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced the performance of airlines, railways, and general transportation companies, but within the field... problem to be strongly NP-hard. A benchmark suite of data instances to reflect the business structure of a global liner-shipping network is presented. The design of the benchmark suite is discussed in relation to industry standards, business rules, and mathematical programming. The data are based on real-life data from the largest global liner-shipping company, Maersk Line, and supplemented by data from several industry and public stakeholders. Computational results yielding the first best known solutions for six of the seven benchmark instances are provided using a heuristic combining tabu search...

  9. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is part of that effort, focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes an in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.

  10. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

    KAUST Repository

    Heilbron, Fabian Caba

    2015-06-02

    In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring in manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.

  11. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  12. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  13. A novel video dataset for change detection benchmarking.

    Science.gov (United States)

    Goyette, Nil; Jodoin, Pierre-Marc; Porikli, Fatih; Konrad, Janusz; Ishwar, Prakash

    2014-11-01

    Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90,000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries, an effort that goes much beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, the quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
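
    The dataset's own evaluation tools define the exact protocol; purely as a generic illustration, per-frame precision, recall and F-measure against a ground-truth change mask can be computed as in the sketch below, assuming binary NumPy masks and made-up toy data.

        import numpy as np

        def change_detection_scores(pred_mask, gt_mask):
            """Precision, recall and F-measure for one binary change mask (True = change)."""
            pred = pred_mask.astype(bool)
            gt = gt_mask.astype(bool)
            tp = np.logical_and(pred, gt).sum()
            fp = np.logical_and(pred, ~gt).sum()
            fn = np.logical_and(~pred, gt).sum()
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f_measure = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
            return precision, recall, f_measure

        # toy 4x4 masks
        pred = np.array([[1, 1, 0, 0]] * 4)
        gt = np.array([[1, 0, 0, 0]] * 4)
        print(change_detection_scores(pred, gt))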

  14. Fourth Doctoral Student Assembly

    CERN Multimedia

    Ingrid Haug

    2016-01-01

    On 10 May, over 130 PhD students and their supervisors, from both CERN and partner universities, gathered for the 4th Doctoral Student Assembly in the Council Chamber.   The assembly was followed by a poster session, at which eighteen doctoral students presented the outcome of their scientific work. The CERN Doctoral Student Programme currently hosts just over 200 students in applied physics, engineering, computing and science communication/education. The programme has been in place since 1985. It enables students to do their research at CERN for a maximum of three years and to work on a PhD thesis, which they defend at their University. The programme is steered by the TSC committee, which holds two selection committees per year, in June and December. The Doctoral Student Assembly was opened by the Director-General, Fabiola Gianotti, who stressed the importance of the programme in the scientific environment at CERN, emphasising that there is no more rewarding activity than lear...

  15. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of applying benchmarking in the telecommunication sphere. It examines the essence of benchmarking by generalising the approaches of different scholars to defining the notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine an operator's success in a modern market economy, and the mechanism and component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market, identifying the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of applying benchmarking, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  16. Property-Preserving Generation of Tailored Benchmark Petri Nets

    NARCIS (Netherlands)

    Bernhard, Steffen; Jasper, Marc; Meijer, Jeroen; van de Pol, Jaco

    A bottleneck in the validation and evaluation of analysis and verification tools for distributed systems is the shortage of benchmark problems. Specifically designed benchmark problems are typically artificial, rare, and small, and it is difficult to guarantee challenging properties of realistic

  17. A Pratical Benchmark for Quality Certification in Business Incubators

    National Research Council Canada - National Science Library

    Carmo, João Paulo do; Santos, Christian Mariani Lucas dos; Barros, João Paulo Soares de

    2017-01-01

    This study aims to create a practical quality benchmark for incubators. The benchmark is a model to be followed in order to help incubators deploy all processes defined by the quality accreditation label called Centro de...

  18. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose

  19. Effect of H-bonding interactions of water molecules in the self assembly of supramolecular architecture-joint experimental and computational studies

    Science.gov (United States)

    Jassal, Amanpreet Kaur; Kaur, Rajwinder; Islam, Nasarul; Anu; Mudsainiyan, Rahul Kumar

    2017-08-01

    A new {[Cu(4,4′-BP)2.(H2O)4].2,6-NDC.3(H2O)} complex has been synthesized by refluxing Cu(NO3)2, 2,6-NDC and 4,4′-BP (1:1:1 ratio) (2,6-NDC = 2,6-naphthalene dicarboxylic acid, 4,4′-BP = 4,4′-bipyridine) in a methanol/ammonia mixture and characterized by various spectroscopic techniques. The geometry around the Cu2+ ion in the cationic complex is typical octahedral, while the deprotonated 2,6-NDC acts as the charge-balancing counter-anionic part. Water molecules (lattice and coordinated) also play an important role in the self-assembly by forming an H-bonded supramolecular architecture involving strong inter/intramolecular secondary interactions. The luminescence property and thermogravimetric analyses were also investigated. The intermolecular interactions in the molecular and crystal structures of this complex were compared and discussed using Hirshfeld surface analysis and 2D fingerprint plots. Hirshfeld surface analysis indicates that H⋯H, O⋯H and π···π contacts account for 40.4, 19.3 and 7.7%, respectively, of the total Hirshfeld surface area. The DFT calculation at the CAM-B3LYP level of theory revealed the existence of three hydrogen bonds in the complex. These hydrogen bonds exist between the oxygen atoms of the ligand and the hydrogen atoms of the coordinated water molecules.

  20. Rapid, High-Throughput Identification of Anthrax-Causing and Emetic Bacillus cereus Group Genome Assemblies via BTyper, a Computational Tool for Virulence-Based Classification of Bacillus cereus Group Isolates by Using Nucleotide Sequencing Data

    Science.gov (United States)

    Carroll, Laura M.; Miller, Rachel A.; Wiedmann, Martin

    2017-01-01

    The Bacillus cereus group comprises nine species, several of which are pathogenic. Differentiating between isolates that may cause disease and those that do not is a matter of public health and economic importance, but it can be particularly challenging due to the high genomic similarity within the group. To this end, we have developed BTyper, a computational tool that employs a combination of (i) virulence gene-based typing, (ii) multilocus sequence typing (MLST), (iii) panC clade typing, and (iv) rpoB allelic typing to rapidly classify B. cereus group isolates using nucleotide sequencing data. BTyper was applied to a set of 662 B. cereus group genome assemblies to (i) identify anthrax-associated genes in non-B. anthracis members of the B. cereus group, and (ii) identify assemblies from B. cereus group strains with emetic potential. With BTyper, the anthrax toxin genes cya, lef, and pagA were detected in 8 genomes classified by the NCBI as B. cereus that clustered into two distinct groups using k-medoids clustering, while either the B. anthracis poly-γ-d-glutamate capsule biosynthesis genes capABCDE or the hyaluronic acid capsule hasA gene was detected in an additional 16 assemblies classified as either B. cereus or Bacillus thuringiensis isolated from clinical, environmental, and food sources. The emetic toxin genes cesABCD were detected in 24 assemblies belonging to panC clades III and VI that had been isolated from food, clinical, and environmental settings. The command line version of BTyper is available at https://github.com/lmc297/BTyper. In addition, BMiner, a companion application for analyzing multiple BTyper output files in aggregate, can be found at https://github.com/lmc297/BMiner. IMPORTANCE: Bacillus cereus is a foodborne pathogen that is estimated to cause tens of thousands of illnesses each year in the United States alone. Even with molecular methods, it can be difficult to distinguish nonpathogenic B. cereus group isolates from their