WorldWideScience

Sample records for core computational benchmark

  1. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) that is attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic-ray rejection algorithm and a computationally-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.
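
    The MPI-based, embarrassingly-parallel pattern that CRBLASTER is built around can be illustrated with a small sketch. This is not the CRBLASTER source; it is a hypothetical mpi4py example that assumes the image is split into independent row blocks, with rank 0 scattering work and gathering results, and a placeholder filter standing in for the cosmic-ray rejection step.

      # Hypothetical sketch of the embarrassingly-parallel MPI pattern described
      # above (not the CRBLASTER code): rank 0 splits an image into row blocks,
      # every rank cleans its own block independently, and rank 0 reassembles.
      import numpy as np
      from mpi4py import MPI

      def reject_outliers(block, threshold=5.0):
          # Placeholder for a per-pixel cleaning step (e.g. cosmic-ray rejection).
          med = np.median(block)
          mad = np.median(np.abs(block - med)) + 1e-12
          return np.where(np.abs(block - med) > threshold * mad, med, block)

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          image = np.random.poisson(100.0, size=(1024, 1024)).astype(float)
          blocks = np.array_split(image, size, axis=0)   # independent row blocks
      else:
          blocks = None

      block = comm.scatter(blocks, root=0)               # distribute the work
      cleaned = reject_outliers(block)                   # purely local computation
      result = comm.gather(cleaned, root=0)              # collect the results

      if rank == 0:
          print("reassembled image:", np.vstack(result).shape)

    Run with, for example, "mpiexec -n 4 python sketch.py"; each rank does purely local work, which is what makes the pattern embarrassingly parallel.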

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was examined in the context of our methodology for CPU performance characterization, which is based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized more specifically in this report.

  3. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  4. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  5. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  6. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis is performed on neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross section differs significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of the ADF to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.

  7. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. These applications, which require a fair amount of I/O to move data to and from main memory and secondary storage, are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well-designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data to/from main memory and disk. Various methods were introduced in the report for performing I/O operations. The I/O method to use depends on the design of the out-of-core algorithm. Conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm; the out-of-core algorithm must be designed with this in mind from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retro-fitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
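
    The panel-by-panel out-of-core access pattern discussed above can be sketched as follows. This is a hypothetical illustration using a memory-mapped file, not the modified HPL code; only the I/O pattern (one column panel resident in memory at a time) is shown, with the actual factorization work left as a placeholder.

      # Hypothetical sketch of panel-by-panel out-of-core access: the matrix
      # lives on disk and only one column panel is held in memory at a time.
      # This shows the I/O pattern only, not the modified HPL algorithm.
      import numpy as np

      N, PANEL = 4096, 256                       # matrix order and panel width
      a = np.memmap("matrix.bin", dtype=np.float64, mode="w+", shape=(N, N))
      a[:] = np.random.rand(N, N)                # write the matrix to disk once
      a.flush()

      for j in range(0, N, PANEL):
          panel = np.array(a[:, j:j + PANEL])    # explicit read: disk -> memory
          panel *= 1.0                           # placeholder for factor/update work
          a[:, j:j + PANEL] = panel              # explicit write: memory -> disk
      a.flush()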

  8. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc.; (2) enumerate and classify effective strategies for coding and optimizing for these new processors; (3) assess difficulties and opportunities for tool or higher-level language support; and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
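
    The kind of kernel characterization listed under goal (1) can be sketched with a simple roofline-style estimate that combines a kernel's flop count and memory traffic with assumed machine numbers. The peak rate, bandwidth, and example kernel below are hypothetical, not measurements of the WRF kernels.

      # Roofline-style sketch for characterizing a kernel by its arithmetic
      # intensity (hypothetical machine numbers; not actual WRF kernel data).
      PEAK_GFLOPS = 100.0          # assumed peak floating-point rate, GF/s
      PEAK_BW = 25.0               # assumed sustained memory bandwidth, GB/s

      def roofline(flops, bytes_moved):
          intensity = flops / bytes_moved                  # flops per byte
          return intensity, min(PEAK_GFLOPS, intensity * PEAK_BW)

      # Example kernel: y[i] += a * x[i] over 1e7 doubles (2 flops, 24 bytes per element).
      n = 10_000_000
      ai, bound = roofline(2 * n, 24 * n)
      print(f"arithmetic intensity = {ai:.3f} flop/byte, performance bound ~ {bound:.1f} GF/s")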

  9. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...

  10. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
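
    The abstract does not spell out the linear-programming benchmarking model, so the sketch below assumes a conventional data-envelopment-analysis (DEA) style efficiency score computed in the clear; the multiparty-computation layer (the SPDZ protocol) is deliberately not reproduced. The unit data, inputs, and outputs are hypothetical.

      # Toy plaintext sketch of LP-based efficiency benchmarking in a DEA style
      # (an assumed model family; the paper's MPC-protected computation using
      # the SPDZ protocol is not reproduced here).
      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical data: rows = units (e.g. farms), columns = inputs / outputs.
      inputs = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 4.0], [8.0, 5.0]])
      outputs = np.array([[10.0], [12.0], [11.0], [13.0]])
      n, m, s = inputs.shape[0], inputs.shape[1], outputs.shape[1]
      k = 0                                            # unit being benchmarked

      # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
      c = np.concatenate(([1.0], np.zeros(n)))
      # Inputs:  sum_j lambda_j x_ij - theta * x_ik <= 0
      A_in = np.hstack((-inputs[k].reshape(m, 1), inputs.T))
      # Outputs: sum_j lambda_j y_rj >= y_rk  ->  -sum_j lambda_j y_rj <= -y_rk
      A_out = np.hstack((np.zeros((s, 1)), -outputs.T))

      res = linprog(c, A_ub=np.vstack((A_in, A_out)),
                    b_ub=np.concatenate((np.zeros(m), -outputs[k])),
                    bounds=[(None, None)] + [(0, None)] * n)
      print("efficiency score for unit", k, "=", round(res.x[0], 3))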

  11. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
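
    The eigenvalue comparison described above amounts to simple arithmetic against the benchmark value and its one-sigma uncertainty; a sketch with hypothetical calculated values (only the benchmark keff and its uncertainty are taken from the abstract) is:

      # Sketch of comparing calculated eigenvalues with a benchmark value;
      # the calculated numbers below are hypothetical placeholders.
      k_benchmark, sigma = 1.0012, 0.0029
      calculated = {"code/library A": 1.0045, "code/library B": 1.0068}
      for name, k in calculated.items():
          bias_pct = 100.0 * (k - k_benchmark) / k_benchmark
          n_sigma = (k - k_benchmark) / sigma
          print(f"{name}: bias = {bias_pct:+.2f} %  ({n_sigma:+.1f} sigma)")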

  12. VENUS-F: A fast lead critical core for benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kochetkov, A.; Wagemans, J.; Vittiglio, G. [SCK.CEN, Boeretang 200, 2400 Mol (Belgium)

    2011-07-01

    The zero-power thermal neutron water-moderated facility VENUS at SCK-CEN has been extensively used for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator driven systems), the VENUS facility was modified in 2007-2010 into the fast neutron facility VENUS-F with solid core components. This paper introduces the projects GUINEVERE and FREYA, which are being conducted at the VENUS-F facility, and it presents the measurement results obtained at the first critical core. Throughout the projects other fast lead benchmarks also will be investigated. The measurement results of the different configurations can all be used as fast neutron benchmarks. (authors)

  13. Core of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Prof. C.P.Chandgude

    2017-04-01

    Full Text Available Advancement in computing facilities dates back to the 1960s with the introduction of mainframes. Each earlier form of computing has had one issue or another, and cloud computing was introduced with these in mind. Cloud computing has its roots in older technologies such as hardware virtualization, distributed computing, internet technologies, and autonomic computing. Cloud computing can be described with two models: one is the service model and the second is the deployment model. While providing several services, cloud management’s primary role is resource provisioning. While there are several such benefits of cloud computing, there are challenges in adopting public clouds because of the dependency on infrastructure that is shared by many enterprises. In this paper, we present core knowledge of cloud computing, highlighting its key concepts, deployment models, service models, and benefits, as well as security issues related to cloud data. The aim of this paper is to provide a better understanding of cloud computing and to identify important research directions in this field.

  14. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.

  15. The design of a scalable, fixed-time computer benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM™, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmarks also can be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
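
    The fixed-time principle can be sketched generically: rather than timing a fixed problem, the problem is grown until a fixed time budget is exhausted, and the largest size completed within budget is reported. The workload and growth factor below are placeholders, not the SLALOM benchmark itself.

      # Generic fixed-time benchmarking sketch (illustrative, not SLALOM).
      import time
      import numpy as np

      def workload(n):
          # Placeholder problem of size n (here an n x n matrix multiply).
          a = np.random.rand(n, n)
          np.dot(a, a)

      def fixed_time_benchmark(budget_seconds):
          n, largest = 64, 0
          while True:
              start = time.perf_counter()
              workload(n)
              if time.perf_counter() - start > budget_seconds:
                  break
              largest = n              # largest size finished within the budget
              n = int(n * 1.25)        # grow the problem and try again
          return largest

      print("largest problem size within budget:", fixed_time_benchmark(5.0))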

  16. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.
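
    The data-analysis step shared by randomized benchmarking protocols, fitting the average sequence fidelity to an exponential decay in sequence length, can be sketched as follows. This is the generic single-qubit analysis, not the measurement-based protocol of the paper, and the survival data below are synthetic.

      # Generic randomized-benchmarking analysis sketch: fit average survival
      # probability F(m) = A * p**m + B and convert the decay parameter p into
      # an average error per gate (synthetic data; not the paper's protocol).
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p**m + B

      lengths = np.array([2, 4, 8, 16, 32, 64, 128])
      survival = 0.5 * 0.98**lengths + 0.5 + np.random.normal(0, 0.005, lengths.size)

      (A, p, B), _ = curve_fit(decay, lengths, survival, p0=(0.5, 0.95, 0.5))
      d = 2                                      # single-qubit Hilbert space dimension
      error_per_gate = (d - 1) * (1 - p) / d     # standard RB error-rate formula
      print(f"decay p = {p:.4f}, average error per gate = {error_per_gate:.2e}")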

  17. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods which are used in the nuclear design and analyses process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  18. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  19. Benchmarking spin-state chemistry in starless core models

    CERN Document Server

    Sipilä, O; Harju, J

    2015-01-01

    Aims. We aim to present simulated chemical abundance profiles for a variety of important species, with special attention given to spin-state chemistry, in order to provide reference results against which present and future models can be compared. Methods. We employ gas-phase and gas-grain models to investigate chemical abundances in physical conditions corresponding to starless cores. To this end, we have developed new chemical reaction sets for both gas-phase and grain-surface chemistry, including the deuterated forms of species with up to six atoms and the spin-state chemistry of light ions and of the species involved in the ammonia and water formation networks. The physical model is kept simple in order to facilitate straightforward benchmarking of other models against the results of this paper. Results. We find that the ortho/para ratios of ammonia and water are similar in both gas-phase and gas-grain models, at late times in particular, implying that the ratios are determined by gas-phase processes. We d...

  20. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  1. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  2. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  3. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  4. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  5. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    Science.gov (United States)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  6. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled with each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3] and an analysis of absorber rod worth measurements for Cores 9 and 10 have been performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  7. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  8. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  9. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    Science.gov (United States)

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirements, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions that agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  10. Benchmark calculation for water reflected STACY cores containing low enriched uranyl nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    In order to validate the availability of criticality calculation codes and the related nuclear data library, a series of fundamental benchmark experiments on low enriched uranyl nitrate solution have been performed with the Static Experiment Criticality Facility (STACY) at JAERI. The basic core, composed of a single tank with a water reflector, was used for accumulating systematic data with well-known experimental uncertainties. This paper presents the outline of the core configurations of STACY, the standard calculation model, and calculation results with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)

  11. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITYEvaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  12. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  13. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from a Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  14. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto

    2015-01-01

    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 and up to 10^5 cores. The procedure of the GEP is decomposed into two subprocedures: the reducer to the standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed by choosing a routine for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA, and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. Detailed analysis of the results implies that the reducer can be a bottleneck in next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
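
    The two-stage structure described above, a reducer from the generalized problem to a standard one followed by a standard-eigenvalue solver, can be illustrated in serial form on a small dense problem. This is only the underlying mathematics, not the ScaLAPACK/ELPA/EigenExa middleware.

      # Serial toy illustration of the GEP -> SEP reduction plus SEP solve that
      # the hybrid solvers parallelize (not the actual middleware code).
      import numpy as np
      from scipy.linalg import cholesky, eigh, solve_triangular

      n = 200
      A = np.random.rand(n, n); A = (A + A.T) / 2             # symmetric A
      B = np.random.rand(n, n); B = B @ B.T + n * np.eye(n)   # positive-definite B

      # Reducer: with B = L L^T, A x = lam B x becomes (L^-1 A L^-T) y = lam y.
      L = cholesky(B, lower=True)
      C = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True).T

      # SEP solver, then back-transform the eigenvectors: x = L^-T y.
      lam, Y = eigh(C)
      X = solve_triangular(L.T, Y, lower=False)

      # Cross-check against the library's direct generalized solver.
      lam_ref = eigh(A, B, eigvals_only=True)
      print("max eigenvalue difference:", np.max(np.abs(lam - lam_ref)))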

  15. JACOB: a dynamic database for computational chemistry benchmarking.

    Science.gov (United States)

    Yang, Jack; Waller, Mark P

    2012-12-21

    JACOB (just a collection of benchmarks) is a database that contains four diverse benchmark studies, which in turn include 72 data sets, with a total of 122,356 individual results. The database is constructed upon a dynamic web framework that allows users to retrieve data from the database via predefined categories. Additional flexibility is made available via user-defined text-based queries. Requested sets of results are then automatically presented as bar graphs, with parameters of the graphs being controllable via the URL. JACOB is currently available at www.wallerlab.org/jacob.

  16. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    Full Text Available This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion NODAL3 code for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, the power peak, the time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For productive work on PWR transient analysis, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  17. Processor core model for quantum computing.

    Science.gov (United States)

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-09

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.

  18. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, which is a conventional benchmark based on processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main causes suppressing parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
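
    The two communication quantities singled out above, start-up time (latency) and bandwidth, are typically measured with a two-rank ping-pong; a generic mpi4py sketch (hypothetical, not the Level 2 specification itself) is:

      # Generic two-rank ping-pong for estimating message start-up time and
      # bandwidth (hypothetical example; run with at least 2 MPI ranks).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      if comm.Get_size() < 2:
          raise SystemExit("run with at least 2 MPI ranks")
      REPS = 100

      for nbytes in (8, 1024, 1024 * 1024):
          buf = np.zeros(nbytes, dtype=np.uint8)
          comm.Barrier()
          t0 = MPI.Wtime()
          for _ in range(REPS):
              if rank == 0:
                  comm.Send(buf, dest=1, tag=0)
                  comm.Recv(buf, source=1, tag=1)
              elif rank == 1:
                  comm.Recv(buf, source=0, tag=0)
                  comm.Send(buf, dest=0, tag=1)
          dt = (MPI.Wtime() - t0) / (2 * REPS)   # one-way time per message
          if rank == 0:
              # Small messages approximate start-up time; large ones give bandwidth.
              print(f"{nbytes:>8} bytes: {dt * 1e6:8.2f} us one-way")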

  19. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include updated evaluation of the initial six critical core configurations (five annular and one fully-loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core, four isothermal temperature reactivity coefficient measurements for the fully-loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  20. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, some background on the Top500 list, and the challenges of developing such a metric; we discuss our design philosophy and methodology and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  1. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data, and the speed of their manipulation to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links, are essential steps of all high performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation, required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache and register-blocking technique results in their optimum utilisation with consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22, and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers with the added advantage of speed and a high degree of parallelism.
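
    The cache-blocking idea mentioned above can be illustrated with a tiled matrix product; the point of the sketch is the blocked access pattern (a small working set reused many times), not a speed claim, since NumPy's own @ operator already calls a blocked BLAS internally.

      # Illustrative cache-blocking sketch (generic example, not the paper's
      # test codes): the same product computed directly and in tiles.
      import numpy as np

      def blocked_matmul(a, b, block=64):
          n = a.shape[0]
          c = np.zeros((n, n))
          for i0 in range(0, n, block):
              for k0 in range(0, n, block):
                  for j0 in range(0, n, block):
                      # Each update touches only block-sized tiles, so the
                      # working set fits in cache and is reused many times.
                      c[i0:i0 + block, j0:j0 + block] += (
                          a[i0:i0 + block, k0:k0 + block]
                          @ b[k0:k0 + block, j0:j0 + block])
          return c

      n = 512
      a, b = np.random.rand(n, n), np.random.rand(n, n)
      print("max difference vs direct product:",
            np.max(np.abs(blocked_matmul(a, b) - a @ b)))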

  2. Benchmarking CRBLASTER on the 350-MHz 49-core Maestro Development Board

    CERN Document Server

    Mighell, Kenneth J

    2012-01-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 ...

  3. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France); Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France); Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation); Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)

    2001-07-01

    The increase of fuel burnup in core management raises the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented by the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% P_N) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a next step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  4. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  5. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Science.gov (United States)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly-heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the diffusion operator with two energy groups for the core calculation.

  6. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
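
    As a minimal illustration of the spreadsheet-level comparison mentioned above, the sketch below computes an annual ingestion dose from a unit radionuclide concentration in water. The intake rate and dose conversion factor are hypothetical placeholders, not values taken from the report.

```python
# Minimal sketch of a unit-concentration ingestion-dose calculation,
# in the spirit of the spreadsheet-level comparison described above.
# All parameter values are illustrative placeholders, not report data.

WATER_CONC_BQ_PER_L = 1.0        # unit radionuclide concentration in water (Bq/L)
INTAKE_L_PER_YEAR = 730.0        # hypothetical drinking-water intake (L/yr)
DCF_SV_PER_BQ = 2.8e-8           # hypothetical ingestion dose conversion factor (Sv/Bq)

def ingestion_dose_sv_per_year(conc_bq_per_l: float,
                               intake_l_per_year: float,
                               dcf_sv_per_bq: float) -> float:
    """Annual dose = concentration * intake rate * dose conversion factor."""
    return conc_bq_per_l * intake_l_per_year * dcf_sv_per_bq

if __name__ == "__main__":
    dose = ingestion_dose_sv_per_year(WATER_CONC_BQ_PER_L,
                                      INTAKE_L_PER_YEAR,
                                      DCF_SV_PER_BQ)
    print(f"Ingestion dose for unit concentration: {dose:.3e} Sv/yr")
```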

  7. Benchmarking of computational approaches for fast screening of lithium ion battery electrolyte solvents

    Science.gov (United States)

    Kim, Daejin; Guk, Hyein; Choi, Seung-Hoon; Chung, Dong Hyen

    2017-08-01

    Electrolyte solvents play an important role in lithium-ion batteries. Hence, investigation of the solvent is key to improving battery functionality. We performed benchmark calculations to suggest the best conditions for rapid screening of electrolyte candidates using semi-empirical (SEM) calculations and density functional theory (DFT). A wide selection of Hamiltonians, DFT levels, and basis sets were used for this benchmarking with typical electrolyte solvents. The most efficient condition for reducing computational costs and time is VWN/DNP+ for DFT levels and PM3 for SEM Hamiltonians.

  8. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduces a performance evaluation approach for computer communication systems based on simulation and measurement technology, and discusses its evaluation models. Our experiments show that the results of practical measurements on an Ethernet LAN agree well with the theoretical analysis. The approach presented here can be used to define various kinds of artificially simulated load models conveniently, to build a wide range of network application environments in a flexible way, and to exploit fully both the generality and high precision of traditional simulation technology and the realism, reliability, and adaptability of measurement technology.

  9. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  10. Benchmarking of Computational Models for NDE and SHM of Composites

    Science.gov (United States)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  11. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
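
    As a rough illustration of the working-set effects such microbenchmarks probe (interpreter and NumPy overheads make this a qualitative demo only, not a substitute for the tuned multi-threaded suite described above), the sketch below streams repeatedly over buffers of growing size and reports the per-element time, which rises as the working set falls out of successive cache levels.

```python
# Minimal working-set-size sweep in the spirit of memory-hierarchy
# microbenchmarks: repeatedly stream over buffers of growing size and report
# the per-element time.  Buffer sizes are illustrative choices.
import time
import numpy as np

def per_element_ns(n_bytes: int, repeats: int = 20) -> float:
    buf = np.ones(n_bytes // 8, dtype=np.float64)   # 8-byte elements
    buf.sum()                                       # warm the cache once
    t0 = time.perf_counter()
    for _ in range(repeats):
        buf.sum()                                   # stream through the buffer
    dt = time.perf_counter() - t0
    return dt / (repeats * buf.size) * 1e9

if __name__ == "__main__":
    for kib in (32, 256, 2048, 16384, 131072):      # spans typical L1..DRAM working sets
        print(f"{kib:>7} KiB : {per_element_ns(kib * 1024):6.2f} ns/element")
```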

  12. Creation of a Full-Core HTR Benchmark with the Fort St. Vrain Initial Core and Assessment of Uncertainties in the FSV Fuel Composition and Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Martin, William R.; Lee, John C.; Baxter, Alan; Wemple, Chuck

    2012-03-31

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power-law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single kernel diameter for each of the four particle types, preserving the known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was
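
    A minimal sketch of one ingredient mentioned above, namely sampling kernel diameters from a truncated Gaussian while preserving a target total kernel volume (i.e. the fuel loading). The distribution parameters and the target volume are illustrative placeholders; this is not the report's actual sampling procedure.

```python
# Minimal sketch (not the report's actual procedure) of drawing fuel-kernel
# diameters from a truncated Gaussian until a target total kernel volume,
# i.e. the fuel loading, is reached.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def sample_kernels(target_volume_mm3: float,
                   mean_d_um: float = 350.0,
                   sigma_d_um: float = 30.0,
                   d_min_um: float = 250.0,
                   d_max_um: float = 450.0) -> np.ndarray:
    """Draw kernel diameters until their summed volume reaches the target."""
    diameters = []
    total = 0.0
    while total < target_volume_mm3:
        d = rng.normal(mean_d_um, sigma_d_um)
        if not (d_min_um <= d <= d_max_um):       # reject outside truncation window
            continue
        total += (np.pi / 6.0) * (d * 1e-3) ** 3  # sphere volume in mm^3
        diameters.append(d)
    return np.asarray(diameters)

if __name__ == "__main__":
    d = sample_kernels(target_volume_mm3=50.0)
    print(f"{d.size} kernels, mean diameter {d.mean():.1f} um")
```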

  13. A computer scientist’s evaluation of publically available hardware Trojan benchmarks

    OpenAIRE

    Slayback, Scott M.

    2015-01-01

    Approved for public release; distribution is unlimited Dr. Hassan Salmani and Dr. Mohammed Tehranipoor have developed a collection of publically available hardware Trojans, meant to be used as common benchmarks for the analysis of detection and mitigation techniques. In this thesis, we evaluate a selection of these Trojans from the perspective of a computer scientist with limited electrical engineering background. Note that this thesis is also intended to serve as a supplement to the exist...

  14. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  15. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  16. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to that of a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
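
    For orientation (a generic cylindrical-harmonic expansion, not the paper's specific derivation for the cut-out sector), modal formulations of 2D exterior acoustics expand solutions of the Helmholtz equation around a cylinder of radius a as

    \[ p(r,\varphi) = \sum_{n=-\infty}^{\infty} \left[ a_n J_n(kr) + b_n H_n^{(1)}(kr) \right] e^{\mathrm{i} n \varphi}, \qquad r \ge a, \]

    where J_n and H_n^{(1)} are Bessel and Hankel functions, k is the wavenumber, and the coefficients are fixed by the sources and the boundary conditions on the cylinder; truncating the sum at a finite modal order is what keeps the computational cost of the 2D problem comparable to that of a 1D problem.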

  17. Mars/master coupled system calculation of the OECD MSLB benchmark exercise 3 with refined core thermal-hydraulic nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, J.J.; Joo, H.G.; Cho, B.O.; Zee, S.Q.; Lee, W.J. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2001-07-01

    To assess the performance of the KAERI coupled multi-dimensional system thermal-hydraulics (T/H) and three-dimensional (3-D) kinetics code, MARS/MASTER, Exercise III of the OECD main steam line break benchmark problem is solved. The coupled code is capable of employing an individual flow channel for each fuel assembly as well as lumped ones. The basic analysis model of the reference plant consists of four major components: a 3-D core neutronics model, a 3-D thermal-hydraulic model for the reactor vessel employing lumped flow channels, a refined core T/H model and a 1-D T/H model for the coolant system. Calculations were performed with and without the refined core T/H model. The results of the basic calculation performed without the refined core T/H model show that the core power distribution evolves to a highly localized shape due to the presence of a stuck rod, as well as asymmetric flow distribution in the reactor core. The results of the refined core T/H model indicate that the local peaking factor can be reduced by as much as 22 % through accurate representation of the local T/H feedback effects. Nonetheless, the global transient behaviors are not significantly affected. (author)

  18. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in general-purpose computing and are being considered as a way to add an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment: it consists of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can guide both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  19. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  20. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Full Text Available Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet, very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources. This is an important concern, given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria that determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests that will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  2. Benchmarking Computational Fluid Dynamics Models for Application to Lava Flow Simulations and Hazard Assessment

    Science.gov (United States)

    Dietterich, H. R.; Lev, E.; Chen, J.; Cashman, K. V.; Honor, C.

    2015-12-01

    Recent eruptions in Hawai'i, Iceland, and Cape Verde highlight the need for improved lava flow models for forecasting and hazard assessment. Existing models used for lava flow simulation range in assumptions, complexity, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess the capabilities of existing models and test the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flows, including VolcFlow, OpenFOAM, Flow3D, and COMSOL. Using new benchmark scenarios defined in Cordonnier et al. (2015) as a guide, we model Newtonian, Herschel-Bulkley and cooling flows over inclined planes, obstacles, and digital elevation models with a wide range of source conditions. Results are compared to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection. We apply the best-fit codes to simulate the lava flows in Harrat Rahat, a predominantly mafic volcanic field in Saudi Arabia. Input parameters are assembled from rheology and volume measurements of past flows using geochemistry, crystallinity, and present-day lidar and photogrammetric digital elevation models. With these data, we use our verified models to reconstruct historic and prehistoric events, in order to assess the hazards posed by lava flows for Harrat Rahat.

  3. Paper- and computer-based workarounds to electronic health record use at three benchmark institutions

    Science.gov (United States)

    Flanagan, Mindy E; Saleem, Jason J; Millitello, Laura G; Russ, Alissa L; Doebbeling, Bradley N

    2013-01-01

    Background Healthcare professionals develop workarounds rather than using electronic health record (EHR) systems. Understanding the reasons for workarounds is important to facilitate user-centered design and alignment between work context and available health information technology tools. Objective To examine both paper- and computer-based workarounds to the use of EHR systems in three benchmark institutions. Methods Qualitative data were collected in 11 primary care outpatient clinics across three healthcare institutions. Data collection methods included direct observation and opportunistic questions. In total, 120 clinic staff and providers and 118 patients were observed. All data were analyzed using previously developed workaround categories and examined for potential new categories. Additionally, workarounds were coded as either paper- or computer-based. Results Findings corresponded to 10 of 11 workaround categories identified in previous research. All 10 of these categories applied to paper-based workarounds; five categories also applied to computer-based workarounds. One new category, no correct path (eg, a desired option did not exist in the computer interface, precipitating a workaround), was identified for computer-based workarounds. The most consistent reasons for workarounds across the three institutions were efficiency, memory, and awareness. Conclusions Consistent workarounds across institutions suggest common challenges in outpatient clinical settings and failures to accommodate these challenges in EHR design. An examination of workarounds provides insight into how providers adapt to limiting EHR systems. Part of the design process for computer interfaces should include user-centered methods particular to providers and healthcare settings to ensure uptake and usability. PMID:23492593

  4. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High End Computing program serves. It includes a discussion of the workload involved in processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system, and results for these tests are also shown.

  5. Multiple core computer processor with globally-accessible local memories

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John; Donofrio, David; Oliker, Leonid

    2016-09-20

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  6. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  7. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

    2011-01-01

    The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

  8. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for the simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of the CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
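
    A minimal numeric sketch (a toy two-level model, not the paper's simulated quantum register or the CH(2) Hamiltonian) of why the initial state of the register matters: for an ideal phase estimation run, the probability of reading out the ground-state energy equals the squared overlap between the trial state and the exact ground state. The matrix entries below are arbitrary placeholders.

```python
# Toy illustration of initial-state overlap in phase estimation: the chance of
# projecting onto (and thus reading out the energy of) the ground state equals
# |<ground|trial>|^2.  The 2x2 "Hamiltonian" is an arbitrary placeholder.
import numpy as np

H = np.array([[-1.00, 0.25],
              [ 0.25, -0.40]])          # hypothetical two-determinant Hamiltonian

evals, evecs = np.linalg.eigh(H)        # eigh returns eigenvalues in ascending order
ground_energy, ground_state = evals[0], evecs[:, 0]

trial = np.array([1.0, 0.0])            # e.g. a single-reference (HF-like) guess
p_success = abs(ground_state @ trial) ** 2

print(f"exact ground-state energy : {ground_energy:.4f}")
print(f"P(measure ground energy)  : {p_success:.3f}")
```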

  9. Computational fluid dynamics (CFD) round robin benchmark for a pressurized water reactor (PWR) rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Shin K., E-mail: paengki1@tamu.edu; Hassan, Yassin A.

    2016-05-15

    Highlights: • The capabilities of steady RANS models were directly assessed for the full axial scale experiment. • The importance of mesh and conjugate heat transfer was reaffirmed. • The rod inner-surface temperature was directly compared. • The steady RANS calculations showed a limitation in the prediction of the circumferential distribution of the rod surface temperature. - Abstract: This study examined the capabilities and limitations of the steady Reynolds-Averaged Navier–Stokes (RANS) approach for pressurized water reactor (PWR) rod bundle problems, based on the round robin benchmark of computational fluid dynamics (CFD) codes against the NESTOR experiment for a 5 × 5 rod bundle with typical split-type mixing vane grids (MVGs). The round robin exercise against the high-fidelity, broad-range (covering multi-spans and entire lateral domain) NESTOR experimental data for both the flow field and the rod temperatures enabled us to obtain important insights into CFD prediction and validation for the split-type MVG PWR rod bundle problem. It was found that the steady RANS turbulence models with wall function could reasonably predict two key variables for a rod bundle problem – grid span pressure loss and the rod surface temperature – once the mesh (type, resolution, and configuration) was suitable and conjugate heat transfer was properly considered. However, they over-predicted the magnitude of the circumferential variation of the rod surface temperature and could not capture its peak azimuthal locations for a central rod in the wake of the MVG. These discrepancies in the rod surface temperature were probably because the steady RANS approach could not capture unsteady, large-scale cross-flow fluctuations and the qualitative cross-flow pattern change due to the laterally confined test section. Based on this benchmarking study, lessons and recommendations about experimental methods as well as CFD methods were also provided for future research.

  10. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation o...
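
    As a concrete illustration of the operator described above (a plain sequential sketch, not the multi-core algorithms of the paper), the following computes a skyline by pairwise dominance checks; smaller values are assumed better in every dimension.

```python
# Minimal block-nested-loops-style skyline: keep the points not dominated by
# any other point, where "dominates" means at least as good in every dimension
# and strictly better in at least one (here, smaller is better).
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points: List[Sequence[float]]) -> List[Sequence[float]]:
    result = []
    for p in points:
        if any(dominates(q, p) for q in points if q is not p):
            continue                      # p is dominated, drop it
        result.append(p)
    return result

if __name__ == "__main__":
    # e.g. (price, distance): keep only the optimal trade-offs
    hotels = [(50, 8.0), (60, 2.0), (80, 1.0), (90, 1.5), (55, 7.5)]
    print(skyline(hotels))                # keeps (50, 8.0), (60, 2.0), (80, 1.0), (55, 7.5)
```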

  11. Test Anxiety, Computer-Adaptive Testing and the Common Core

    Science.gov (United States)

    Colwell, Nicole Makas

    2013-01-01

    This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…

  12. Computer simulation of hard-core models for liquid crystals

    NARCIS (Netherlands)

    Frenkel, D.

    1987-01-01

    A review is presented of computer simulations of liquid crystal systems. It will be shown that the shape of hard-core particles is of crucial importance for the stability of the phases. Both static and dynamic properties of the systems are obtained by means of computer simulation.

  13. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still being developed; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using BIGHORN.

  14. Benchmarking a multiresolution discontinuous Galerkin shallow water model: Implications for computational hydraulics

    Science.gov (United States)

    Caviedes-Voullième, Daniel; Kesserwani, Georges

    2015-12-01

    Numerical modelling of wide ranges of different physical scales, which are involved in Shallow Water (SW) problems, has been a key challenge in computational hydraulics. Adaptive meshing techniques have been commonly coupled with numerical methods in an attempt to address this challenge. The combination of MultiWavelets (MW) with the Runge-Kutta Discontinuous Galerkin (RKDG) method offers a new philosophy to readily achieve mesh adaptivity driven by the local variability of the numerical solution, and without requiring more than one threshold value set by the user. However, the practical merits and implications of the MWRKDG, in terms of how far it contributes to address the key challenge above, are yet to be explored. This work systematically explores this, through the verification and validation of the MWRKDG for selected steady and transient benchmark tests, which involves the features of real SW problems. Our findings reveal a practical promise of the SW-MWRKDG solver, in terms of efficient and accurate mesh-adaptivity, but also suggest further improvement in the SW-RKDG reference scheme to better intertwine with, and harness the prowess of, the MW-based adaptivity.
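
    For orientation (a generic statement of the governing equations, not the particular discretized system or test cases of the paper), the one-dimensional shallow water equations that RKDG-type solvers discretize can be written in conservative form as

    \[ \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0, \qquad \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left( h u^2 + \tfrac{1}{2} g h^2 \right) = -g h \frac{\partial z_b}{\partial x}, \]

    where h is the water depth, u the depth-averaged velocity, g the gravitational acceleration, and z_b the bed elevation; the local variability of h and hu is what drives the threshold-based, cell-by-cell adaptivity described above.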

  15. TiD - Introducing and Benchmarking an Event-Delivery System for Brain-Computer Interfaces.

    Science.gov (United States)

    Breitwieser, Christian; Tavella, Michele; Schreuder, Martijn; Cincotti, Febo; Leeb, Robert; Muller-Putz, Gernot R

    2017-07-18

    In this paper, we present and analyze an event distribution system for brain-computer interfaces (BCIs). Events are commonly used to mark and describe incidents during an experiment and are therefore critical for later data analysis or immediate real-time processing. The presented approach, called TiD (Tools for brain-computer interaction - interface D), delivers messages in XML format via a bus-like system using TCP (transmission control protocol) connections or shared memory. A dedicated server dispatches TiD messages to distributed or local clients. The TiD message is designed to be flexible and contains time stamps for event synchronization, whereas events describe incidents which occur during an experiment. TiD was tested extensively for stability and latency. The effect of event jitter was analyzed and benchmarked on a reference implementation under different conditions, such as GBit and 100 MBit Ethernet or WiFi, with different numbers of event receivers. A 3 dB signal attenuation, which occurs when averaging jitter-influenced trials aligned by events, starts to become visible at around 1-2 kHz in the case of a GBit connection. Mean event distribution times across operating systems range from 0.3 ms to 0.5 ms for a GBit network connection for 106 events. Results for other environmental conditions are available in the paper. References already using TiD for event distribution are provided, showing the applicability of TiD for event delivery with distributed or local clients.
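
    To make the message-over-TCP idea concrete, here is a minimal loopback sketch: a client frames a small XML event with a timestamp and ships it to a listening dispatcher. The element and attribute names are illustrative placeholders and do not reproduce the actual TiD wire format or schema.

```python
# Minimal loopback sketch of delivering a timestamped XML event over TCP.
# The XML layout is illustrative only and is NOT the actual TiD schema.
import socket
import threading
import time
import xml.etree.ElementTree as ET

HOST, PORT = "127.0.0.1", 9500                     # hypothetical local dispatcher address

def dispatcher():
    """Accept one connection and print the event it receives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("dispatcher received:", conn.recv(4096).decode())

def send_event(description: str):
    event = ET.Element("event", attrib={
        "family": "biosig",                        # illustrative attributes only
        "timestamp": str(time.time()),
    })
    event.text = description
    payload = ET.tostring(event, encoding="utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(payload)

if __name__ == "__main__":
    t = threading.Thread(target=dispatcher, daemon=True)
    t.start()
    time.sleep(0.2)                                # give the listener time to bind
    send_event("trial start")
    t.join(timeout=2.0)
```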

  16. Multi-core: Adding a New Dimension to Computing

    CERN Document Server

    Amin, Md Tanvir Al

    2010-01-01

    The invention of the transistor in 1948 started a new era in technology, called solid-state electronics. Since then, sustained development and advancement in electronics and fabrication techniques have caused devices to shrink steadily in size, driving the quest for increasing density and clock speed. That quest has suddenly come to a halt due to fundamental bounds imposed by physical laws. Yet demand for more and more computational power is still prevalent in the computing world. As a result, the microprocessor industry has started exploring the technology along a different dimension. The speed of a single work unit (CPU) is no longer the focus; instead, increasing the number of independent processor cores packed into a single package has become the new concern. Such processors are commonly known as multi-core processors. Scaling performance by using multiple cores has gained so much attention from academia and industry that not only desktops, but also laptops, PDAs, cell phones and even embedd...

  17. Massive Computation for Understanding Core-Collapse Supernova Explosions

    CERN Document Server

    Ott, Christian D

    2016-01-01

    How do massive stars explode? Progress toward the answer is driven by increases in compute power. Petascale supercomputers are enabling detailed three-dimensional simulations of core-collapse supernovae. These are elucidating the role of fluid instabilities, turbulence, and magnetic field amplification in supernova engines.

  18. Computational Model for the Neutronic Simulation of Pebble Bed Reactor’s Core Using MCNPX

    Directory of Open Access Journals (Sweden)

    J. Rosales

    2014-01-01

    Full Text Available Very high temperature reactor (VHTR) designs offer promising performance characteristics; they can provide sustainable energy, improved proliferation resistance, inherent safety, and high temperature heat supply. These designs also promise operation to high burnup and large margins to fuel failure, with excellent fission product retention via the TRISO fuel design. The pebble bed reactor (PBR) is a gas-cooled high-temperature reactor design and a candidate for Generation IV of Nuclear Energy Systems. This paper describes the features of a detailed geometric computational model for PBR whole-core analysis using the MCNPX code. The validation of the model was carried out using the HTR-10 benchmark. Results were compared with experimental data and calculations of other authors. In addition, a sensitivity analysis was performed on several parameters that could have influenced the results and the accuracy of the model.

  19. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy product included in the tool, with each option differentiated based on specific detail level of process or plant, i.e., 1) plant level; 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon the comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free downloads from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy from the LBNL website. It is expected that the use of BEST-Dairy tool will advance understanding of energy and

  20. An FPGA computing demo core for space charge simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. The temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
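
    A software analogue of the look-up-table trick described above (illustrative only: the table size, range, and units are arbitrary choices, and the FPGA design itself works in fixed point rather than NumPy): the 1/r^3 factor of the pairwise Coulomb force is read from a precomputed table indexed by the most significant bits of r^2, instead of being evaluated with a square root and a division per pair.

```python
# Pairwise Coulomb force with the (r^2)^(-3/2) = 1/r^3 factor taken from a
# precomputed lookup table indexed by r^2, mimicking the table-based inverse
# square-root cube described above.  Resolution and range are illustrative.
import numpy as np

TABLE_BITS = 10
R2_MAX = 4.0                                   # assumed maximum r^2 (arbitrary units)
_r2_grid = np.linspace(R2_MAX / (1 << TABLE_BITS), R2_MAX, 1 << TABLE_BITS)
INV_R3_TABLE = _r2_grid ** -1.5                # precomputed 1/r^3 values

def inv_r3_lookup(r2: float) -> float:
    idx = int(np.clip(r2 / R2_MAX * (1 << TABLE_BITS), 0, (1 << TABLE_BITS) - 1))
    return float(INV_R3_TABLE[idx])

def coulomb_force(pos_i: np.ndarray, pos_j: np.ndarray, q_i: float, q_j: float):
    """F_ij = q_i * q_j * (r_i - r_j) / |r_i - r_j|^3 (Gaussian-style units)."""
    d = pos_i - pos_j
    r2 = float(np.dot(d, d))
    return q_i * q_j * d * inv_r3_lookup(r2)

if __name__ == "__main__":
    f = coulomb_force(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]), 1.0, 1.0)
    print("table-based force:", f, " exact: [1. 0. 0.]")   # small table error expected
```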

  1. Polytopol computing for multi-core and distributed systems

    Science.gov (United States)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, that takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected to such hardware that a system function looks as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation), they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, and give the network the ability to organize itself into some of many topologies. Finally we will discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  2. Multi-block/multi-core SSOR preconditioner for the QCD quark solver for K computer

    CERN Document Server

    Boku, T; Kuramashi, Y; Minami, K; Nakamura, Y; Shoji, F; Takahashi, D; Terai, M; Ukawa, A; Yoshie, T

    2012-01-01

    We study the algorithmic optimization and performance tuning of the Lattice QCD clover-fermion solver for the K computer. We implement Lüscher's SAP preconditioner with sub-blocking, in which the lattice block in a node is further divided into several sub-blocks to extract enough parallelism for the 8-core SPARC64 VIIIfx CPU of the K computer. To achieve a better convergence property we use the symmetric successive over-relaxation (SSOR) iteration with locally-lexicographical ordering for the sub-blocks in obtaining the block inverse. The SAP preconditioner is included in the single-precision BiCGStab solver of the nested BiCGStab solver. The single-precision part of the computational kernel is written solely with SIMD-oriented intrinsics to achieve the best performance of the SPARC64 VIIIfx on the K computer. We benchmark the single-precision BiCGStab solver on three lattice sizes, 12^3 x 24, 24^3 x 48 and 48^3 x 96, fixing the local lattice size in a node at ...
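
    For readers unfamiliar with the preconditioner family mentioned above, the sketch below applies a textbook SSOR preconditioner M^{-1}v to a small dense test matrix. It illustrates only the generic forward/backward sweep structure, not the sub-blocked, locally-lexicographical, mixed-precision variant implemented for the K computer.

```python
# Generic SSOR preconditioner application z = M^{-1} v for a small dense SPD
# matrix, with M = 1/(omega*(2-omega)) * (D + omega*L) * D^{-1} * (D + omega*U),
# where A = D + L + U (diagonal, strictly lower, strictly upper parts).
import numpy as np
from scipy.linalg import solve_triangular

def ssor_apply(A: np.ndarray, v: np.ndarray, omega: float = 1.0) -> np.ndarray:
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    w = solve_triangular(D + omega * L, omega * (2.0 - omega) * v, lower=True)   # forward sweep
    z = solve_triangular(D + omega * U, D @ w, lower=False)                      # backward sweep
    return z

if __name__ == "__main__":
    n = 8
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian test matrix
    v = np.ones(n)
    z = ssor_apply(A, v, omega=1.2)
    # M only approximates A, so the residual is reduced but not zero.
    print("||A z - v|| =", np.linalg.norm(A @ z - v), " ||v|| =", np.linalg.norm(v))
```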

  3. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute highly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA) to address the challenges not just of data preservation, but of evaluating the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be used to help nuclear design and safety analysts validate the analytical tools, methods, and data needed for next

  4. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Computational Astrophysics at the Bleeding Edge: Simulating Core Collapse Supernovae

    Science.gov (United States)

    Mezzacappa, Anthony

    2013-04-01

    Core collapse supernovae are the single most important source of elements in the Universe, dominating the production of elements between oxygen and iron and likely responsible for half the elements heavier than iron. They result from the death throes of massive stars, beginning with stellar core collapse and the formation of a supernova shock wave that must ultimately disrupt such stars. Past first-principles models most often led to the frustrating conclusion that the shock wave stalls and is not revived, at least given the physics included in the models. However, recent progress in the context of two-dimensional, first-principles supernova models is reversing this trend, giving us hope that we are on the right track toward a solution of one of the most important problems in astrophysics. Core collapse supernovae are multi-physics events, involving general relativity, hydrodynamics and magnetohydrodynamics, nuclear burning, and radiation transport in the form of neutrinos, along with a detailed nuclear physics equation of state and neutrino weak interactions. Computationally, simulating these catastrophic stellar events presents an exascale computing challenge. I will discuss past models and milestones in core collapse supernova theory, the state of the art, and future requirements. In this context, I will present the results and plans of the collaboration led by ORNL and the University of Tennessee.

  6. Computational Models of Stellar Collapse and Core-Collapse Supernovae

    CERN Document Server

    Ott, C D; Burrows, A; Livne, E; O'Connor, E; Löffler, F

    2009-01-01

    Core-collapse supernovae are among Nature's most energetic events. They mark the end of massive star evolution and pollute the interstellar medium with the life-enabling ashes of thermonuclear burning. Despite their importance for the evolution of galaxies and life in the universe, the details of the core-collapse supernova explosion mechanism remain in the dark and pose a daunting computational challenge. We outline the multi-dimensional, multi-scale, and multi-physics nature of the core-collapse supernova problem and discuss computational strategies and requirements for its solution. Specifically, we highlight the axisymmetric (2D) radiation-MHD code VULCAN/2D and present results obtained from the first full-2D angle-dependent neutrino radiation-hydrodynamics simulations of the post-core-bounce supernova evolution. We then go on to discuss the new code Zelmani which is based on the open-source HPC Cactus framework and provides a scalable AMR approach for 3D fully general-relativistic modeling of stellar col...

  7. Computation system for nuclear reactor core analysis. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system of computer codes, organized as modules, developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code, treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules; the individual modules are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.

  8. 3D computer visualization and animation of CANDU reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Qian, T.; Echlin, M.; Tonner, P.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    1999-07-01

    Three-dimensional (3D) computer visualization and animation models of typical CANDU reactor cores (Darlington, Point Lepreau) have been developed using world-wide-web (WWW) browser based tools: JavaScript, hyper-text-markup language (HTML) and virtual reality modeling language (VRML). The 3D models provide three-dimensional views of internal control and monitoring structures in the reactor core, such as fuel channels, flux detectors, liquid zone controllers, zone boundaries, shutoff rods, poison injection tubes, ion chambers. Animations have been developed based on real in-core flux detector responses and rod position data from reactor shutdown. The animations show flux changing inside the reactor core with the drop of shutoff rods and/or the injection of liquid poison. The 3D models also provide hypertext links to documents giving specifications and historical data for particular components. Data in HTML format (or other format such as PDF, etc.) can be shown in text, tables, plots, drawings, etc., and further links to other sources of data can also be embedded. This paper summarizes the use of these WWW browser based tools, and describes the resulting 3D reactor core static and dynamic models. Potential applications of the models are discussed. (author)

  9. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2016-06-08

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit effectively contemporary multi-core hardware. We extend study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC simply achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of
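
    As an illustrative aside (not part of the cited study), the scatter/compact/balanced thread-affinity comparison described above can be reproduced in spirit with a small harness that sets the Intel OpenMP runtime's KMP_AFFINITY environment variable before each run. The executable name and thread count below are placeholders; this is a minimal sketch, assuming an Intel OpenMP-based benchmark binary is available.

```python
import os
import subprocess
import time

# Hypothetical flux-kernel executable; substitute the actual benchmark binary.
BINARY = "./flux_kernel_bench"

def run_with_affinity(affinity: str, threads: int) -> float:
    """Run the benchmark once under a given KMP_AFFINITY setting and return wall time."""
    env = dict(os.environ)
    env["OMP_NUM_THREADS"] = str(threads)
    # 'balanced' is specific to Intel Xeon Phi; 'compact' and 'scatter' work on most Intel OpenMP runtimes.
    env["KMP_AFFINITY"] = affinity
    start = time.perf_counter()
    subprocess.run([BINARY], env=env, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for mode in ("scatter", "compact", "balanced"):
        elapsed = run_with_affinity(mode, threads=240)  # illustrative: 60 cores x 4 threads in native mode
        print(f"{mode:>8}: {elapsed:.2f} s")
```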

  10. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    Science.gov (United States)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured, and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, it was unclear how best to simulate the effects of the experimental transition strip and the associated high Reynolds number effects. Furthermore, capturing the spanwise variation proved difficult.

  11. CATIA Core Tools Computer Aided Three-Dimensional Interactive Application

    CERN Document Server

    Michaud, Michel

    2012-01-01

    CATIA Core Tools: Computer-Aided Three-Dimensional Interactive Application explains how to use the essential features of this cutting-edge solution for product design and innovation. The book begins with the basics, such as launching the software, configuring the settings, and managing files. Next, you'll learn about sketching, modeling, drafting, and visualization tools and techniques. Easy-to-follow instructions along with detailed illustrations and screenshots help you get started using several CATIA workbenches right away. Reverse engineering--a valuable product development skill--is also covered in this practical resource.

  12. Experimental studies and computational benchmark on heavy liquid metal natural circulation in a full height-scale test loop for small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Yong-Hoon, E-mail: chaotics@snu.ac.kr [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Jaehyun [Korea Atomic Energy Research Institute, 111 Daedeok-daero, 989 Beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jueun; Ju, Heejae; Sohn, Sungjune; Kim, Yeji; Noh, Hyunyub; Hwang, Il Soon [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of)

    2017-05-15

    Highlights: • Experimental studies on natural circulation for lead-bismuth eutectic were conducted. • Adiabatic wall boundary conditions were established by compensating heat loss. • A computational benchmark with a system thermal-hydraulics code was performed. • Numerical simulation and experiment showed good agreement in mass flow rate. • An empirical relation was formulated for mass flow rate with experimental data. - Abstract: In order to test the enhanced safety of small lead-cooled fast reactors, lead-bismuth eutectic (LBE) natural circulation characteristics have been studied. We present results of experiments with LBE non-isothermal natural circulation in a full-height scale test loop, HELIOS (heavy eutectic liquid metal loop for integral test of operability and safety of PEACER), and the validation of a system thermal-hydraulics code. The experimental studies on LBE were conducted under steady state as a function of core power from 9.8 kW to 33.6 kW. Local surface heaters on the main loop were activated and finely tuned by a trial-and-error approach to establish adiabatic wall boundary conditions. The thermal-hydraulic system code MARS-LBE was validated using the well-defined benchmark data. The predictions were mostly in good agreement with the experimental data in terms of mass flow rate and temperature difference, both of which agreed within 7%. From the experimental results, an empirical relation predicting the mass flow rate under non-isothermal, adiabatic conditions in HELIOS was derived.

  13. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
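
    To make the evaluation idea concrete, the sketch below computes the same precision/recall/F1 arithmetic that the infrastructure's SPARQL queries are described as providing, here over hypothetical sets of predicted and gold-standard mutation mentions. It is an illustration of the metrics only, not of the RDF/OWL tooling itself.

```python
def precision_recall_f1(predicted: set, gold: set) -> tuple:
    """Compute precision, recall and F1 for a set of predicted annotations
    against a manually curated gold standard."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Hypothetical mutation mentions, keyed by (document id, normalized mutation).
gold = {("PMC1001", "p.V600E"), ("PMC1001", "c.35G>A"), ("PMC1002", "p.R175H")}
predicted = {("PMC1001", "p.V600E"), ("PMC1002", "p.R175H"), ("PMC1003", "p.G12D")}

print(precision_recall_f1(predicted, gold))  # (0.666..., 0.666..., 0.666...)
```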

  14. Web Server Benchmark Application WiiBench using Erlang/OTP R11 and Fedora-Core Linux 5.0

    CERN Document Server

    Mutiara, A B

    2007-01-01

    As the web grows and the amount of traffic on web servers increases, performance-related problems begin to appear. These include the number of users that can access the server simultaneously, the number of requests that can be handled by the server per second (requests per second), bandwidth consumption, and hardware utilization such as memory and CPU. To provide better quality of service (QoS), web hosting providers, as well as the system administrators and network administrators who manage the servers, need a benchmark application to measure the capabilities of their servers. The application is intended to work on Linux/Unix-like platforms and is built using Erlang/OTP R11 as a concurrency-oriented language under Fedora Core Linux 5.0. WiiBench is divided into two main parts, the controller section and the launcher section. The controller is the core of the application. It has several duties, such as reading the benchmark scenario file, configuring the program b...

  15. Benchmark study of UV/Visible spectra of coumarin derivatives by computational approach

    Science.gov (United States)

    Irfan, Muhammad; Iqbal, Javed; Eliasson, Bertil; Ayub, Khurshid; Rana, Usman Ali; Ud-Din Khan, Salah

    2017-02-01

    A benchmark study of UV/Visible spectra of simple coumarin and furanocoumarin derivatives was conducted by employing the Density Functional Theory (DFT) and Time Dependent Density Functional Theory (TD-DFT) approaches. In this study the geometries of ground and excited states, excitation energies and absorption spectra were estimated using the DFT functionals CAM-B3LYP, WB97XD, HSEH1PBE, and MPW1PW91 and TD-B3LYP with the 6-31+G(d,p) basis set. The CAM-B3LYP functional was found to be in close agreement with the experimental values for the furanocoumarin class of coumarins, while MPW1PW91 gave close results for simple coumarins. This study provided insight into the electronic characteristics of the selected compounds and an effective tool for developing and designing better UV-absorbing compounds.

  16. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  17. A Computer Scientist’s Evaluation of Publically Available Hardware Trojan Benchmarks

    Science.gov (United States)

    2015-09-01

    Keywords: design for trust, hardware intellectual property cores, Hardware Oriented Security and Trust, hardware synthesis, hardware Trojans, HDL.

  18. Computational modeling for hexcan failure under core disruptive accidental conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, T.; Ninokata, H.; Shimizu, A. [Tokyo Institute of Technology (Japan)

    1995-09-01

    This paper describes the development of computational modeling for hexcan wall failures under core disruptive accident conditions of fast breeder reactors. A series of out-of-pile experiments named SIMBATH has been analyzed by using the SIMMER-II code. The SIMBATH experiments were performed at KfK in Germany. The experiments used a thermite mixture to simulate fuel. The test geometry of SIMBATH ranged from single pin to 37-pin bundles. In this study, phenomena of hexcan wall failure found in a SIMBATH test were analyzed by SIMMER-II. Although the original model of SIMMER-II did not calculate any hexcan failure, several simple modifications made it possible to reproduce the hexcan wall melt-through observed in the experiment. In this paper the modifications and their significance are discussed for further modeling improvements.

  19. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  20. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  1. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira;

    2015-01-01

    The algorithm uses an efficiently-updatable data structure over the shared, global skyline, based on point-based partitioning, which is used to minimize dominance tests while maintaining high throughput. Also, we release a large benchmark of optimized skyline algorithms, with which we demonstrate...
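
    For readers unfamiliar with the skyline operator, the following minimal, single-threaded sketch shows the underlying dominance test that the parallel algorithm above is designed to prune; the data points are made up, and none of the paper's partitioning or shared-structure machinery is reproduced here.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in
    at least one (here 'better' means smaller)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep every point not dominated by any other point."""
    result = []
    for p in points:
        if not any(dominates(q, p) for q in points if q is not p):
            result.append(p)
    return result

data = [(1, 9), (3, 3), (2, 8), (4, 4), (9, 1), (5, 5)]
print(skyline(data))  # [(1, 9), (3, 3), (2, 8), (9, 1)]
```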

  2. A New Method for Out-of-core Applications on Computational Grids

    Institute of Scientific and Technical Information of China (English)

    Tang Jianqi(唐剑琪); Fang Binxing; Hu Mingzeng

    2003-01-01

    More and more out-of-core problems, which involve processing amounts of data too large to fit in main memory, are being studied by scientists. The computational grid provides a wide and scalable environment for such large-scale computations. A new method supporting out-of-core computations on grids is presented in this paper. The framework and the data storage strategy are described, based on which an easy and efficient out-of-core programming interface is provided for the programmers.
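
    As a minimal illustration of the out-of-core idea (not of the paper's grid framework), the sketch below processes a file-backed array in fixed-size chunks via numpy's memory mapping, so the data never has to fit in main memory at once; the file name and chunk size are arbitrary.

```python
import numpy as np

CHUNK = 1_000_000  # elements processed per pass; tune to available RAM

def out_of_core_sum(path: str, dtype=np.float64) -> float:
    """Sum a binary array on disk without ever loading it fully into memory."""
    data = np.memmap(path, dtype=dtype, mode="r")  # pages are faulted in on demand
    total = 0.0
    for start in range(0, data.shape[0], CHUNK):
        total += float(data[start:start + CHUNK].sum())
    return total

if __name__ == "__main__":
    # Create a small demo file; a real out-of-core run would point at a file
    # far larger than physical memory.
    demo = np.memmap("demo.bin", dtype=np.float64, mode="w+", shape=(5_000_000,))
    demo[:] = 1.0
    demo.flush()
    print(out_of_core_sum("demo.bin"))  # 5000000.0
```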

  3. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance with GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
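
    The "level-set expansion" kernel mentioned above is essentially breadth-first frontier expansion; the in-memory sketch below shows the idea on a toy edge list. The actual benchmark streams a scale-free graph too large for RAM, which this illustration does not attempt.

```python
from collections import defaultdict

def level_sets(edges, source):
    """Return the list of BFS level sets (frontiers) reachable from `source`.
    Level k contains all vertices at shortest-path distance k."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # treat the graph as undirected
    visited = {source}
    frontier = {source}
    levels = [frontier]
    while frontier:
        next_frontier = {w for u in frontier for w in adj[u] if w not in visited}
        if not next_frontier:
            break
        visited |= next_frontier
        levels.append(next_frontier)
        frontier = next_frontier
    return levels

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(level_sets(edges, 0))  # [{0}, {1, 2}, {3}, {4}]
```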

  4. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pecchia, M.; D' Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients from Chapter 15 of the Atucha-2 FSAR. (authors)

  5. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), or what in Indonesian is termed holistic quality management, because benchmarking is a tool for looking for ideas or learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  6. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficiently the company performs in comparison with top companies within the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  7. Multi-class computational evolution: development, benchmark evaluation and application to RNA-Seq biomarker discovery.

    Science.gov (United States)

    Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I

    2017-01-01

    A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary-class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of the selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage, since the accuracy of most classification methods suffers when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
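
    The feature-set stability criterion cited above can be quantified with the Tanimoto (Jaccard) distance between the feature subsets selected on different runs or folds. The sketch below uses hypothetical gene lists and is only an illustration of the metric, not of the CES itself.

```python
from itertools import combinations

def tanimoto_distance(a: set, b: set) -> float:
    """Tanimoto (Jaccard) distance between two selected feature sets:
    0 means identical selections, 1 means completely disjoint ones."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def mean_pairwise_distance(selections) -> float:
    """Average Tanimoto distance over all pairs of folds; lower is more stable."""
    pairs = list(combinations(selections, 2))
    return sum(tanimoto_distance(a, b) for a, b in pairs) / len(pairs)

# Hypothetical feature (gene) subsets selected on three folds.
folds = [{"BRCA1", "TP53", "EGFR"}, {"BRCA1", "TP53", "MYC"}, {"BRCA1", "EGFR", "MYC"}]
print(mean_pairwise_distance(folds))  # 0.5
```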

  8. Benchmark Results and Theoretical Treatments for Valence-to-Core X-ray Emission Spectroscopy in Transition Metal Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, Devon R.; Seidler, Gerald T.; Kas, Joshua J.; Govind, Niranjan; Schwartz, Craig; Pemmaraju, Das; Prendergast, David

    2017-09-20

    We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  9. Thermodynamic considerations and computer simulations on the formation of core-shell nanoparticles under electrochemical conditions.

    Science.gov (United States)

    Oviedo, O A; Leiva, E P M; Mariscal, M M

    2008-06-28

    We report on thermodynamic modeling and computer simulations on the electrochemical generation of metallic and bimetallic nanoparticles (NPs) by means of quenched molecular dynamics (QMD). The present results suggest that the spontaneous formation of core-shell NPs depends on several factors, i.e. size and shape of the core, chemical composition of the system, and under-/oversaturation conditions. Homo- and heteroatomic prototypical systems were considered. The former systems were Au and Pt. The latter were Ag(core)/Au(shell), Pt(core)/Au(shell), Au(core)/Ag(shell) and Au(core)/Pt(shell).

  10. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  11. Advanced computational methods for the assessment of reactor core behaviour during reactivity initiated accidents. Final report; Fortschrittliche Rechenmethoden zum Kernverhalten bei Reaktivitaetsstoerfaellen. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Pautz, A.; Perin, Y.; Pasichnyk, I.; Velkov, K.; Zwermann, W.; Seubert, A.; Klein, M.; Gallner, L.; Krzycacz-Hausmann, B.

    2012-05-15

    The document at hand serves as the final report for the reactor safety research project RS1183 ''Advanced Computational Methods for the Assessment of Reactor Core Behavior During Reactivity-Initiated Accidents''. The work performed in the framework of this project was dedicated to the development, validation and application of advanced computational methods for the simulation of transients and accidents of nuclear installations. These simulation tools describe in particular the behavior of the reactor core (with respect to neutronics, thermal-hydraulics and thermal mechanics) at a very high level of detail. The overall goal of this project was the deployment of a modern nuclear computational chain which provides, besides advanced 3D tools for coupled neutronics/ thermal-hydraulics full core calculations, also appropriate tools for the generation of multi-group cross sections and Monte Carlo models for the verification of the individual calculational steps. This computational chain shall primarily be deployed for light water reactors (LWR), but should beyond that also be applicable for innovative reactor concepts. Thus, validation on computational benchmarks and critical experiments was of paramount importance. Finally, appropriate methods for uncertainty and sensitivity analysis were to be integrated into the computational framework, in order to assess and quantify the uncertainties due to insufficient knowledge of data, as well as due to methodological aspects.

  12. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    Science.gov (United States)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in the gas core reactors is examined for a variety of power densities, 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  13. A computer program to determine the specific power of prismatic-core reactors

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.

    1987-05-01

    A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.

  14. Implicit Unstructured Computational Aerodynamics on Many-Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2014-05-04

    This research aims to understand the performance of PETSc-FUN3D, a fully nonlinear implicit unstructured grid incompressible or compressible Euler code with origins at NASA and the U.S. DOE, on many-integrated core architecture and how a hybrid programming paradigm (MPI+OpenMP) can exploit Intel Xeon Phi hardware with upwards of 60 cores per node and 4 threads per core. For the current contribution, we focus on strong scaling with many-integrated core hardware. In most implicit PDE-based codes, while the linear algebraic kernel is limited by the bottleneck of memory bandwidth, the flux kernel arising in control volume discretization of the conservation law residuals and the preconditioner for the Jacobian exploits the Phi hardware well.

  15. Benchmarking a computational design method for the incorporation of metal ion-binding sites at symmetric protein interfaces.

    Science.gov (United States)

    Hansen, William A; Khare, Sagar D

    2017-08-01

    The design of novel metal-ion binding sites along symmetric axes in protein oligomers could provide new avenues for metalloenzyme design, construction of protein-based nanomaterials and novel ion transport systems. Here, we describe a computational design method, symmetric protein recursive ion-cofactor sampling (SyPRIS), for locating constellations of backbone positions within oligomeric protein structures that are capable of supporting desired symmetrically coordinated metal ion(s) chelated by sidechains (chelant model). Using SyPRIS on a curated benchmark set of protein structures with symmetric metal binding sites, we found high recovery of native metal coordinating rotamers: in 65 of the 67 (97.0%) cases, native rotamers featured in the best scoring model while in the remaining cases native rotamers were found within the top three scoring models. In a second test, chelant models were crossmatched against protein structures with identical cyclic symmetry. In addition to recovering all native placements, 10.4% (8939/86013) of the non-native placements, had acceptable geometric compatibility scores. Discrimination between native and non-native metal site placements was further enhanced upon constrained energy minimization using the Rosetta energy function. Upon sequence design of the surrounding first-shell residues, we found further stabilization of native placements and a small but significant (1.7%) number of non-native placement-based sites with favorable Rosetta energies, indicating their designability in existing protein interfaces. The generality of the SyPRIS approach allows design of novel symmetric metal sites including with non-natural amino acid sidechains, and should enable the predictive incorporation of a variety of metal-containing cofactors at symmetric protein interfaces. © 2017 The Protein Society.

  16. Benchmark Lisp And Ada Programs

    Science.gov (United States)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  17. Raw computed tomography (CT) images of sediment cores collected in 2009 offshore from Palos Verdes, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of the data release includes raw computed tomography (CT) images of sediment cores collected in 2009 offshore of Palos Verdes, California. It is one of...

  18. Raw computed tomography (CT) images of sediment cores collected in 2009 offshore from Palos Verdes, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of the data release includes raw computed tomography (CT) images of sediment cores collected in 2009 offshore of Palos Verdes, California. It is one of...

  19. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional distance functions. The frontier is given by an explicit quantile, e.g. “the best 90 %”. Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  20. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  1. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely the Original method and the new Meet in the Middle –MiM– algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems’ operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
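
    For reference, the sketch below shows the sequential Gaussian elimination kernel (with partial pivoting) whose parallel Original and MiM variants are modeled in the paper; it is a plain baseline implementation, not the paper's parallel or SIMD-vectorized algorithms.

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    # Forward elimination.
    for k in range(n - 1):
        pivot = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting for stability
        if pivot != k:
            A[[k, pivot]] = A[[pivot, k]]
            b[[k, pivot]] = b[[pivot, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_solve(A, b))  # approximately [ 2.  3. -1.]
```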

  2. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  3. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput. We show through the use of micro-benchmarks that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration.

  4. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  5. Cooperative Computing Techniques for a Deeply Fused and Heterogeneous Many-Core Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    郑方; 李宏亮; 吕晖; 过锋; 许晓红; 谢向辉

    2015-01-01

    Due to advances in semiconductor techniques, many-core processors have been widely used in high performance computing. However, many applications still cannot be carried out efficiently due to the memory wall, which has become a bottleneck in many-core processors. In this paper, we present a novel heterogeneous many-core processor architecture named deeply fused many-core (DFMC) for high performance computing systems. DFMC integrates management processing elements (MPEs) and computing processing elements (CPEs), which are heterogeneous processor cores for different application features with a unified ISA (instruction set architecture), a unified execution model, and shared memory that supports cache coherence. The DFMC processor can alleviate the memory wall problem by combining a series of cooperative computing techniques of CPEs, such as multi-pattern data stream transfer, an efficient register-level communication mechanism, and a fast hardware synchronization technique. These techniques are able to improve on-chip data reuse and optimize memory access performance. This paper illustrates an implementation of a full system prototype based on FPGA with four MPEs and 256 CPEs. Our experimental results show that the effect of the cooperative computing techniques of CPEs is significant, with DGEMM (double-precision matrix multiplication) achieving an efficiency of 94%, FFT (fast Fourier transform) obtaining a performance of 207 GFLOPS and FDTD (finite-difference time-domain) obtaining a performance of 27 GFLOPS.

  6. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  7. Incorporating core hysteresis properties in three-dimensional computations of transformer inrush current forces

    Science.gov (United States)

    Adly, A. A.; Hanafy, H. H.

    2009-04-01

    It is well known that transformer inrush currents depend upon the core properties, residual flux, switching instant, and the overall circuit parameters. Large transient inrush currents introduce abnormal electromagnetic forces which may destroy the transformer windings. This paper presents an approach through which core hysteresis may be incorporated in three-dimensional computations of transformer inrush current forces. Details of the approach, measurements, and simulations for a shell-type transformer are given in the paper.

  8. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  10. Seed robustness of oriented relative fuzzy connectedness: core computation and its applications

    Science.gov (United States)

    Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.

    2017-02-01

    In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is a region where the seed can be moved without altering the segmentation, an important aspect for robust techniques and reduction of user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide some new theoretical relations between ORFC and Oriented Image Foresting Transform (OIFT), as well as their cores. Experimental results among several methods show that the hybrid approach conserves high accuracy, avoids the shrinking problem and provides robustness to seed placement inside the desired object due to the cores properties.

  11. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    Science.gov (United States)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all speeds, chemically reacting, computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the afore-mentioned heat transfer effects impact the thrust performance directly. The result shows that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with those of design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency, and higher thrust performance.

  12. Cardiac computed tomography core syllabus of the European Association of Cardiovascular Imaging (EACVI).

    Science.gov (United States)

    Nieman, Koen; Achenbach, Stephan; Pugliese, Francesca; Cosyns, Bernard; Lancellotti, Patrizio; Kitsiou, Anastasia

    2015-04-01

    The European Association of Cardiovascular Imaging (EACVI) Core Syllabus for Cardiac Computed Tomography (CT) is now available online. The syllabus lists key elements of knowledge in Cardiac CT. It represents a framework for the development of training curricula and provides expected knowledge-based learning outcomes to the Cardiac CT trainees.

  13. Transient LOFA computations for a VHTR using one-twelfth core flow models

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Yu-Hsin, E-mail: touushin@gmail.com [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu, Taiwan (China); Ferng, Yuh-Ming, E-mail: ymferng@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu, Taiwan (China); Johnson, Richard W., E-mail: rwjohnson@cableone.net [Idaho National Laboratory, Idaho Falls, ID (United States); Chieng, Ching-Chang, E-mail: ccchieng@cityu.edu.hk [Dept of Mechanical and Biomedical Engineering, City University of Hong Kong, Kowloon (Hong Kong)

    2016-05-15

    Highlights: • Investigation of flow and heat transfer for a 1/12 VHTR core model using CFD. • High performance computing using a sufficiently refined mesh of ∼531 M cells. • LOFA transient calculations employ both laminar and turbulence models to characterize natural convection. • Comparisons with smaller models suggest the need for a large flow model. - Abstract: A prismatic gas-cooled very high temperature reactor (VHTR) is being developed under the next generation nuclear program. One of the concerns for the reactor design is the effect of a loss of flow accident (LOFA), in which the coolant circulators are lost for some reason, causing a loss of forced coolant flow through the core. In previous studies, the natural circulation in the whole reactor vessel (RV) was obtained by segmentation strategies when a computational fluid dynamics (CFD) analysis with a sufficiently refined mesh was conducted, due to the limits of computer capability. The computational domains in those studies were segmented sections, which were small flow region models such as 1/12 sectors, or a combination of a small number of 1/12 sectors (ranging from 2 to 15) using geometric symmetry, for a full dome region. The present paper investigates the flow and heat transfer for a much larger flow region model, a 1/12 core model, using high performance computing. The computational meshes for the 1/12 sector and the 1/12 reactor core contain 7.8 M and ∼531 M cells, respectively. Over 85,000 and 35,000 iterations are required to achieve convergence for the steady and transient (100 s) calculations, respectively. Every iteration required ∼0.1 min of CPU time using 192 computer cores for the 1/12 sector model and ∼1.3 min using 768 cores in parallel for the 1/12 core model, on ALPS, Advanced Large-scale Parallel Superclusters. For the LOFA transient condition, this study employs both laminar flow and different turbulence models to characterize the phenomenon of natural convection. The

  14. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
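
    The PageRank kernel at the heart of the proposed pipeline can be expressed as a simple power iteration; the dense sketch below runs on a three-node toy graph, whereas the benchmark itself targets generated graphs at scale with sparse (e.g., GraphBLAS-style) representations.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on a dense adjacency matrix.
    adj[i, j] = 1 if there is a link from page i to page j."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-normalize to a transition matrix, then transpose so columns sum to 1.
    M = np.where(out_degree[:, None] > 0,
                 adj / np.maximum(out_degree[:, None], 1),
                 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))  # highest rank goes to the page every other page links to
```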

  15. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Full Text Available Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  16. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
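    The 1-NN assignment rule discussed in both records above can be sketched in a few lines. The snippet below is a hedged illustration only: it scores candidates by simple pairwise identity over pre-aligned, equal-length sequences rather than by BLAST top hits, the reference database is invented, and the QCauto method is not shown because its details are not given in the record.

```python
# Minimal sketch of 1-nearest-neighbor (1-NN) taxonomic assignment.
# Similarity is plain pairwise identity over aligned, equal-length sequences;
# the study itself uses BLAST top hits, so this is only an illustration.

def identity(a, b):
    """Fraction of matching positions between two aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_1nn(query, reference):
    """Return the taxon of the most similar reference sequence."""
    best_taxon, best_sim = None, -1.0
    for taxon, seq in reference.items():
        sim = identity(query, seq)
        if sim > best_sim:
            best_taxon, best_sim = taxon, sim
    return best_taxon, best_sim

# Hypothetical reference database (taxon -> barcode fragment).
reference = {
    "Species A": "ACGTACGTACGTACGT",
    "Species B": "ACGTACGTTCGTACGA",
    "Species C": "TTGTACCTACGAACGT",
}
# A query from a species absent from the database is still forced onto the
# closest entry -- the 1-NN failure mode the benchmark highlights.
query = "ACGAACGTACGTACTT"
print(assign_1nn(query, reference))
```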

  17. Parallel computation of a dam-break flow model using OpenMP on a multi-core computer

    Science.gov (United States)

    Zhang, Shanghong; Xia, Zhongxi; Yuan, Rui; Jiang, Xiaoming

    2014-05-01

    High-performance calculations are of great importance to the simulation of dam-break events, as discontinuous solutions and accelerated speed are key factors in the process of dam-break flow modeling. In this study, Roe's approximate Riemann solution of the finite volume method is adopted to solve the interface flux of grid cells and accurately simulate the discontinuous flow, and shared-memory technology (OpenMP) is used to realize parallel computing. Because an explicit discrete technique is used to solve the governing equations, and there is no correlation between grid calculations within a single time step, the parallel dam-break model can be easily realized by adding OpenMP instructions to the loop structure of the grid calculations. The performance of the model is analyzed using six computing cores and four different grid division schemes for the Pangtoupao flood storage area in China. The results show that parallel computing improves precision and increases the simulation speed of the dam-break flow: the simulation of a 320-h flood process can be completed within 1.6 h on a 16-core computer, a speedup factor of 8.64×. Further analysis reveals that models involving a larger number of calculations exhibit greater efficiency and a higher rate of acceleration. At the same time, the model has good extendibility, as the speedup increases with the number of processor cores. The parallel model based on OpenMP can make full use of multi-core processors, making it possible to simulate dam-break flows in large-scale watersheds on a single computer.
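    The key point in the abstract above is that, with an explicit scheme, every cell update within a time step depends only on the previous state, so the loop over cells can be parallelized directly. The paper does this with OpenMP in a compiled code; the sketch below illustrates the same independence using Python's multiprocessing as a stand-in, with a toy upwind flux in place of the Roe solver.

```python
# Sketch of the parallelization idea: with an explicit scheme, each cell update
# reads only the frozen old state, so chunks of cells can be updated in
# parallel. The paper uses OpenMP; this Python analogue uses multiprocessing,
# and the linear-advection upwind flux is a stand-in for the Roe solver.
import numpy as np
from multiprocessing import Pool

def flux(ul, ur, a=1.0):
    # Upwind flux for linear advection with speed a > 0 (toy flux only).
    return a * ul

def update_chunk(args):
    u, i0, i1, dt_dx = args
    # Each chunk reads the old state and writes only its own cells.
    return [u[i] - dt_dx * (flux(u[i], u[i + 1]) - flux(u[i - 1], u[i]))
            for i in range(i0, i1)]

if __name__ == "__main__":
    n, dt_dx = 10_000, 0.4
    u = np.sin(np.linspace(0.0, 2.0 * np.pi, n + 2))   # includes two ghost cells
    chunks = [(u, i, min(i + 2500, n + 1), dt_dx) for i in range(1, n + 1, 2500)]
    with Pool(4) as pool:
        parts = pool.map(update_chunk, chunks)
    u_new = np.concatenate([np.asarray(p) for p in parts])
    print(u_new.shape)
```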

  18. Mobile Computing Clouds Interactive Model and Algorithm Based On Multi-core Grids

    Directory of Open Access Journals (Sweden)

    Liu Lizhao

    2013-09-01

    Full Text Available Multi-core technology is a key technology for mobile cloud computing. With the rapid development of cloud technology, the authors address the problem of how target code produced by a mobile cloud terminal's multi-core compiler can make use of the cloud's multi-core system structure while keeping compilation data synchronized through cross-validation. They propose the concepts of indirect and direct synchronization between end mobile cloud entities; using wave-formation energy conversion, they give a method to calculate indirect and direct synchronization values from the crossing experience and crossing time of compilation entities; they construct a function relative-level algorithm based on the Hellinger distance and give a method for computing a comprehensive synchronization value. Based on experimental statistics and analysis, taking the threshold limit value as the mean and the self-synchronization value as the deviation, an update function for the indirect synchronization value is constructed; an inter-domain multi-core synchronization flow chart is given; and an inter-domain compilation-data synchronization update experiment is carried out with more than 3000 end mobile cloud multi-core compilation environments. Analysis of the compilation process and its results shows the synchronization algorithm to be reasonable and effective.

  19. CORE

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin

    2013-01-01

    different flows. Instead of maintaining these approaches separate, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra- session coding but allows nodes in the network to setup inter- session coding regions where flows...... intersect. Routes for unicast sessions are agnostic to other sessions and setup beforehand, CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we...... increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...
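    The intra-session building block mentioned above, random linear network coding (RLNC), can be illustrated with a small sketch over GF(2): coded packets are random XOR combinations of the source packets and are decoded by Gaussian elimination. This is a generic illustration, not the CORE protocol; packet sizes, the field choice, and the loss pattern are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlnc_encode(packets, n_coded):
    """Coded packets = random GF(2) (i.e., XOR) combinations of the sources."""
    k = packets.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    return coeffs, (coeffs @ packets) % 2

def rlnc_decode(coeffs, coded, k):
    """Gaussian elimination over GF(2); returns None if rank < k (not decodable)."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]                 # eliminate with an XOR of rows
        row += 1
    return A[:k, k:]                           # decoded sources, original order

# Three 8-bit source packets, five coded packets to tolerate losses.
src = rng.integers(0, 2, size=(3, 8), dtype=np.uint8)
coeffs, coded = rlnc_encode(src, 5)
recovered = rlnc_decode(coeffs[1:], coded[1:], 3)   # pretend coded packet 0 was lost
print("decoded OK" if recovered is not None and np.array_equal(recovered, src)
      else "random combinations were dependent; more coded packets needed")
```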

  1. Nanocrystalline material in toroidal cores for current transformer: analytical study and computational simulations

    Directory of Open Access Journals (Sweden)

    Benedito Antonio Luciano

    2005-12-01

    Full Text Available Based on electrical and magnetic properties such as saturation magnetization, initial permeability, and coercivity, this work presents some considerations on the possible applications of nanocrystalline alloys in toroidal cores for current transformers. It discusses how the magnetic characteristics of the core material affect the performance of the current transformer. From the magnetic characterization and computational simulations using the finite element method (FEM), it has been verified that, at the flux density typical of CT operation, the properties of nanocrystalline alloys reinforce the hypothesis that the use of these materials in measurement CT cores can reduce the ratio and phase errors and can also improve its accuracy class.

  2. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild

    2015-01-01

    Background. Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in...

  3. IC3 Internet and Computing Core Certification Global Standard 4 study guide

    CERN Document Server

    Rusen, Ciprian Adrian

    2015-01-01

    Hands-on IC3 prep, with expert instruction and loads of tools IC3: Internet and Computing Core Certification Global Standard 4 Study Guide is the ideal all-in-one resource for those preparing to take the exam for the internationally-recognized IT computing fundamentals credential. Designed to help candidates pinpoint weak areas while there's still time to brush up, this book provides one hundred percent coverage of the exam objectives for all three modules of the IC3-GS4 exam. Readers will find clear, concise information, hands-on examples, and self-paced exercises that demonstrate how to per

  4. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01

    This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

  5. Determining Core Components of Computer-Supported Collaborative Learning Within Educational Managerial Game Context

    OpenAIRE

    2016-01-01

    The exploratory factor analysis has been used to determine which selected inner components of computer-supported collaborative learning (CSCL) should be considered as the core components. The research itself builds on three models of group learning, namely cooperative learning elements, the “Big Five” in the teamwork model and the theoretical framework of CSCL. The analysis of data collected from university students participating in a managerial group game suggests that future research in the...

  6. Design of Gas-phase Synthesis of Core-Shell Particles by Computational Fluid - Aerosol Dynamics.

    Science.gov (United States)

    Buesser, B; Pratsinis, S E

    2011-11-01

    Core-shell particles preserve the bulk properties (e.g. magnetic, optical) of the core while its surface is modified by a shell material. Continuous aerosol coating of core TiO2 nanoparticles with nanothin silicon dioxide shells by jet injection of hexamethyldisiloxane precursor vapor downstream of titania particle formation is elucidated by combining computational fluid and aerosol dynamics. The effect of inlet coating vapor concentration and mixing intensity on product shell thickness distribution is presented. Rapid mixing of the core aerosol with the shell precursor vapor facilitates efficient synthesis of hermetically coated core-shell nanoparticles. The predicted extent of hermetic coating shells is compared to the measured photocatalytic oxidation of isopropanol by such particles as hermetic SiO2 shells prevent the photocatalytic activity of titania. Finally the performance of a simpler, plug-flow coating model is assessed by comparisons to the present detailed CFD model in terms of coating efficiency and silica average shell thickness and texture.

  7. Computed tomography-guided core-needle biopsy of lung lesions: an oncology center experience

    Energy Technology Data Exchange (ETDEWEB)

    Guimaraes, Marcos Duarte; Fonte, Alexandre Calabria da; Chojniak, Rubens, E-mail: marcosduarte@yahoo.com.b [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Radiology and Imaging Diagnosis; Andrade, Marcony Queiroz de [Hospital Alianca, Salvador, BA (Brazil); Gross, Jefferson Luiz [Hospital A.C. Camargo, Sao Paulo, SP (Brazil). Dept. of Chest Surgery

    2011-03-15

    Objective: The present study is aimed at describing the experience of an oncology center with computed tomography guided core-needle biopsy of pulmonary lesions. Materials and Methods: Retrospective analysis of 97 computed tomography-guided core-needle biopsy of pulmonary lesions performed in the period between 1996 and 2004 in a Brazilian reference oncology center (Hospital do Cancer - A.C. Camargo). Information regarding material appropriateness and the specific diagnoses were collected and analyzed. Results: Among 97 lung biopsies, 94 (96.9%) supplied appropriate specimens for histological analyses, with 71 (73.2%) cases being diagnosed as malignant lesions and 23 (23.7%) diagnosed as benign lesions. Specimens were inappropriate for analysis in three cases. The frequency of specific diagnosis was 83 (85.6%) cases, with high rates for both malignant lesions with 63 (88.7%) cases and benign lesions with 20 (86.7%). As regards complications, a total of 12 cases were observed as follows: 7 (7.2%) cases of hematoma, 3 (3.1%) cases of pneumothorax and 2 (2.1%) cases of hemoptysis. Conclusion: Computed tomography-guided core needle biopsy of lung lesions demonstrated high rates of material appropriateness and diagnostic specificity, and low rates of complications in the present study. (author)

  8. RF-TSV DESIGN, MODELING AND APPLICATION FOR 3D MULTI-CORE COMPUTER SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Yu Le; Yang Haigang; Xie Yuanlu

    2012-01-01

    The state-of-the-art multi-core computer systems are based on Very Large Scale three-Dimensional (3D) Integrated circuits (VLSI). In order to provide high-speed vertical data transmission in such 3D systems, efficient Through-Silicon Via (TSV) technology is critically important. In this paper, various Radio Frequency (RF) TSV designs and models are proposed. Specifically, the Cu-plug TSV with surrounding ground TSVs is used as the baseline structure. For further improvement, the dielectric coaxial and novel air-gap coaxial TSVs are introduced. Using the empirical parameters of these coaxial TSVs, simulation results are obtained demonstrating that these coaxial RF-TSVs can provide cut-off frequencies two orders of magnitude higher than the Cu-plug TSVs. Based on these new RF-TSV technologies, we propose a novel 3D multi-core computer system as well as new architectures for manipulating the interfaces between RF and baseband circuits. Taking into consideration the scaling down of IC manufacturing technologies, predictions for the performance of future generations of circuits are made. With simulation results indicating energy per bit and area per bit being reduced by 7% and 11% respectively, we conclude that the proposed method is a worthwhile guideline for the design of future multi-core computer ICs.

  9. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  10. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  11. Parallel computing of discrete element method on multi-core processors

    Institute of Scientific and Technical Information of China (English)

    Yusuke Shigeto; Mikio Sakai

    2011-01-01

    This paper describes parallel simulation techniques for the discrete element method (DEM) on multi-core processors. Recently, multi-core CPU and GPU processors have attracted much attention in accelerating computer simulations in various fields. We propose a new algorithm for multi-thread parallel computation of DEM, which makes effective use of the available memory and accelerates the computation. This study shows that memory usage is drastically reduced by using this algorithm. To show the practical use of DEM in industry, a large-scale powder system is simulated with a complicated drive unit. We compared the performance of the simulation between the latest GPU and CPU processors with optimized programs for each processor. The results show that the difference in performance is not substantial when using either GPUs or CPUs with a multi-thread parallel algorithm. In addition, the DEM algorithm is shown to have high scalability in a multi-thread parallel computation on a CPU.

  12. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    Science.gov (United States)

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
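    CoreFlow's actual interface is not described in detail in this record, so the following is only a loose, hedged sketch of the idea it conveys: named analysis steps (which in practice would wrap R, Python, or Perl scripts) registered into a pipeline that tracks interdependencies and executes tasks in dependency order. All class and step names here are invented and are not the CoreFlow API.

```python
# Generic sketch of a script pipeline with dependency tracking, in the spirit
# of the platform described above (NOT the CoreFlow API; names are invented).
from graphlib import TopologicalSorter

class Pipeline:
    def __init__(self):
        self.tasks, self.deps = {}, {}

    def add(self, name, func, depends_on=()):
        self.tasks[name] = func
        self.deps[name] = set(depends_on)

    def run(self, data):
        # Execute tasks in an order consistent with their interdependencies.
        for name in TopologicalSorter(self.deps).static_order():
            data = self.tasks[name](data)
            print(f"finished step: {name}")
        return data

# Toy proteomics-flavored steps (placeholders for real correction scripts).
pipe = Pipeline()
pipe.add("load", lambda d: d + ["raw intensities"])
pipe.add("silac_correction", lambda d: d + ["corrected for incomplete labeling"],
         depends_on=["load"])
pipe.add("report", lambda d: d + ["summary report"],
         depends_on=["silac_correction"])
print(pipe.run([]))
```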

  13. Exploring the Future of Out-of-Core Computing with Compute-Local Non-Volatile Memory

    Directory of Open Access Journals (Sweden)

    Myoungsoo Jung

    2014-01-01

    Full Text Available Drawing parallels to the rise of general purpose graphical processing units (GPGPUs as accelerators for specific high-performance computing (HPC workloads, there is a rise in the use of non-volatile memory (NVM as accelerators for I/O-intensive scientific applications. However, existing works have explored use of NVM within dedicated I/O nodes, which are distant from the compute nodes that actually need such acceleration. As NVM bandwidth begins to out-pace point-to-point network capacity, we argue for the need to break from the archetype of completely separated storage. Therefore, in this work we investigate co-location of NVM and compute by varying I/O interfaces, file systems, types of NVM, and both current and future SSD architectures, uncovering numerous bottlenecks implicit in these various levels in the I/O stack. We present novel hardware and software solutions, including the new Unified File System (UFS, to enable fuller utilization of the new compute-local NVM storage. Our experimental evaluation, which employs a real-world Out-of-Core (OoC HPC application, demonstrates throughput increases in excess of an order of magnitude over current approaches.

  14. Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Fengguang [Univ. of Tennessee, Knoxville, TN (United States); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-06-01

    We present a new methodology for utilizing all CPU cores and all GPUs on a heterogeneous multicore and multi-GPU system to support matrix computations efficiently. Our approach is able to achieve the objectives of a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our main idea is to treat the heterogeneous system as a distributed-memory machine, and to use a heterogeneous 1-D block cyclic distribution to allocate data to the host system and GPUs to minimize communication. We have designed heterogeneous algorithms with two different tile sizes (one for CPU cores and the other for GPUs) to cope with processor heterogeneity. We propose an auto-tuning method to determine the best tile sizes to attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our experiments on a compute node with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs demonstrate good weak scalability, strong scalability, load balance, and efficiency of our approach.
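    As a hedged illustration of the heterogeneous 1-D block cyclic idea described above (not the authors' runtime system; the device list and tile widths are invented), the helper below deals tile columns of a matrix out to a mixed list of CPU and GPU devices in round-robin order, giving GPUs wider tiles:

```python
# Sketch of a heterogeneous 1-D block cyclic data distribution: tile columns
# are dealt out round-robin to a mixed list of devices. Tile sizes and the
# device list are illustrative assumptions, not values from the paper.
def block_cyclic_owner(n_cols, devices, tile_cols):
    """Map each matrix column to the device owning its tile (1-D, column-wise)."""
    owners, col, slot = [], 0, 0
    while col < n_cols:
        dev = devices[slot % len(devices)]
        width = tile_cols[dev]               # GPUs get wider tiles than CPUs
        owners.extend([dev] * min(width, n_cols - col))
        col += width
        slot += 1
    return owners

devices = ["cpu0", "gpu0", "gpu1", "gpu2"]
tile_cols = {"cpu0": 128, "gpu0": 512, "gpu1": 512, "gpu2": 512}
owners = block_cyclic_owner(4096, devices, tile_cols)
print(owners[0], owners[200], owners[700])   # sample a few column owners
```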

  15. Value of perfusion computed tomography in acute ischemic stroke: diagnosis of infarct core and penumbra.

    Science.gov (United States)

    Pan, Jiawei; Zhang, Jun; Huang, Weiyuan; Cheng, Xin; Ling, Yifeng; Dong, Qiang; Geng, Daoying

    2013-01-01

    This study aimed to perform an evaluation of 4 perfusion computed tomographic (PCT) parameters (relative cerebral blood flow, cerebral blood volume, mean transit time [MTT], and delay time [DT]) in a series of patients with acute ischemic stroke to find optimal parameters to predict infarct core and penumbra. Twenty-six patients with symptoms suggesting stroke less than 7 hours from onset were enrolled in this study. They all underwent admission and 24-hour PCT and a 24-hour diffusion-weighted imaging. Perfusion computed tomographic maps were assessed for relative reduced cerebral blood flow and cerebral blood volume and increased MTT and DT. Receiver operating characteristic curve analysis was performed to locate the optimal threshold for each parameter, using diffusion-weighted imaging as the gold standard. The PCT parameter that most accurately describes the penumbra is the relative MTT of 150% or greater (area under the curve, 0.827; 95% confidence interval, 0.826-0.827), whereas the parameter that most accurately describes the infarct core is the relative DT of + 2.0 seconds or greater (area under the curve, 0.879; 95% confidence interval, 0.878-0.879). The optimal parameters to define the infarct core and the penumbra are relative DT (≥+ 2.0 seconds) and relative MTT (≥ 150%).
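    A hedged sketch of how the reported voxel-wise thresholds could be applied is shown below; the array names, the toy data, and the way the "relative" maps are formed from a contralateral reference are assumptions, not the study's actual processing pipeline.

```python
import numpy as np

# Toy perfusion maps (seconds for MTT/DT); "contra" denotes a mirrored
# contralateral reference used to form relative values -- an assumption here.
rng = np.random.default_rng(1)
mtt, mtt_contra = rng.uniform(4, 14, (64, 64)), rng.uniform(4, 8, (64, 64))
dt, dt_contra = rng.uniform(0, 5, (64, 64)), rng.uniform(0, 2, (64, 64))

rel_mtt = mtt / mtt_contra              # relative MTT (ratio)
rel_dt = dt - dt_contra                 # relative DT (absolute delay, s)

core = rel_dt >= 2.0                    # infarct core: relative DT >= +2.0 s
penumbra = (rel_mtt >= 1.5) & ~core     # penumbra: relative MTT >= 150%, minus core

print("core voxels:", int(core.sum()), "penumbra voxels:", int(penumbra.sum()))
```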

  16. Value of computed tomography-guided core needle biopsy in diagnosis of primary pulmonary lymphomas.

    Science.gov (United States)

    Wang, Zhiwei; Li, Xiaoguang; Chen, Jin; Jin, Zhengyu; Shi, Haifeng; Zhang, Xiaobo; Pan, Jie; Liu, Wei; Yang, Ning; Chen, Jie

    2013-01-01

    To evaluate the value of computed tomography (CT)-guided core needle biopsy in diagnosis of primary pulmonary lymphoma and its subtypes. A retrospective analysis of the records of all patients with primary pulmonary lymphoma between January 2005 and August 2011 was performed. There were 25 patients referred to the radiology department for CT-guided core needle biopsy. The success rate and complications were assessed. A definitive diagnosis and accurate histologic subtype were obtained in 21 patients with a success rate of 84.0%. Diagnosis was made in the other four patients with bronchoscopy and surgery. Non-Hodgkin lymphoma (NHL) was the diagnosis in all patients. Most subtypes were mucosa-associated lymphoid tissue (MALT) lymphomas (n = 19). The remaining subtypes included three diffuse large B-cell NHLs, two peripheral T-cell lymphomas not otherwise specified, and one anaplastic large cell NHL. The success rate of core needle biopsy was 95% (18 of 19) for MALT lymphomas, 67% (2 of 3) for diffuse large B cell NHLs, and 33% (1 of 3) for other NHLs. The success rate for MALT lymphomas was significantly higher than that of non-MALT lymphomas according to Fisher exact t test (P = .031). No serious complications occurred in any patients. CT-guided core needle biopsy is a reliable procedure to assist in diagnosis and classification of primary pulmonary lymphomas, especially MALT lymphomas. Copyright © 2013 SIR. Published by Elsevier Inc. All rights reserved.
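    The reported P value can be checked against the counts quoted in the abstract, assuming the comparison pooled the six non-MALT cases into a single group (an inference from the abstract, not something it states explicitly):

```python
from scipy.stats import fisher_exact

# 2x2 table of biopsy success by subtype, taken from the abstract:
# MALT: 18 successes / 1 failure; non-MALT (pooled): 3 successes / 3 failures.
table = [[18, 1], [3, 3]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))   # ~0.031, consistent with the reported P = .031
```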

  17. Characterizing soil macroporosity by X-ray microfocus computed tomography and quantification of the coring damages.

    Directory of Open Access Journals (Sweden)

    Caner L.

    2010-06-01

    Full Text Available X-ray computed tomography (X-ray µCT) was employed to characterize vertical variations of the structural porosity of a soil profile (pore volumes greater than 5×10³ µm³). Three distinct horizons of a Cambisol have been studied over a total depth of 75 cm: the L, S1/S2 and S2/SFe horizons. Samples were cored in situ by driving in PVC tubes (inner diameter 10 cm). From reconstructed and filtered volumes, pore segmentation allows the study of variations of structural porosity within the profile. Two kinds of porosity were identified: biological pores (tube-like) and physical pores (fracture-like). Structural porosity content varies strongly according to the horizon: from 5.48% in the L horizon to 6.48% in the S1/S2 horizon. The 3D connectivity of both of these pore types was also assessed. During sampling, soil shearing induced damage around the cores. Identification and quantification of the damaged zone was performed by calculating a porosity profile from the core surface to the core heart. On average, the damaged zone reaches a depth of 1 cm. Porosity loss (compaction) or porosity increase (fracturing) was observed depending on the studied profile.

  18. fissioncore: A desktop-computer simulation of a fission-bomb core

    Science.gov (United States)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.

  19. Adaptive Fault Tolerance for Many-Core Based Space-Borne Computing

    Science.gov (United States)

    James, Mark; Springer, Paul; Zima, Hans

    2010-01-01

    This paper describes an approach to providing software fault tolerance for future deep-space robotic NASA missions, which will require a high degree of autonomy supported by an enhanced on-board computational capability. Such systems have become possible as a result of the emerging many-core technology, which is expected to offer 1024-core chips by 2015. We discuss the challenges and opportunities of this new technology, focusing on introspection-based adaptive fault tolerance that takes into account the specific requirements of applications, guided by a fault model. Introspection supports runtime monitoring of the program execution with the goal of identifying, locating, and analyzing errors. Fault tolerance assertions for the introspection system can be provided by the user, domain-specific knowledge, or via the results of static or dynamic program analysis. This work is part of an on-going project at the Jet Propulsion Laboratory in Pasadena, California.

  20. The future of commodity computing and many-core versus the interests of HEP software

    CERN Document Server

    CERN. Geneva

    2012-01-01

    As the mainstream computing world has shifted from multi-core to many-core platforms, the situation for software developers has changed as well. With the numerous hardware and software options available, choices balancing programmability and performance are becoming a significant challenge. The expanding multiplicative dimensions of performance offer a growing number of possibilities that need to be assessed and addressed on several levels of abstraction. This paper reviews the major tradeoffs forced upon the software domain by the changing landscape of parallel technologies – hardware and software alike. Recent developments, paradigms and techniques are considered with respect to their impact on the rather traditional HEP programming models. Other considerations addressed include aspects of efficiency and reasonably achievable targets for the parallelization of large scale HEP workloads.

  1. The future of commodity computing and many-core versus the interests of HEP software

    CERN Document Server

    Jarp, Sverre; Nowak, Andrzej

    2012-01-01

    As the mainstream computing world has shifted from multi-core to many-core platforms, the situation for software developers has changed as well. With the numerous hardware and software options available, choices balancing programmability and performance are becoming a significant challenge. The expanding multiplicative dimensions of performance offer a growing number of possibilities that need to be assessed and addressed on several levels of abstraction. This paper reviews the major trade-offs forced upon the software domain by the changing landscape of parallel technologies - hardware and software alike. Recent developments, paradigms and techniques are considered with respect to their impact on the rather traditional HEP programming models. Other considerations addressed include aspects of efficiency and reasonably achievable targets for the parallelization of large scale HEP workloads.

  2. CoreFlow: A computational platform for integration, analysis and modeling of complex biological data

    DEFF Research Database (Denmark)

    Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau

    2014-01-01

    between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion...... provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts...... into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time...

  3. Computation of a core disruptive accident in the MARS mock-up

    Energy Technology Data Exchange (ETDEWEB)

    Robbe, M.F. [CEA Saclay, Bat 118, 91191 Gif sur Yvette Cedex (France)]. E-mail: marie-france.robbe@cea.fr; Lepareux, M. [CEA Saclay, Bat 118, 91191 Gif sur Yvette Cedex (France); Seinturier, E. [Socotec Industrie, 1 av. du Parc, 78180 Montigny le Bretonneux (France)

    2005-06-01

    A hypothetical core disruptive accident in a liquid metal fast breeder reactor (LMFBR) results from the interaction between molten fuel and liquid sodium, which creates a high-pressure bubble of gas in the core. The violent expansion of this bubble loads and deforms the vessel and the internal structures. The MARS experimental test simulates a HCDA in a small-scale mock-up containing all the significant internal components of a fast breeder reactor. The mock-up is filled with water, topped by an argon blanket, and the explosion is generated by an explosive charge. This paper presents a numerical simulation of the test with the EUROPLEXUS code. The top closure is represented by massive structures and the main internal structures are described by shells. The current numerical results are described and compared with the experimental ones, and previous computations with the CASTEM-PLEXUS code.

  4. Benchmarking in ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of IT benchmarking in a wider context and the scope of benchmarking in general. I describe benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe the possibilities for their use. The best known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is treated rather as a notional first step toward benchmarking's contribution to the company. IT benchmark...

  5. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots.

    Directory of Open Access Journals (Sweden)

    Sunita Kumari

    Full Text Available Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs. The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the computational prediction of CPEs across eight plant genomes to help better understand the transcription initiation complex assembly. The distribution of thirteen known CPEs across four monocots (Brachypodium distachyon, Oryza sativa ssp. japonica, Sorghum bicolor, Zea mays and four dicots (Arabidopsis thaliana, Populus trichocarpa, Vitis vinifera, Glycine max reveals the structural organization of the core promoter in relation to the TATA-box as well as with respect to other CPEs. The distribution of known CPE motifs with respect to transcription start site (TSS exhibited positional conservation within monocots and dicots with slight differences across all eight genomes. Further, a more refined subset of annotated genes based on orthologs of the model monocot (O. sativa ssp. japonica and dicot (A. thaliana genomes supported the positional distribution of these thirteen known CPEs. DNA free energy profiles provided evidence that the structural properties of promoter regions are distinctly different from that of the non-regulatory genome sequence. It also showed that monocot core promoters have lower DNA free energy than dicot core promoters. The comparison of monocot and dicot promoter sequences highlights both the similarities and differences in the core promoter architecture irrespective of the species-specific nucleotide bias. This study will be useful for future work related to genome annotation projects and can inspire research efforts aimed to better understand regulatory mechanisms of transcription.

  6. DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in the thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and provide suggestions for further improvement of the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based only on assembly coding. The...

  7. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  8. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
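    A minimal, assumption-laden sketch of image formation by backprojection is given below: for every pixel, the range-compressed echo is sampled at that pixel's distance from each pulse position and accumulated with the matched phase re-applied. The geometry, sampling parameters, and synthetic data are invented for illustration; the paper's optimized many-core implementation (approximate strength reduction, locality optimizations) is far more elaborate.

```python
import numpy as np

# Toy SAR backprojection: geometry, carrier, and data are invented.
c, fc = 3e8, 9.6e9                      # propagation speed, carrier (assumed)
n_pulses, n_bins = 128, 512
r0, dr = 900.0, 0.3                     # first range bin and bin spacing (m)

# Straight-line platform track and a single point target (toy scene).
platform = np.stack([np.linspace(-50, 50, n_pulses),
                     np.full(n_pulses, -1000.0),
                     np.full(n_pulses, 300.0)], axis=1)
target = np.array([5.0, 0.0, 0.0])

# Synthesize range-compressed data: one complex return per pulse at the
# target's range, carrying the two-way phase exp(-j*4*pi*fc*R/c).
data = np.zeros((n_pulses, n_bins), dtype=complex)
R_t = np.linalg.norm(platform - target, axis=1)
bins = np.round((R_t - r0) / dr).astype(int)
data[np.arange(n_pulses), bins] = np.exp(-1j * 4 * np.pi * fc * R_t / c)

# Backproject onto a small ground-plane grid around the target.
x = np.linspace(0.0, 10.0, 101)
y = np.linspace(-5.0, 5.0, 101)
image = np.zeros((y.size, x.size), dtype=complex)
for p in range(n_pulses):
    px, py, pz = platform[p]
    R = np.sqrt((x[None, :] - px) ** 2 + (y[:, None] - py) ** 2 + pz ** 2)
    idx = np.clip(np.round((R - r0) / dr).astype(int), 0, n_bins - 1)
    image += data[p, idx] * np.exp(1j * 4 * np.pi * fc * R / c)

peak = np.unravel_index(np.abs(image).argmax(), image.shape)
print("peak near x=%.1f m, y=%.1f m" % (x[peak[1]], y[peak[0]]))   # ~ (5.0, 0.0)
```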

  9. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands` PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given as well as an update of the status with respect to previous performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. Detailed discussion of results of multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  10. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap-a system that can in principle be scaled to larger numbers of ions.
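    A hedged sketch of the standard randomized-benchmarking analysis implied by the record is shown below, using synthetic data rather than the experiment's numbers. The decay model F(m) = A*p^m + B and the depolarizing-channel conversion r = (1 - p)(d - 1)/d, including the interleaved-gate ratio p_int/p_ref for the extra phase gate, are the textbook forms and may differ in detail from the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def decay(m, A, B, p):
    """Standard RB model: average sequence fidelity F(m) = A*p**m + B."""
    return A * p ** m + B

# Synthetic two-qubit RB data (d = 4). p_ref mimics the reference sequences,
# p_int the sequences with an extra phase gate inserted after each Clifford.
d, m = 4, np.arange(1, 21)
F_ref = decay(m, 0.7, 0.25, 0.80) + rng.normal(0, 0.01, m.size)
F_int = decay(m, 0.7, 0.25, 0.72) + rng.normal(0, 0.01, m.size)

(Ar, Br, p_ref), _ = curve_fit(decay, m, F_ref, p0=[0.7, 0.25, 0.9])
(Ai, Bi, p_int), _ = curve_fit(decay, m, F_int, p0=[0.7, 0.25, 0.9])

r_clifford = (1 - p_ref) * (d - 1) / d          # error per Clifford
r_gate = (1 - p_int / p_ref) * (d - 1) / d      # error per interleaved gate
print(f"error/Clifford ~ {r_clifford:.3f}, error/phase gate ~ {r_gate:.3f}")
```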

  11. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.

  12. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  13. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  14. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  15. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    OpenAIRE

    2009-01-01

    Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamic (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include single loop cooling down or heating-up by disturbing the heat transfer in the steam gene...

  16. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    , and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark...

  17. Criticality Safety Code Validation with LWBR’s SB Cores

    Energy Technology Data Exchange (ETDEWEB)

    Putman, Valerie Lee

    2003-01-01

    The first set of critical experiments from the Shippingport Light Water Breeder Reactor Program included eight, simple geometry critical cores built with 233UO2-ZrO2, 235UO2-ZrO2, ThO2, and ThO2-233UO2 nuclear materials. These cores are evaluated, described, and modeled to provide benchmarks and validation information for INEEL criticality safety calculation methodology. In addition to consistency with INEEL methodology, benchmark development and nuclear data are consistent with International Criticality Safety Benchmark Evaluation Project methodology.Section 1 of this report introduces the experiments and the reason they are useful for validating some INEEL criticality safety calculations. Section 2 provides detailed experiment descriptions based on currently available experiment reports. Section 3 identifies criticality safety validation requirement sources and summarizes requirements that most affect this report. Section 4 identifies relevant hand calculation and computer code calculation methodologies used in the experiment evaluation, benchmark development, and validation calculations. Section 5 provides a detailed experiment evaluation. This section identifies resolutions for currently unavailable and discrepant information. Section 5 also reports calculated experiment uncertainty effects. Section 6 describes the developed benchmarks. Section 6 includes calculated sensitivities to various benchmark features and parameters. Section 7 summarizes validation results. Appendices describe various assumptions and their bases, list experimenter calculations results for items that were independently calculated for this validation work, report other information gathered and developed by SCIENTEC personnel while evaluating these same experiments, and list benchmark sample input and miscellaneous supplementary data.

  18. Computer simulation of Angra-2 PWR nuclear reactor core using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, Marcos P.C. de; Rebello, Wilson F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia - Secao de Engenharia Nuclear, Rio de Janeiro, RJ (Brazil); Oliveira, Claudio L. [Universidade Gama Filho, Departamento de Matematica, Rio de Janeiro, RJ (Brazil); Vellozo, Sergio O., E-mail: vellozo@cbpf.br [Centro Tecnologico do Exercito. Divisao de Defesa Quimica, Biologica e Nuclear, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos Gaduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    In this work the MCNPX (Monte Carlo N-Particle Transport Code) code was used to develop a computerized model of the core of the Angra 2 PWR (Pressurized Water Reactor) nuclear reactor. The model was created without any kind of homogenization, but using the real geometric information and material composition of that reactor, obtained from the FSAR (Final Safety Analysis Report). The model is still being improved, and the version presented in this work is validated by comparing values calculated by MCNPX with results calculated by other means and presented in the FSAR. This paper shows the results already obtained for k-eff and k-infinity, general parameters of the core, considering the reactor operating under stationary conditions of initial testing and operation. Other stationary operating conditions have been simulated and, in all tested cases, there was close agreement between the values calculated computationally through this model and the data presented in the FSAR, which were obtained by other codes. This model is expected to become a valuable tool for many future applications. (author)

  19. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and rising challenges that higher education institutions (HEIs meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains detailed examination of types, approaches and scope of benchmarking initiatives. The thorough insight of benchmarking applications enabled developing classification of benchmarking undertakings in HEIs. The paper includes review of the most recent benchmarking projects and relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity. The presented examples were chosen in order to exemplify different approaches to benchmarking in higher education setting. The study was performed on the basis of the published reports from benchmarking projects, scientific literature and the experience of the author from the active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  1. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10^6) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10^6 processor cores, paving the way for more realistic simulations in the next model generation.

  2. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-UK-TR-2017-0029 (Contract FA8655-12-1-2021; Grant 12-2021; Program Element 61102F). The report describes automated and assistive tools for accelerated migration of scientific computing code to heterogeneous multicore systems. The approach was based on the OmpSs programming model and the performance tools that constitute two strategic

  3. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    Science.gov (United States)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density-sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher-density mineral such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher- and lower-density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor, consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower-density sediment matrix disturbed by burrow tubes and the inclusion of a high-density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which results from sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
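
    The slice-variance measure described above is straightforward to compute from a stack of XCT slices. The sketch below uses a synthetic volume of Hounsfield Units in place of real scanner data; the numbers are only meant to show how reworked layers raise the per-slice variance.

```python
import numpy as np

# Synthetic XCT volume (depth x rows x cols) of Hounsfield Units; real data
# would come from the scanner. Values here only illustrate the calculation.
rng = np.random.default_rng(0)
volume = rng.normal(loc=1450, scale=30, size=(200, 64, 64))

# Simulate a bioturbated interval: mix in sparse higher-density "burrow" voxels.
volume[120:140] += rng.choice([0.0, 300.0], p=[0.9, 0.1], size=(20, 64, 64))

# Variance of HU values over each flat-lying slice; reworked (bioturbated)
# layers draw higher- and lower-density material together, raising the variance.
slice_variance = volume.reshape(volume.shape[0], -1).var(axis=1)

for depth in (50, 130):
    print(f"slice {depth}: variance = {slice_variance[depth]:.1f}")
```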

  4. A simulation of a pebble bed reactor core by the MCNP-4C computer code

    Directory of Open Access Journals (Sweden)

    Bakhshayesh Moshkbar Khalil

    2009-01-01

    Lack of energy is a major crisis of our century; the irregular increase of fossil fuel costs has forced us to search for novel, cheaper, and safer sources of energy. Pebble bed reactors - an advanced new generation of reactors with specific advantages in safety and cost - might turn out to be the desired candidate for the role. The calculation of the critical height of a pebble bed reactor at room temperature, using the MCNP-4C computer code, is the main goal of this paper. In order to reduce the MCNP computing time compared to previously proposed schemes, we have devised a new simulation scheme. Different arrangements of kernels in fuel pebble simulations were investigated and the best arrangement for decreasing the MCNP execution time (while keeping the accuracy of the results) was chosen. The neutron flux distribution and control rod worth, as well as their shadowing effects, have also been considered in this paper. All calculations done for the HTR-10 reactor core are in good agreement with experimental results.

  5. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity - vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for the computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code "exaFMM" on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
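
    The compute-bound argument above is a roofline-style comparison between the machine's byte/flop ratio and the algorithm's byte/flop requirement. A tiny sketch of that check, using the approximate figures quoted in the abstract:

```python
# Roofline-style check: an algorithm stays compute bound as long as the bytes
# it must move per flop are below the machine's byte/flop ratio.
machine_byte_per_flop = 0.2    # Xeon Phi / Kepler / BlueGene/Q, per the abstract
fmm_byte_per_flop = 0.01       # approximate FMM requirement quoted above

def compute_bound(alg_bytes_per_flop: float, machine_bytes_per_flop: float) -> bool:
    # If the algorithm needs fewer bytes per flop than the machine supplies,
    # the flop units, not memory bandwidth, limit performance.
    return alg_bytes_per_flop <= machine_bytes_per_flop

print("FMM compute bound:", compute_bound(fmm_byte_per_flop, machine_byte_per_flop))
print("headroom factor  :", machine_byte_per_flop / fmm_byte_per_flop)
```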

  6. GPUs benchmarking in subpixel image registration algorithm

    Science.gov (United States)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in different scientific fields, such as medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is to use the cross correlation, taking the highest value of the correlation image. The shift resolution is given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. With the original images, subpixel shifting can be achieved by multiplying the discrete Fourier transform by a linear phase with different slopes. This method is time consuming because each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then a subpixel approach using the technique described before. We consider it a 'brute force' method. We present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shift step in every loop and achieving a high resolution in few steps. The program was executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
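
    A minimal sketch of the 'brute force' subpixel search described above: candidate shifts are applied as linear phase ramps in the Fourier domain and scored against the reference image, first on an integer-pixel grid and then on a finer grid around the peak. The test image and shift are synthetic, and the implementation is an illustration of the idea rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
true_shift = (2.3, -1.7)                       # (rows, cols), subpixel

ky = np.fft.fftfreq(ref.shape[0])[:, None]     # cycles per sample, rows
kx = np.fft.fftfreq(ref.shape[1])[None, :]     # cycles per sample, cols

F = np.fft.fft2(ref)
# Build the "moving" image by shifting the reference in the Fourier domain.
G = F * np.exp(-2j * np.pi * (ky * true_shift[0] + kx * true_shift[1]))

def score(F, G, dy, dx):
    """Correlation of the reference with the moving image shifted back by (dy, dx).

    By Parseval's theorem this is a plain sum in the Fourier domain, so no
    inverse FFT is needed for each candidate shift.
    """
    phase = np.exp(2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.sum(F * np.conj(G * phase)))

# Coarse-to-fine search: integer pixels first, then a finer grid around the peak.
best = max(((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
           key=lambda s: score(F, G, *s))
step = 0.1
candidates = [(best[0] + i * step, best[1] + j * step)
              for i in range(-10, 11) for j in range(-10, 11)]
best = max(candidates, key=lambda s: score(F, G, *s))
print("estimated shift:", best, "true shift:", true_shift)
```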

  7. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
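
    The benchmark construction sketched above (break positions as a Poisson process, break sizes normally distributed) and the centered-RMSE scoring can be illustrated in a few lines; the series length, break rate and break-size spread below are assumptions for illustration, not the values used by HOME.

```python
import numpy as np

rng = np.random.default_rng(42)
n_months = 100 * 12
truth = rng.normal(0.0, 1.0, n_months)          # homogeneous "climate" signal

# Break positions as a Poisson process (here ~5 breaks per century on average),
# with normally distributed break sizes added cumulatively after each position.
n_breaks = rng.poisson(5)
positions = np.sort(rng.integers(1, n_months, n_breaks))
sizes = rng.normal(0.0, 0.8, n_breaks)
inhomogeneous = truth.copy()
for pos, size in zip(positions, sizes):
    inhomogeneous[pos:] += size

def centered_rmse(estimate, reference):
    """Centered RMSE: mean offsets are removed before comparing."""
    e = estimate - estimate.mean()
    r = reference - reference.mean()
    return np.sqrt(np.mean((e - r) ** 2))

# A homogenization algorithm would try to recover `truth` from `inhomogeneous`;
# here we just score the raw series to show how the metric is applied.
print("CRMSE of unhomogenized series:", round(centered_rmse(inhomogeneous, truth), 3))
```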

  8. Uncommon primary tumors of the orbit diagnosed by computed tomography-guided core needle biopsy: report of two cases

    Energy Technology Data Exchange (ETDEWEB)

    Tyng, Chiang Jeng; Matushita Junior, Joao Paulo Kawaoka; Bitencourt, Almir Galvao Vieira; Amoedo, Mauricio Kauark; Barbosa, Paula Nicole Vieira; Chojniak, Rubens, E-mail: almirgvb@yahoo.com.br [A.C.Camargo Cancer Center, Sao Paulo, SP (Brazil). Dept. de Imagem; Neves, Flavia Branco Cerqueira Serra [Hospital do Servidor Publico Estadual, Sao Paulo, SP (Brazil). Div. de Oftalmologia

    2014-11-15

    Computed tomography-guided percutaneous biopsy is a safe and effective alternative method for evaluating selected intra-orbital lesions where the preoperative diagnosis is important for the therapeutic planning. The authors describe two cases of patients with uncommon primary orbital tumors whose diagnosis was obtained by means of computed tomography-guided core needle biopsy, with emphasis on the technical aspects of the procedure. (author)

  9. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  10. SIMMER-II: A computer program for LMFBR disrupted core analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bohl, W.R.; Luck, L.B.

    1990-06-01

    SIMMER-II (Version 12) is a computer program to predict the coupled neutronic and fluid-dynamics behavior of liquid-metal fast reactors during core-disruptive accident transients. The modeling philosophy is based on the use of general, but approximate, physics to represent interactions of accident phenomena and regimes rather than a detailed representation of specialized situations. Reactor neutronic behavior is predicted by solving space- (r,z), energy-, and time-dependent neutron conservation equations (discrete ordinates transport or diffusion). The neutronics and the fluid dynamics are coupled via temperature- and background-dependent cross sections and the reactor power distribution. The fluid-dynamics calculation solves multicomponent, multiphase, multifield equations for mass, momentum, and energy conservation in (r,z) or (x,y) geometry. A structure field with nine density and five energy components, a liquid field with eight density and six energy components, and a vapor field with six density and one energy component are coupled by exchange functions representing a modified dispersed-flow regime with a zero-dimensional intra-cell structure model.

  11. SUPERENERGY-2: a multiassembly, steady-state computer code for LMFBR core thermal-hydraulic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Basehore, K.L.; Todreas, N.E.

    1980-08-01

    Core thermal-hydraulic design and performance analyses for Liquid Metal Fast Breeder Reactors (LMFBRs) require repeated detailed multiassembly calculations to determine radial temperature profiles and subchannel outlet temperatures for various core configurations and subassembly structural analyses. At steady-state, detailed core-wide temperature profiles are required for core restraint calculations and subassembly structural analysis. In addition, sodium outlet temperatures are routinely needed for each reactor operating cycle. The SUPERENERGY-2 thermal-hydraulic code was designed specifically to meet these designer needs. It is applicable only to steady-state, forced-convection flow in LMFBR core geometries.

  12. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  13. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore

  14. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  15. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Research report by IPSE Studies, 25 November 2008, by J.L.T. Blank: a manual for the secondary education (VO) benchmark, i.e. a guide to reading the

  16. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes, which makes it difficult

  17. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking

  18. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
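
    For readers wanting to reproduce the flavour of such a comparison, the hedged sketch below pits a random forest trained on all descriptors against one trained on forward-selected descriptors, using scikit-learn and synthetic regression data; it is not the study's protocol, datasets or MARS implementation.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a QSAR dataset: 20 "descriptors", 8 of them informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=8,
                       noise=10.0, random_state=0)

rf = RandomForestRegressor(n_estimators=25, random_state=0)
baseline = cross_val_score(rf, X, y, cv=5).mean()

# Forward selection wrapped around the same random forest.
selector = SequentialFeatureSelector(rf, n_features_to_select=8,
                                     direction="forward", cv=3)
X_reduced = selector.fit_transform(X, y)
reduced = cross_val_score(rf, X_reduced, y, cv=5).mean()

print(f"mean R^2, all 20 descriptors : {baseline:.3f}")
print(f"mean R^2, 8 forward-selected : {reduced:.3f}")
```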

  19. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views … are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained.

  20. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  1. Experimental characterization of cement-bentonite interaction using core infiltration techniques and 4D computed tomography

    Science.gov (United States)

    Dolder, F.; Mäder, U.; Jenni, A.; Schwendener, N.

    Deep geological storage of radioactive waste foresees cementitious materials as reinforcement of tunnels and as backfill. Bentonite is proposed to enclose spent fuel drums, and as drift seals. The emplacement of cementitious material next to clay material generates an enormous chemical gradient in pore water composition that drives diffusive solute transport. Laboratory studies and reactive transport modeling predict significant mineral alteration at and near interfaces, mainly resulting in a decrease of porosity in bentonite. The goal of this project is to characterize and quantify the cement/bentonite skin effects spatially and temporally in laboratory experiments. A newly developed mobile X-ray transparent core infiltration device was used, which allows performing X-ray computed tomography (CT) periodically without interrupting a running experiment. A pre-saturated cylindrical MX-80 bentonite sample (1920 kg/m3 average wet density) is subjected to a confining pressure as a constant total pressure boundary condition. The infiltration of a hyperalkaline (pH 13.4), artificial OPC (ordinary Portland cement) pore water into the bentonite plug alters the mineral assemblage over time as an advancing reaction front. The related changes in X-ray attenuation values are related to changes in phase densities, porosity and local bulk density and are tracked over time periodically by non-destructive CT scans. Mineral precipitation is observed in the inflow filter. Mineral alteration in the first millimeters of the bentonite sample is clearly detected and the reaction front is presently progressing with an average linear velocity that is 8 times slower than that for anions. The reaction zone is characterized by a higher X-ray attenuation compared to the signal of the pre-existing mineralogy. Chemical analysis of the outflow fluid showed initially elevated anion and cation concentrations compared to the infiltration fluid due to anion exclusion effects related to compaction of

  2. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)

    1993-10-01

    The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

  3. Clogging evaluation of porous asphalt concrete cores in conjunction with medical x-ray computed tomography

    Science.gov (United States)

    Su, Yu-Min; Hsu, Chen-Yu; Lin, Jyh-Dong

    2014-03-01

    This study assessed the porosity of Porous Asphalt Concrete (PAC) in conjunction with a medical X-ray computed tomography (CT) facility. The PAC was designed as the surface course to achieve a target porosity of 18%. Graded aggregates, soils blended with 50% coarse sand, and crushed gravel wrapped with geotextile were compacted and served as the base, subbase, and infiltration layers underneath the PAC. The test site, constructed in 2004, is located in northern Taiwan, where the daily traffic has been light and limited. The porosity of the test track was investigated. The permeability coefficient of the PAC was found to be severely degraded, from 2.2×10^-1 to 1.2×10^-3 cm/sec, after nine years of service, while the permeability below the surface course remained intact. Several field PAC cores were drilled and used to evaluate the distribution of air voids nondestructively with a medical X-ray CT. The scans were administered in helical mode, and two cross-sectional virtual slices were exported in seconds for analyzing the air void distribution. The results show that clogging of voids occurred within just 20 mm below the surface and can reduce the porosity by as much as about 3%. It was also found that roller compaction can decrease the porosity by 4%. The permeability reduction at this test site can be attributed to PAC voids that were compacted by the roller during construction and filled by dust on the surface during service.

  4. The use of gamma ray computed tomography to investigate soil compaction due to core sampling devices

    Energy Technology Data Exchange (ETDEWEB)

    Pires, Luiz F.; Arthur, Robson C.J.; Correchel, Vladia; Bacchi, Osny O.S.; Reichardt, Klaus [Centro de Energia Nuclear na Agricultura (CENA), Piracicaba, SP (Brazil); Brasil, Rene P. Camponez do [Sao Paulo Univ., Piracicaba, SP (Brazil). Escola Superior de Agricultura Luiz de Queiroz. Dept. de Engenharia Rural

    2004-09-15

    Compaction processes can influence soil physical properties such as soil density, porosity, and pore size distribution, and processes like soil water and nutrient movement, root system distribution, and others. Soil porosity modification has important consequences, such as alterations in the results of soil water retention curves. These alterations may cause differences in soil water storage calculations and matrix potential values, which are utilized in irrigation management systems. Because of this, soil-sampling techniques should avoid alterations of the sample structure. In this work, soil sample compaction caused by core sampling devices was investigated using the gamma ray computed tomography technique. A first-generation tomograph with a fixed source-detector arrangement and translation/rotation movements of the sample was utilized to obtain the images. The radioactive source is 241Am, with an activity of 3.7 GBq, and the detector consists of a 3 in. x 3 in. NaI(Tl) scintillation crystal coupled to a photomultiplier tube. Soil samples were taken from an experimental field using cylinders 4.0 cm high and 2.6 cm in diameter. Based on image analyses it was possible to detect compacted regions next to the cylinder wall in all samples, caused by the sampling system. Tomographic unit profiles of the samples permitted the identification of higher soil density values in the deeper regions of the sample, and it was possible to determine the average densities and thicknesses of these layers. Tomographic analysis proved to be a very useful tool for soil compaction characterization and presented many advantages over traditional methods. (author)

  5. Non-invasive volumetric assessment of aortic atheroma: a core laboratory validation using computed tomography angiography.

    Science.gov (United States)

    Hammadah, Muhammad; Qintar, Mohammed; Nissen, Steven E; John, Julie St; Alkharabsheh, Saqer; Mobolaji-Lawal, Motunrayo; Philip, Femi; Uno, Kiyoko; Kataoka, Yu; Babb, Brett; Poliszczuk, Roman; Kapadia, Samir R; Tuzcu, E Murat; Schoenhagen, Paul; Nicholls, Stephen J; Puri, Rishi

    2016-01-01

    Aortic atherosclerosis has been linked with worse peri- and post-procedural outcomes following a range of aortic procedures. Yet, there are currently no standardized methods for non-invasive volumetric pan-aortic plaque assessment. We propose a novel means of more accurately assessing plaque volume across whole aortic segments using computed tomography angiography (CTA) imaging. Sixty patients who underwent CTA prior to trans-catheter aortic valve implantation were included in this analysis. Specialized software (3mensio Vascular™, Pie Medical, Maastricht, Netherlands) was used to reconstruct images using a centerline approach, thus creating true cross-sectional aortic images akin to those produced with intravascular ultrasonography. Following aortic segmentation (from the aortic valve to the renal artery origin), atheroma areas were measured across multiple contiguous, evenly spaced (10 mm) cross-sections. Percent atheroma volume (PAV), total atheroma volume (TAV) and calcium score were calculated. In our population (age 79.9 ± 8.5 years, male 52%, diabetes 27%, CAD 84%, PVD 20%), the mean ± SD number of cross-sections measured for each patient was 35.1 ± 3.5. Mean aortic PAV and TAV were 33.2 ± 2.51% and 83,509 ± 17,078 mm^3, respectively. The median (IQR) calcium score was 1.5 (0.7-2.5). For inter-observer variability and agreement in plaque area among 4 different analysts, the mean (SD) coefficient of variation was 14.1 (5.4), and the mean (95% CI) Lin's concordance correlation coefficient was 0.79 (0.62-0.89), effectively simulating a Core Laboratory scenario. We provide an initial validation of cross-sectional volumetric aortic atheroma assessment using CTA. This proposed methodology highlights the potential for utilizing non-invasive aortic plaque imaging for risk prediction across a range of clinical scenarios.
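
    Percent and total atheroma volume are typically derived from the per-slice areas in a straightforward way. The sketch below follows the usual PAV/TAV conventions with made-up areas; the study's core-laboratory protocol may differ in detail.

```python
# Sketch of per-slice plaque metrics; areas (mm^2) are made-up values and the
# formulas follow the usual percent/total atheroma volume conventions, which
# may differ in detail from the core-laboratory protocol described above.
slice_spacing_mm = 10.0
outer_wall_area = [620.0, 615.0, 630.0, 640.0, 655.0]   # vessel (outer) area per slice
lumen_area      = [410.0, 400.0, 415.0, 430.0, 445.0]   # lumen area per slice

plaque_area = [w - l for w, l in zip(outer_wall_area, lumen_area)]

# Total atheroma volume: summed plaque area times the spacing between slices.
tav_mm3 = sum(a * slice_spacing_mm for a in plaque_area)

# Percent atheroma volume: plaque volume as a fraction of total vessel volume.
pav_percent = 100.0 * sum(plaque_area) / sum(outer_wall_area)

print(f"TAV = {tav_mm3:.0f} mm^3, PAV = {pav_percent:.1f} %")
```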

  6. Post mortem computed tomography and core needle biopsy in comparison to autopsy in eleven Bernese mountain dogs with histiocytic sarcoma.

    Science.gov (United States)

    Hostettler, Franziska C; Wiener, Dominique J; Welle, Monika M; Posthaus, Horst; Geissbühler, Urs

    2015-09-02

    Bernese mountain dogs are reported to have a shorter life expectancy than other breeds. A major reason for this has been assigned to a high tumour prevalence, especially of histiocytic sarcoma. The efforts made by the breeding clubs to improve the longevity with the help of genetic tests and breeding value estimations are impeded by insufficiently reliable diagnoses regarding the cause of death. The current standard for post mortem examination in animals is performance of an autopsy. In human forensic medicine, imaging modalities, such as computed tomography and magnetic resonance imaging, are used with increasing frequency as a complement to autopsy. The present study investigates, whether post mortem computed tomography in combination with core needle biopsy is able to provide a definitive diagnosis of histiocytic sarcoma. For this purpose we have analysed the results of post mortem computed tomography and core needle biopsy in eleven Bernese mountain dogs. In the subsequent autopsy, every dog had a definitive diagnosis of histiocytic sarcoma, based on immunohistochemistry. Computed tomography revealed space-occupying lesions in all dogs. Lesion detection by post mortem computed tomography was similar to lesion detection in autopsy for lung tissue (9 cases in computed tomography / 8 cases in autopsy), thoracic lymph nodes (9/8), spleen (6/7), kidney (2/2) and bone (3/3). Hepatic nodules, however, were difficult to detect with our scanning protocol (2/7). Histology of the core needle biopsies provided definitive diagnoses of histiocytic sarcoma in ten dogs, including confirmation by immunohistochemistry in six dogs. The biopsy samples of the remaining dog did not contain any identifiable neoplastic cells. Autolysis was the main reason for uncertain histological diagnoses. Post mortem computed tomography is a fast and effective method for the detection of lesions suspicious for histiocytic sarcoma in pulmonary, thoracic lymphatic, splenic, osseous and renal tissue

  7. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
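
    The effect being measured can be illustrated, in a much-simplified form, with a Python analogue of the STREAM triad: arrays are first-touched while pinned to one socket and the kernel is then timed from a local and a presumed-remote core. This is not the modified STREAM benchmark from the paper; it is Linux-only, relies on the first-touch page-placement policy, and the CPU ids are assumptions about the host machine.

```python
import os
import time
import numpy as np

# CPU ids 0 and 24 are assumptions about which cores sit on which socket;
# adjust them (see `lscpu`) for the machine at hand.
N = 20_000_000
LOCAL_CPU, REMOTE_CPU = 0, 24

os.sched_setaffinity(0, {LOCAL_CPU})
a = np.zeros(N); b = np.ones(N); c = np.ones(N)   # first touch on the local node

def triad_bandwidth_gbs(repeats: int = 5) -> float:
    scalar = 3.0
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.add(b, scalar * c, out=a)               # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)
    # Counts the 3 named arrays of 8-byte doubles; the temporary from
    # scalar * c adds extra traffic, so treat the figure as approximate.
    return 3 * N * 8 / best / 1e9

print(f"local  access: {triad_bandwidth_gbs():.1f} GB/s")
os.sched_setaffinity(0, {REMOTE_CPU})
print(f"remote access: {triad_bandwidth_gbs():.1f} GB/s")
```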

  8. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
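
    The first-tier screening step described above reduces to a simple comparison of measured media concentrations against the corresponding NOAEL-based benchmarks; the sketch below uses invented benchmark and measurement values purely to show the logic.

```python
# First-tier screening: anything that exceeds its NOAEL-based benchmark is
# retained as a contaminant of potential concern (COPC). Both the benchmark
# values and the measurements below are invented numbers.
noael_benchmarks_mg_per_l = {"cadmium": 0.005, "zinc": 0.8, "toluene": 2.0}
measured_mg_per_l         = {"cadmium": 0.012, "zinc": 0.3, "toluene": 0.1}

copcs = [chem for chem, conc in measured_mg_per_l.items()
         if conc > noael_benchmarks_mg_per_l[chem]]

print("Retained as COPCs for the baseline assessment:", copcs)
```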

  9. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  10. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the associated dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  11. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  12. Embedded atom computer simulation of lattice distortion and dislocation core structure and mobility in Fe-Cr alloys

    Energy Technology Data Exchange (ETDEWEB)

    Farkas, D.; Schon, C.G.; Lima, M.S.F. de [Virginia Polytechnic Inst., Blacksburg, VA (United States). Dept. of Materials Science and Engineering; Goldenstein, H. [Escola Politecnica USP, Sao Paulo (Brazil). Dept. de Metalurgia

    1996-01-01

    The atomistic structure of dislocation cores of <111> screw dislocations in disordered Fe-Cr b.c.c. alloys was simulated using embedded atom method potentials and molecular statics computer simulation. The mixed Fe-Cr interatomic potentials used were derived by fitting to the thermodynamic data of the disordered system and the measured lattice parameter changes of Fe upon Cr additions. The potentials predict phase separation as the most stable configuration for the central region of the phase diagram. The next most stable situation is the disordered b.c.c. phase. The structure of the screw 1/2 <111> dislocation core was studied using atomistic computer simulation and an improved visualization method for the representation of the resulting structures. The structure of the dislocation core is different from that typical of 1/2 <111> dislocations in pure b.c.c. materials. The core structure in the alloy tends to lose the threefold symmetry seen in pure b.c.c. materials and the stress necessary to initiate dislocation motion increases with Cr content. The mobility of kinks in these screw dislocations was also simulated and it was found that while the critical stress for kink motion in pure Fe is extremely low, it increases significantly with the addition of Cr. The implications of these differences for mechanical behavior are discussed.

  13. Eigenvalue analysis using a full-core Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Okafor, K.C.; Zino, J.F. (Westinghouse Savannah River Co., Aiken, SC (United States))

    1992-01-01

    The reactor physics codes used at the Savannah River Site (SRS) to predict reactor behavior have been continually benchmarked against experimental and operational data. A particular benchmark variable is the observed initial critical control rod position. Historically, there has been some difficulty predicting this position because of the difficulties inherent in using computer codes to model experimental or operational data. The Monte Carlo method is applied in this paper to study the initial critical control rod positions for the SRS K Reactor. A three-dimensional, full-core MCNP model of the reactor was developed for this analysis.

  14. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  15. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  16. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  17. Percutaneous computed tomography-guided core needle biopsy of soft tissue tumors: results and correlation with surgical specimen analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chojniak, Rubens; Grigio, Henrique Ramos; Bitencourt, Almir Galvao Vieira; Pinto, Paula Nicole Vieira; Tyng, Chiang J.; Cunha, Isabela Werneck da; Aguiar Junior, Samuel; Lopes, Ademar, E-mail: chojniak@uol.com.br [Hospital A.C. Camargo, Sao Paulo, SP (Brazil)

    2012-09-15

    Objective: To evaluate the efficacy of percutaneous computed tomography (CT)-guided core needle biopsy of soft tissue tumors in obtaining appropriate samples for histological analysis, and to compare its diagnoses with the results of surgical pathology where available. Materials and Methods: The authors reviewed medical records, imaging and histological reports of 262 patients with soft-tissue tumors submitted to CT-guided core needle biopsy in an oncologic reference center between 2003 and 2009. Results: Appropriate samples were obtained in 215 (82.1%) of the 262 patients. The most prevalent tumors were sarcomas (38.6%), metastatic carcinomas (28.8%), benign mesenchymal tumors (20.5%) and lymphomas (9.3%). Histological grading was feasible in 92.8% of sarcoma patients, with the majority of them (77.9%) classified as high grade tumors. Of the total sample, 116 patients (44.3%) underwent surgical excision and diagnosis confirmation. Core biopsy demonstrated 94.6% accuracy in the identification of sarcomas, with 96.4% sensitivity and 89.5% specificity. Significant intermethod agreement on histological grading was observed between core biopsy and surgical resection (p < 0.001; kappa = 0.75). Conclusion: CT-guided core needle biopsy demonstrated high diagnostic accuracy in the evaluation of soft tissue tumors as well as in the histological grading of sarcomas, allowing appropriate therapeutic planning. (author)
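
    The accuracy, sensitivity, specificity and kappa statistics reported in studies like this follow directly from a 2x2 table of biopsy results against the surgical reference standard. The sketch below shows the arithmetic with illustrative counts that are not reconstructed from the study.

```python
# 2x2 table of core-biopsy results vs. the surgical reference standard for
# "sarcoma"; the counts are illustrative only, not taken from the paper.
tp, fp, fn, tn = 53, 2, 3, 34
total = tp + fp + fn + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / total

# Cohen's kappa: observed agreement corrected for chance agreement.
p_observed = accuracy
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"accuracy={accuracy:.3f} kappa={kappa:.3f}")
```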

  18. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  19. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; benchmarking was a complete process.

  20. Multi-Core Technology for Fault Tolerant High-Performance Spacecraft Computer Systems

    Science.gov (United States)

    Behr, Peter M.; Haulsen, Ivo; Van Kampenhout, J. Reinier; Pletner, Samuel

    2012-08-01

    The current architectural trends in the field of multi-core processors can provide an enormous increase in processing power by exploiting the parallelism available in many applications. In particular because of their high energy efficiency, it is clear that multi-core processor-based systems will also be used in future space missions. In this paper we present the system architecture of a powerful optical sensor system based on the eight-core P4080 multi-core processor from Freescale. The fault-tolerant structure and the highly effective FDIR concepts implemented at different hardware and software levels of the system are described in detail. The space application scenario, and thus the main requirements for the sensor system, have been defined by a complex tracking sensor application for autonomous landing or docking manoeuvres.

  1. Current transformers with nanocrystalline alloy toroidal core: analytical, computational and experimental studies

    Directory of Open Access Journals (Sweden)

    Benedito Antonio Luciano

    2012-10-01

    In this paper, theoretical analysis and experimental results concerning the performance of toroidal cores used in current transformers are presented. For most problems concerning transformer design, analytical methods are useful, but numerical methods provide a better understanding of the transformer's electromagnetic behaviour. Numerical field solutions may be used to determine the electrical equivalent circuit parameters of toroidal core current transformers. Since the exciting current of current transformers alters the ratio and phase angle of primary and secondary currents, it is made as small as possible through the use of high-permeability and low-loss magnetic material in the construction of the core. According to the experimental results presented in this work, in comparison with other soft magnetic materials, nanocrystalline alloys appear to be the best material to be used in toroidal cores for current transformers.

  2. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  3. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  4. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  5. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Junghoon Lee

    2011-03-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  6. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    Science.gov (United States)

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  7. Computational Design and Analysis of Core Material of Single-Phase Capacitor Run Induction Motor

    Directory of Open Access Journals (Sweden)

    Gurmeet Singh

    2014-07-01

    A single-phase induction motor (SPIM) plays a crucial role in the industrial, domestic and commercial sectors, so efficient SPIMs are a foremost requirement of today's market. For efficient motors, many research methodologies and propositions have been given by researchers in the past. Various parameters, such as stator/rotor slot variation, size and shape of stator/rotor slots, stator/rotor winding configuration, and choice of core material, have a momentous impact on machine design. The core material influences the motor performance to a considerable degree: magnetic flux linkage and leakage depend primarily on the magnetic properties of the core material and the air gap. The analysis of the effects of the core material on the magnetic flux distribution and the performance of the induction motor is of immense importance to achieve the desired performance. An increase in air gap length results in deterioration of the air gap performance characteristics, while a decrease in air gap length leads to serious mechanical balancing concerns, so large variations of the air gap in either direction are not feasible. For optimized performance of the induction motor, the core material therefore plays a significant role. Using a higher magnetic flux density, reductions in magnetizing reactance and flux leakage can be achieved. In this work, the analysis of a single-phase induction motor has been carried out with different core materials. Four models have been simulated using Ansys Maxwell 15.0. Selecting a higher flux density for the same machine dimensions results in a large reduction in iron core losses and thereby improves the efficiency. In this paper, 2% higher efficiency is achieved with Steel_1010 compared to the machine using the conventional D23 material. Of the four models, the machines using Steel_1010 and Steel_1008 show the best results.

  8. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  9. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  10. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  11. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Document Server

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  12. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available As processor architectures have become more advanced, Intel introduced its Intel Core 2 Duo series processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studies the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz, and 3 GHz Intel Core 2 Duo processor frequencies. Our results show the performance scalability of the Intel Core 2 Duo series processors. Even though the AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by about 370 s for the AI workloads; in the case of the Physics/Quantum Computing programs it was about 940 s.
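
    The frequency scaling reported above can be illustrated with a toy estimate that assumes a fully CPU-bound workload, so that task completion time scales inversely with clock frequency; the baseline figure below is a hypothetical placeholder, not a SPEC measurement.

      # Toy TCT-vs-frequency estimate for a CPU-bound workload: cycles = t * f, then t' = cycles / f'.
      def estimate_tct(tct_baseline_s, f_baseline_ghz, f_target_ghz):
          cycles = tct_baseline_s * f_baseline_ghz * 1e9
          return cycles / (f_target_ghz * 1e9)

      baseline_s = 900.0                      # hypothetical time measured at 1.8 GHz
      for f in (1.0, 2.0, 3.0):
          print(f"{f:.1f} GHz: {estimate_tct(baseline_s, 1.8, f):6.0f} s")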

  13. Computational screening of core@shell nanoparticles for the hydrogen evolution and oxygen reduction reactions

    Science.gov (United States)

    Corona, Benjamin; Howard, Marco; Zhang, Liang; Henkelman, Graeme

    2016-12-01

    Using density functional theory calculations, a set of candidate nanoparticle catalysts are identified based on reactivity descriptors and segregation energies for the oxygen reduction and hydrogen evolution reactions. Trends in the data were identified by screening over 700 core@shell 2 nm transition metal nanoparticles for each reaction. High activity was found for nanoparticles with noble metal shells and a variety of core metals for both reactions. By screening for activity and stability, we obtain a set of interesting bimetallic catalysts, including cases that have reduced noble metal loadings and a higher predicted activity as compared to monometallic Pt nanoparticles.

  14. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal Year 1997, Volume 4, part 4-ESADA Plutonium Program Critical Experiments: Single-Region Core Configurations

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, H.; Abdurrahman, N.M.

    1999-05-01

    The purpose of this study is to simulate and assess the findings from selected ESADA experiments. It is presented in the format prescribed by the Nuclear Energy Agency Nuclear Science Committee for material to be included in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.

  15. A Computer-Aided Bibliometrics System for Journal Citation Analysis and Departmental Core Journal Ranking List Generation

    Directory of Open Access Journals (Sweden)

    Yih-Chearng Shiue

    2004-12-01

    Full Text Available Due to the tremendous increase and variation in serial publications, university departments find it difficult to generate and update their departmental core journal lists regularly and accurately, and libraries find it difficult to maintain their serial collections for different departments. The evaluation of a departmental core journal list is therefore an important task for departmental faculty and librarians. A departmental core journal list not only helps departments understand the research performance of faculty and students, but also helps librarians decide which journals to retain and which to cancel. In this study, a Computer-Aided Bibliometrics System was implemented and two methodologies (JCDF and LibJF) were proposed in order to generate a departmental core journal ranking list and perform journal citation analysis. Six departments were taken as examples, with MIS as the main one. One journal citation pattern was found, and the ratio of turning point to number of journals was always around 0.07 among the 10 journals and 6 departments. After comparison with four methodologies via overlapping rate and standard deviation distances, the two proposed methodologies were shown to be better than the questionnaire and library subscription methods.

  16. Exploration and Evaluation of Nanometer Low-power Multi-core VLSI Computer Architectures

    Science.gov (United States)

    2015-03-01

    …datapath size, the number and type of functional units, and the subset of available instructions supported. The tools then use these inputs to generate… to have problems synchronizing with the computation part of any computer system, typically called its datapath. This occurs because any memory… datapaths, they tend to be the defining computer element for speed and solving problems efficiently. Unfortunately, the gap between memory performance…

  17. COCO: a computer program for seismic analysis of a single column of the HTGR core

    Energy Technology Data Exchange (ETDEWEB)

    Rickard, N.D.

    1978-02-01

    The document serves as a user's manual and theoretical manual for the COCO code. COCO is a nonlinear numerical integration program designed to analyze a single column of the HTGR core for seismic excitation. Output of the code includes dowel forces, collision forces, and a time history of the motion of the blocks.

  18. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    Science.gov (United States)

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  19. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry-standard threads library. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. Incorporating the threads benchmarks into SKaMPI: SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it
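
    A rough flavor of such a micro-benchmark can be sketched in Python, whose threads normally map onto native (POSIX) threads; this measures only create/start/join cost and is not part of the suite described above.

      # Crude thread creation + join timing, in the spirit of a Pthreads micro-benchmark.
      import threading, time

      def measure_thread_create(n=1000):
          def worker():
              pass
          start = time.perf_counter()
          for _ in range(n):
              t = threading.Thread(target=worker)
              t.start()
              t.join()
          return (time.perf_counter() - start) / n

      print(f"mean create+start+join cost: {measure_thread_create() * 1e6:.1f} microseconds")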

  20. Cloud Computing as a Core Discipline in a Technology Entrepreneurship Program

    Science.gov (United States)

    Lawler, James; Joseph, Anthony

    2012-01-01

    Education in entrepreneurship continues to be a developing area of curricula for computer science and information systems students. Entrepreneurship is enabled frequently by cloud computing methods that furnish benefits to especially medium and small-sized firms. Expanding upon an earlier foundation paper, the authors of this paper present an…

  1. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    …survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions… Deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible…

  2. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  3. Conceptual study of advanced PWR core design. Development of advanced PWR core neutronics analysis system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chang Hyo; Kim, Seung Cho; Kim, Taek Kyum; Cho, Jin Young; Lee, Hyun Cheol; Lee, Jung Hun; Jung, Gu Young [Seoul National University, Seoul (Korea, Republic of)

    1995-08-01

    The neutronics design system of the advanced PWR consists of (i) a hexagonal cell and fuel assembly code for generation of homogenized few-group cross sections and (ii) a global core neutronics analysis code for computation of steady-state pin-wise or assembly-wise core power distribution, core reactivity with fuel burnup, control rod worth and reactivity coefficients, transient core power, etc. The major research target of the first year is to establish the numerical methods and solutions of the multi-group diffusion equations for neutronics code development. Specifically, the following studies are planned: (i) formulation of various numerical methods, such as the finite element method (FEM), the analytical nodal method (ANM), the analytic function expansion nodal (AFEN) method, and the polynomial expansion nodal (PEN) method, that are applicable to hexagonal core geometry; (ii) comparative evaluation of the numerical effectiveness of these methods based on numerical solutions to various hexagonal core neutronics benchmark problems. The results are as follows: (i) formulation of numerical solutions to the multi-group diffusion equations based on these numerical methods; (ii) numerical computations with the above methods for hexagonal neutronics benchmark problems, namely the VVER-1000 problem without reflector, the VVER-440 problem I with reflector, the modified IAEA PWR problems without and with reflector, the ANL large heavy water reactor problem, the small HTGR problem, and the VVER-440 problem II with reflector; (iii) comparative evaluation of the numerical effectiveness of the various numerical methods; (iv) development of HEXFEM, a multi-dimensional hexagonal core neutronics analysis code based on FEM. In the target year of this research, the spatial neutronics analysis code for hexagonal core geometry (tentatively called NEMSNAP-H) will be completed. Combining NEMSNAP-H with the hexagonal cell and assembly code will then equip us with a hexagonal core neutronics design system. (Abstract Truncated)
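
    For reference, the steady-state multi-group neutron diffusion equations that such finite element and nodal methods discretize can be written in the standard textbook form (not quoted from the report):

      \[
        -\nabla \cdot D_g(\mathbf{r})\,\nabla\phi_g(\mathbf{r})
          + \Sigma_{r,g}(\mathbf{r})\,\phi_g(\mathbf{r})
          = \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'=1}^{G} \nu\Sigma_{f,g'}(\mathbf{r})\,\phi_{g'}(\mathbf{r})
          + \sum_{g' \neq g} \Sigma_{s,g' \to g}(\mathbf{r})\,\phi_{g'}(\mathbf{r}),
        \qquad g = 1,\dots,G,
      \]

    where \(\phi_g\) is the group flux, \(D_g\) the diffusion coefficient, \(\Sigma_{r,g}\) the removal cross section, \(\Sigma_{s,g'\to g}\) the scattering matrix, \(\nu\Sigma_{f,g'}\) the fission production cross section, and \(\chi_g\) the fission spectrum.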

  4. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    …measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random…

  5. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    …compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless…

  6. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  7. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  8. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available Firm efficiency refers to revealed performance: how well a firm performs in its actual market environment, given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may be due to product innovation, management quality, work organization, or other factors that are not directly observed by the researcher. Managers need to continuously improve their firm's efficiency and effectiveness, to know the success factors and competitiveness determinants, and consequently to decide which performance measures are most critical in determining their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance, and then proposes a method to forecast and benchmark it.

  9. Computer simulation of dislocation core structure of metastable ⟨111⟩ dislocations in NiAl

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Z.Y. (Dept. of Materials Science and Engineering, Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States)); Vailhe, C. (Dept. of Materials Science and Engineering, Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States)); Farkas, D. (Dept. of Materials Science and Engineering, Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States))

    1993-10-01

    The atomistic structure of dislocation cores of ⟨111⟩ dislocations in NiAl was simulated using embedded atom method potentials and molecular statics computer simulation. In agreement with previous simulation work and experimental observations, the complete ⟨111⟩ dislocation is stable with respect to the two superpartials of 1/2⟨111⟩ separated by an antiphase boundary. The structure of the latter configuration, though metastable, is of interest in the search for ways of improving ductility in this material. The structure of the complete dislocation and that of the metastable superpartials was studied using atomistic computer simulation. An improved visualization method was used for the representation of the resulting structures. The structure of the partials is different from that typical of 1/2⟨111⟩ dislocations in b.c.c. materials and that reported previously for the B2 structure using model pair potentials. (orig.)

  10. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the…

  11. Natural Nuclear Reactor Oklo and Variation of Fundamental Constants Part 1: Computation of Neutronic of Fresh Core

    CERN Document Server

    Petrov, Yu V; Onegin, M S; Petrov, V Yu; Sakhnovskii, E G; Petrov, Yu.V.

    2006-01-01

    Using modern methods of reactor physics, we have performed full-scale calculations of the Oklo natural reactor. For reliability, we have used recent versions of two Monte Carlo codes: the Russian code MCU REA and the widely used code MCNP (USA). Both codes produce close results. We constructed a computer model of zone RZ2 of the Oklo reactor which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a large difference between the reactor spectrum and the Maxwell spectrum, which had previously been used for averaging cross sections in the Oklo reactor. The averaged cross section of Sm and its dependence on the shift of the resonance position (due to variation of fundamental constants) are significantly different from previous results. Contrary…

  12. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc.
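
    The work-stealing idea mentioned above can be illustrated with a toy scheduler in which each worker owns a double-ended queue, executes tasks from its own end, and steals from the opposite end of a random victim when idle; this is an illustrative sketch, not the scheduler used in the study.

      # Toy work-stealing scheduler over pre-populated task deques; illustrative only.
      import random, threading
      from collections import deque

      def run_work_stealing(task_lists, n_workers):
          deques = [deque(tasks) for tasks in task_lists]
          results = [[] for _ in range(n_workers)]

          def worker(wid):
              while True:
                  try:
                      task = deques[wid].pop()                 # take from own tail
                  except IndexError:
                      victims = [v for v in range(n_workers) if v != wid and deques[v]]
                      if not victims:
                          return                               # nothing left anywhere
                      try:
                          task = deques[random.choice(victims)].popleft()   # steal from head
                      except IndexError:
                          continue                             # lost the race; retry
                  results[wid].append(task())

          threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
          for t in threads: t.start()
          for t in threads: t.join()
          return results

      # Deliberately unbalanced load: worker 0 starts with every task, the rest must steal.
      tasks = [[(lambda i=i: i * i) for i in range(40)], [], [], []]
      print([len(r) for r in run_work_stealing(tasks, 4)])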

  13. In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond

    CERN Document Server

    Boldi, Paolo

    2013-01-01

    Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a. centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this paper, we approach the problem of computing geometric centralities, such as closeness and harmonic centrality, on very large graphs; traditionally this task requires an all-pairs shortest-path computation in the exact case, or a number of breadth-first traversals for approximated computations, but these techniques yield very weak statistical guarantees on highly disconnected graphs. We rather assume that the graph is accessed in a semi-streaming fashion, that is, that adjacency lists are scanned almost sequentially, and that a very small amount of memory (in the order of a dozen bytes) per node is available in core memory. We leverage the newly discovered algorithms based on HyperLogLog counters, making...
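
    The quantity being approximated can be made concrete with an exact (and therefore non-scalable) computation of harmonic centrality on a small undirected graph; HyperBall replaces the explicit breadth-first searches below with HyperLogLog counters so that graphs with billions of nodes become tractable.

      # Exact harmonic centrality H(x) = sum over y != x of 1/d(y, x), via one BFS per node.
      from collections import deque

      def harmonic_centrality(adj):
          """adj: {node: list of neighbours} of an undirected graph."""
          scores = {}
          for target in adj:
              dist = {target: 0}
              queue = deque([target])
              while queue:
                  u = queue.popleft()
                  for v in adj[u]:
                      if v not in dist:
                          dist[v] = dist[u] + 1
                          queue.append(v)
              scores[target] = sum(1.0 / d for node, d in dist.items() if node != target)
          return scores

      graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
      print(harmonic_centrality(graph))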

  14. Implementation of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  15. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

    Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research and experimental results have so far demonstrated this assertion, nor provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. Still a crucial need is to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  16. Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core

    Science.gov (United States)

    Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Petrov, V. Yu.; Sakhnovsky, E. G.

    2006-12-01

    Using modern methods of reactor physics, we performed full-scale calculations of the Oklo natural reactor. For reliability, we used recent versions of two Monte Carlo codes: the Russian code MCU-REA and the well-known international code MCNP. Both codes produced similar results. We constructed a computer model of the Oklo reactor zone RZ2 which takes into account all details of design and composition. The calculations were performed for three fresh cores with different uranium contents. Multiplication factors, reactivities, and neutron fluxes were calculated. We also estimated the temperature and void effects for the fresh core. As would be expected, we found for the fresh core a significant difference between reactor and Maxwell spectra, which had been used before for averaging cross sections in the Oklo reactor. The averaged cross section of ¹⁴⁹Sm and its dependence on the shift of a resonance position Er (due to variation of fundamental constants) are significantly different from previous results. Contrary to the results of previous papers, we found no evidence of a change of the samarium cross section: a possible shift of the resonance energy is given by the limits −73 ⩽ ΔEr ⩽ 62 meV. Following tradition, we have used formulas of Damour and Dyson to estimate the rate of change of the fine structure constant α. We obtain new, more accurate limits of −4×10⁻¹⁷ ⩽ α̇/α ⩽ 3×10⁻¹⁷ yr⁻¹. Further improvement of the accuracy of the limits can be achieved by taking account of the core burn-up. These calculations are in progress.

  17. Design of Single Board Computer Based on Core i7 Processor

    Institute of Scientific and Technical Information of China (English)

    黄斌

    2012-01-01

    In order to improve the processing power of rugged (harsh-environment) computers based on CompactPCI (CPCI), a design method for a CompactPCI computing module based on the low-power dual-core Intel Core i7 processor is proposed. The method covers the main design ideas and the implementation process of a Single Board Computer (SBC) based on the Intel Core i7 processor. The Intel Core i7 620LE processor is used to improve computer performance, and the thermal design ensures effective passive heat dissipation. The SBC has been put into use and has achieved good results in application.

  18. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is treated, after which…

  19. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    Science.gov (United States)

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  20. Design Tools for Accelerating Development and Usage of Multi-Core Computing Platforms

    Science.gov (United States)

    2014-04-01

    …(e.g., see [Bhattacharyya 2013]). Through their connections to computation graphs [Karp 1966] and Kahn process networks [Kahn 1974, Lee 1995]… [Kahn 1974] G. Kahn, The semantics of a simple language for parallel programming, Proceedings of the IFIP Congress, 1974. [Karp 1966] R. M. Karp and R. E. Miller, Properties of a model for parallel computations.

  1. Incorporating Computer-Aided Software in the Undergraduate Chemical Engineering Core Courses

    Science.gov (United States)

    Alnaizy, Raafat; Abdel-Jabbar, Nabil; Ibrahim, Taleb H.; Husseini, Ghaleb A.

    2014-01-01

    Introductions of computer-aided software and simulators are implemented during the sophomore-year of the chemical engineering (ChE) curriculum at the American University of Sharjah (AUS). Our faculty concurs that software integration within the curriculum is beneficial to our students, as evidenced by the positive feedback received from industry…

  2. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms.

    Science.gov (United States)

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2015-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and "neuromorphic algorithms" are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for

  3. Comparing neuromorphic solutions in action: implementing a bio-inspired solution to a benchmark classification task on three parallel-computing platforms

    Directory of Open Access Journals (Sweden)

    Alan eDiamond

    2016-01-01

    Full Text Available Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and neuromorphic algorithms are being developed. As they are maturing towards deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analogue Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model’s ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication…

  4. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogenous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  5. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  6. VN-Sim: A Way to Keep Core Concepts in a Crowded Computing Curriculum

    Directory of Open Access Journals (Sweden)

    R. Raymond Lang

    2012-02-01

    Full Text Available Contemporary computer science curricula must accommodate a broad array of developments important to the field. Tough choices have to be made between introducing newer topics and retaining fundamentals that ground the discipline as a whole. All too frequently, understanding of low level coding and its relation to basic hardware is sacrificed to make room for newer material. VN-Sim, a von Neumann machine simulator, provides a mechanism for streamlined coverage of low level coding and hardware topics.
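
    The low-level flavor that such a simulator preserves can be conveyed by a toy stored-program machine in which a single memory holds both instructions and data; this sketch is in the spirit of, but not identical to, VN-Sim, and its instruction set is invented for the example.

      # A tiny von Neumann machine: one accumulator, one memory holding code and data.
      def run(memory):
          acc, pc = 0, 0
          while True:
              op, arg = memory[pc]
              pc += 1
              if op == "LOAD":
                  acc = memory[arg]
              elif op == "ADD":
                  acc += memory[arg]
              elif op == "STORE":
                  memory[arg] = acc
              elif op == "JNZ":                  # jump if accumulator is non-zero
                  pc = arg if acc != 0 else pc
              elif op == "HALT":
                  return memory

      # Program: add memory[10], memory[10]-1, ..., 1 into memory[11].
      memory = {0: ("LOAD", 11), 1: ("ADD", 10), 2: ("STORE", 11),
                3: ("LOAD", 10), 4: ("ADD", 12), 5: ("STORE", 10),
                6: ("JNZ", 0),   7: ("HALT", 0),
                10: 5, 11: 0, 12: -1}
      print(run(memory)[11])                     # prints 15 (= 5 + 4 + 3 + 2 + 1)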

  7. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)
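
    The basic figure of merit for such source-driven sub-critical systems is the source multiplication (a standard point-reactor relation, not a result of the CRP benchmarks): each neutron injected by the external source induces, on average, a convergent chain of fission neutrons when the multiplication factor k is below one,

      \[
        M \;=\; 1 + k + k^2 + \cdots \;=\; \frac{1}{1-k}, \qquad k < 1,
      \]

    so an external source emitting \(S\) neutrons per second sustains a neutron production rate of roughly \(S/(1-k)\), which is why accurately characterizing the source-core coupling and monitoring the sub-criticality margin are central to these experiments.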

  8. Support for the Core Research Activities and Studies of the Computer Science and Telecommunications Board (CSTB)

    Energy Technology Data Exchange (ETDEWEB)

    Jon Eisenberg, Director, CSTB

    2008-05-13

    The Computer Science and Telecommunications Board of the National Research Council considers technical and policy issues pertaining to computer science (CS), telecommunications, and information technology (IT). The functions of the board include: (1) monitoring and promoting the health of the CS, IT, and telecommunications fields, including attention as appropriate to issues of human resources and funding levels and program structures for research; (2) initiating studies involving CS, IT, and telecommunications as critical resources and sources of national economic strength; (3) responding to requests from the government, non-profit organizations, and private industry for expert advice on CS, IT, and telecommunications issues; and to requests from the government for expert advice on computer and telecommunications systems planning, utilization, and modernization; (4) fostering interaction among CS, IT, and telecommunications researchers and practitioners, and with other disciplines; and providing a base of expertise in the National Research Council in the areas of CS, IT, and telecommunications. This award has supported the overall operation of CSTB. Reports resulting from the Board's efforts have been widely disseminated in both electronic and print form, and all CSTB reports are available at its World Wide Web home page at cstb.org. The following reports, resulting from projects that were separately funded by a wide array of sponsors, were completed and released during the award period: 2007: * Summary of a Workshop on Software-Intensive Systems and Uncertainty at Scale * Social Security Administration Electronic Service Provision: A Strategic Assessment * Toward a Safer and More Secure Cyberspace * Software for Dependable Systems: Sufficient Evidence? * Engaging Privacy and Information Technology in a Digital Age * Improving Disaster Management: The Role of IT in Mitigation, Preparedness, Response, and Recovery 2006: * Renewing U.S. Telecommunications

  9. Customized Architecture for Complex Routing Analysis: Case Study for the Convey Hybrid-Core Computer

    Science.gov (United States)

    2014-02-18

    …circuits that can be reconfigured using a hardware description language such as Verilog. Current state… a Verilog-based design environment is used to implement a custom-designed computer architecture, or…

  10. Theory and computer simulation of hard-core Yukawa mixtures: thermodynamical, structural and phase coexistence properties

    Science.gov (United States)

    Mkanya, Anele; Pellicane, Giuseppe; Pini, Davide; Caccamo, Carlo

    2017-09-01

    We report extensive calculations, based on the modified hypernetted chain (MHNC) theory, on the hierarchical reference theory (HRT), and on Monte Carlo simulations, of thermodynamical, structural and phase coexistence properties of symmetric binary hard-core Yukawa mixtures (HCYM) with attractive interactions at equal species concentration. The results are compared throughout with those available in the literature for the same systems. It turns out that the MHNC predictions for thermodynamic and structural quantities are quite accurate in comparison with the MC data. The HRT is equally accurate for thermodynamics, and slightly less accurate for structure. Liquid-vapor (LV) and liquid-liquid (LL) consolute coexistence conditions, as emerging from simulations, are also highly satisfactorily reproduced by both the MHNC and HRT for relatively long ranged potentials. When the potential range reduces, the MHNC faces problems in determining the LV binodal line; however, the LL consolute line and the critical end point (CEP) temperature and density turn out to be still satisfactorily predicted within this theory. The HRT also predicts the CEP position with good accuracy. The possibility of employing liquid state theories of HCYM for the purpose of reliably determining phase equilibria in multicomponent colloidal fluids of current technological interest is discussed.
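
    The pair interaction underlying these mixtures is the standard hard-core Yukawa form (generic definition; the species-dependent amplitudes and screening parameter used in the paper are not reproduced here):

      \[
        u_{ij}(r) \;=\;
        \begin{cases}
          \infty, & r < \sigma, \\[4pt]
          -\,\varepsilon_{ij}\,\dfrac{\sigma\, e^{-z (r-\sigma)/\sigma}}{r}, & r \ge \sigma,
        \end{cases}
      \]

    where \(\sigma\) is the hard-core diameter, \(\varepsilon_{ij}\) sets the strength of the attraction between species \(i\) and \(j\), and \(z\) is the inverse screening length in units of \(1/\sigma\).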

  11. A new model for the computation of the formation factor of core rocks

    Science.gov (United States)

    Beltrán, A.; Chávez, O.; Zaldivar, J.; Godínez, F. A.; García, A.; Zenit, R.

    2017-04-01

    Among all the rock parameters measured by modern well logging tools, the formation factor is essential because it can be used to calculate the volume of oil and/or gas at the wellsite. A new mathematical model to calculate the formation factor is analytically derived from first principles. Given the electrical properties (resistivities) of both rock and brine and the tortuosity (a key parameter of the model), it is possible to calculate the dependence of the formation factor on porosity with good accuracy. When the cementation exponent ceases to remain constant with porosity, the new model is capable of capturing both the non-linear behavior (for small porosity values) and the typical linear behavior of formation factor vs. porosity in log-log plots. Comparisons with experimental data from four different conventional core rock lithologies (sands, sandstone, limestone and volcanic) show good agreement in all cases. This new model is robust, simple and easy to implement for practical applications. In some cases, it could substitute for Archie's law, replacing its empirical nature.
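
    For context, the textbook definitions the paper builds on (not the new model itself) are the formation factor measured from resistivities and Archie's empirical porosity law:

      \[
        F \;=\; \frac{R_o}{R_w},
        \qquad\text{Archie:}\quad F \;=\; \frac{a}{\phi^{\,m}},
      \]

    where \(R_o\) is the resistivity of the fully brine-saturated rock, \(R_w\) the brine resistivity, \(\phi\) the porosity, \(m\) the cementation exponent, and \(a\) a lithology-dependent constant (often taken as 1).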

  12. Adapting to a New Core Curriculum at Hood College: From Computation to Quantitative Literacy

    Directory of Open Access Journals (Sweden)

    Betty Mayfield

    2015-07-01

    Full Text Available Our institution, a small, private liberal arts college, recently revised its core curriculum. In the Department of Mathematics, we took this opportunity to formally introduce Quantitative Literacy into the language and the reality of the academic requirements for all students. We developed a list of characteristics that we thought all QL courses should exhibit, no matter in which department they are taught. We agreed on a short list of learning outcomes for students who complete those courses. Then we conducted a preliminary assessment of those two attributes: the fidelity of QL-labeled courses to our list of desired characteristics, and our students’ success in meeting the learning objectives. We also performed an attitudes survey in two courses, measuring students’ attitudes towards mathematics before and after completing a QL course. In the process we have had valuable conversations with full- and part-time faculty, and we have been led to re-examine the role of adjunct faculty in our department. In this paper we list our course characteristics and include one instructor’s description of how she ensured that her QL course exhibited many of those traits. We include examples of student work illustrating how they met the learning objectives, and we report on the results of our attitudes survey. Much remains to be done; we describe our preliminary conclusions and plans for the future.

  13. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
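
    Weak-scaling efficiency is typically reported as the smallest-run wall time divided by the wall time at N cores, with the problem size grown in proportion to N; the sketch below uses made-up timings, not MFiX results.

      # Weak-scaling efficiency E(N) = T(reference) / T(N); timings are hypothetical placeholders.
      timings = {1: 100.0, 8: 104.0, 64: 112.0, 512: 131.0, 1024: 160.0}   # seconds

      t_ref = timings[min(timings)]
      for cores in sorted(timings):
          print(f"{cores:5d} cores: efficiency = {t_ref / timings[cores]:.2f}")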

  14. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  15. Multiple-pressure-tapped core holder combined with X-ray computed tomography scanning for gas-water permeability measurements of methane-hydrate-bearing sediments

    Science.gov (United States)

    Konno, Yoshihiro; Jin, Yusuke; Uchiumi, Takashi; Nagao, Jiro

    2013-06-01

    We present a novel setup for measuring the effective gas-water permeability of methane-hydrate-bearing sediments. We developed a core holder with multiple pressure taps for measuring the pressure gradient of the gas and water phases. The gas-water flooding process was simultaneously detected using an X-ray computed tomography scanner. We successfully measured the effective gas-water permeability of an artificial sandy core with methane hydrate during the gas-water flooding test.
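
    The effective permeability of each phase follows from Darcy's law applied to the measured flow rate and the pressure drop between taps (the standard relation, not the authors' specific data reduction):

      \[
        k_{\mathrm{eff},i} \;=\; \frac{q_i\,\mu_i\,L}{A\,\Delta P_i},
        \qquad i \in \{\text{gas},\,\text{water}\},
      \]

    where \(q_i\) is the volumetric flow rate of phase \(i\), \(\mu_i\) its viscosity, \(L\) the distance between the pressure taps, \(A\) the cross-sectional area of the core, and \(\Delta P_i\) the measured pressure drop for that phase.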

  19. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques. In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions.
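
    As a hedged illustration of the kind of model DEA refers to (the abstract does not say which DEA variant regulators apply), the sketch below solves the classic input-oriented CCR envelopment problem with SciPy; the operators' input/output data are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        # Invented data: columns are DMUs (e.g. network operators), rows are inputs/outputs.
        X = np.array([[20.0, 30.0, 40.0, 20.0],       # input 1 (e.g. opex)
                      [150.0, 200.0, 360.0, 160.0]])  # input 2 (e.g. network length)
        Y = np.array([[35.0, 40.0, 60.0, 25.0]])      # output (e.g. energy delivered)
        n_in, n_dmu = X.shape
        n_out = Y.shape[0]

        def ccr_efficiency(o):
            # Decision vector: [theta, lambda_1, ..., lambda_n]; minimize theta subject to
            # X @ lam <= theta * X[:, o] and Y @ lam >= Y[:, o], with lam >= 0.
            c = np.zeros(1 + n_dmu)
            c[0] = 1.0
            A_ub = np.zeros((n_in + n_out, 1 + n_dmu))
            b_ub = np.zeros(n_in + n_out)
            A_ub[:n_in, 0] = -X[:, o]
            A_ub[:n_in, 1:] = X
            A_ub[n_in:, 1:] = -Y
            b_ub[n_in:] = -Y[:, o]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (1 + n_dmu), method="highs")
            return res.x[0]

        for o in range(n_dmu):
            print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o):.3f}")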

  20. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  1. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  2. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, which comprises 6090 macroscopic cross sections and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve, which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while significantly reducing the accompanying predicted standard deviations.
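
    The abstract does not reproduce the calibration equations; schematically, data-adjustment methodologies of this kind update the parameter vector and its covariance in a generalized-least-squares sense, for example (assuming uncorrelated measurement and computation errors):

        \[
        \boldsymbol{\alpha}^{\mathrm{be}} = \boldsymbol{\alpha}^{0}
          + \mathbf{C}_{\alpha r}\left(\mathbf{C}_{rr}^{\mathrm{comp}} + \mathbf{C}_{rr}^{\mathrm{meas}}\right)^{-1}
            \left(\mathbf{r}^{\mathrm{meas}} - \mathbf{r}^{\mathrm{comp}}\right),
        \qquad
        \mathbf{C}_{\alpha\alpha}^{\mathrm{be}} = \mathbf{C}_{\alpha\alpha}
          - \mathbf{C}_{\alpha r}\left(\mathbf{C}_{rr}^{\mathrm{comp}} + \mathbf{C}_{rr}^{\mathrm{meas}}\right)^{-1}\mathbf{C}_{r\alpha},
        \]

    where α denotes the model parameters, r the responses (here the transient power), and the C's the respective covariance matrices; the second relation indicates schematically why the predicted standard deviations shrink.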

  3. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  4. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  5. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  6. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    Science.gov (United States)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
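
    A minimal Python sketch of the same processing idea (not the SedCT tools themselves, which are written in MATLAB): read a stack of CT DICOM slices, convert to Hounsfield units with the standard rescale tags, and average a cropped region per slice to obtain a down core HU profile. The file pattern and cropping indices are hypothetical.

        import glob
        import numpy as np
        import pydicom

        slices = [pydicom.dcmread(f) for f in glob.glob("core_scan/*.dcm")]
        slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order along the core axis

        profile = []
        for s in slices:
            hu = s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept)
            roi = hu[200:312, 200:312]   # hypothetical crop inside the core liner
            profile.append(roi.mean())

        profile = np.array(profile)      # one mean HU value per down-core position
        print(profile[:10])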

  7. Laser anemometer measurements and computations in an annular cascade of high turning core turbine vanes

    Science.gov (United States)

    Goldman, Louis J.; Seasholtz, Richard G.

    1992-01-01

    An advanced laser anemometer (LA) was used to measure the axial and tangential velocity components in an annular cascade of turbine stator vanes designed for a high bypass ratio engine. These vanes were based on a redesign of the first-stage stator of a two-stage turbine that produced 75 degrees of flow turning. Tests were conducted on a 0.771 scale model of the engine size stator. The advanced LA fringe system was designed to employ thinner than usual laser beams, resulting in a 50-micron-diameter probe volume. Window correction optics were used to ensure that the laser beams did not uncross in passing through the curved optical access port. Experimental LA measurements of velocity and turbulence were obtained upstream, within, and downstream of the stator vane row at the design exit critical velocity ratio of 0.896 at the hub. Static pressures were also measured on the vane surface. The measurements are compared, where possible, with calculations from a 3-D inviscid flow analysis. The data are presented in both graphic and tabulated form so that they may be readily used to compare against other turbomachinery computations.

  8. Time Is Not Space: Core Computations and Domain-Specific Networks for Mental Travels.

    Science.gov (United States)

    Gauthier, Baptiste; van Wassenhove, Virginie

    2016-11-23

    Humans can consciously project themselves in the future and imagine themselves at different places. Do mental time travel and mental space navigation abilities share common cognitive and neural mechanisms? To test this, we recorded fMRI while participants mentally projected themselves in time or in space (e.g., 9 years ago, in Paris) and ordered historical events from their mental perspective. Behavioral patterns were comparable for mental time and space and shaped by self-projection and by the distance of historical events to the mental position of the self, suggesting the existence of egocentric mapping in both dimensions. Nonetheless, self-projection in space engaged the medial and lateral parietal cortices, whereas self-projection in time engaged a widespread parietofrontal network. Moreover, while a large distributed network was found for spatial distances, temporal distances specifically engaged the right inferior parietal cortex and the anterior insula. Across these networks, a robust overlap was only found in a small region of the inferior parietal lobe, adding evidence for its role in domain-general egocentric mapping. Our findings suggest that mental travel in time or space capitalizes on egocentric remapping and on distance computation, which are implemented in distinct dimension-specific cortical networks converging in inferior parietal lobe.

  9. Computational simulation of natural convection of a molten core in lower head of a PWR pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Camila Braga; Romero, Gabriel Alves; Jian Su, E-mail: camila@lasme.coppe.ufrj.b, E-mail: gabrielromero@lasme.coppe.ufrj.b, E-mail: sujian@lasme.coppe.ufrj.b [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Nuclear Engineering Program

    2010-07-01

    Computational simulation of natural convection in a molten core during a hypothetical severe accident in the lower head of a typical PWR pressure vessel was performed for a two-dimensional semi-circular geometry with isothermal walls. Transient turbulent natural convection heat transfer of a fluid with a uniformly distributed volumetric heat generation rate was simulated using the commercial computational fluid dynamics software ANSYS CFX 12.0. The Boussinesq model was used for the buoyancy effect generated by the internal heat source in the flow field. The two-equation k-ω based SST (Shear Stress Transport) turbulence model was used to model the turbulent stresses in the Reynolds-Averaged Navier-Stokes (RANS) equations. Two Prandtl numbers, 6.13 and 7.0, were considered. Five Rayleigh numbers were simulated for each Prandtl number (10^9, 10^10, 10^11, 10^12, and 10^13). The average Nusselt numbers on the bottom surface of the semicircular cavity were in excellent agreement with the Mayinger et al. (1976) correlation, and only at Ra = 10^9 was the average Nusselt number on the top flat surface in agreement with the Mayinger et al. (1976) and Kulacki and Emara (1975) correlations. (author)
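
    The abstract does not state the exact definitions used; for orientation, internally heated natural-convection studies of this kind commonly define the governing dimensionless groups as

        \[
        Ra_i = \frac{g\,\beta\,q'''\,H^5}{\nu\,\alpha\,k},
        \qquad
        Pr = \frac{\nu}{\alpha},
        \qquad
        \overline{Nu} = \frac{\bar{h}\,H}{k},
        \]

    where q''' is the volumetric heat generation rate, H a characteristic cavity depth, g the gravitational acceleration, β the thermal expansion coefficient, ν the kinematic viscosity, α the thermal diffusivity, k the thermal conductivity, and h̄ the surface-averaged heat transfer coefficient used to form the Nusselt numbers compared against the correlations.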

  10. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  11. A New Benchmark For Evaluation Of Graph-Theoretic Algorithms

    CERN Document Server

    Yoo, Andy B; Vaidya, Sheila; Poole, Stephen

    2010-01-01

    We propose a new graph-theoretic benchmark in this paper. The benchmark is developed to address shortcomings of an existing widely-used graph benchmark. We thoroughly studied a large number of traditional and contemporary graph algorithms reported in the literature to gain a clear understanding of their algorithmic and run-time characteristics. Based on this study, we designed a suite of kernels, each of which represents a specific class of graph algorithms. The kernels are designed to capture the typical run-time behavior of the target algorithms accurately, while limiting computational and spatial overhead to ensure that their computation finishes in a reasonable time. We expect that the developed benchmark will serve as a much needed tool for evaluating different architectures and programming models for running graph algorithms.

  12. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  13. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  14. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  15. Performance and Scalability of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  16. IAEA GT-MHR Benchmark Calculations Using the HELIOS/MASTER Two-Step Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Hoon; Kim, Kang Seog; Cho, Jin Young; Song, Jae Seung; Noh, Jae Man; Lee, Chung Chan; Zee, Sung Quun

    2007-05-15

    A new two-step procedure based on the HELIOS/MASTER code system has been developed for prismatic VHTR physics analysis. This procedure employs the HELIOS code for the transport lattice calculation to generate few-group constants, and the MASTER code for the 3-dimensional core calculation to perform the reactor physics analysis. The double heterogeneity effect due to the random distribution of the particulate fuel could be dealt with using the recently developed reactivity-equivalent physical transformation (RPT) method. The strong spectral effects of the graphite-moderated reactor core could be handled both by optimizing the number of energy groups and group boundaries, and by employing a partial core model instead of a single-block one to generate few-group cross sections. Burnable poisons in the inner reflector and the asymmetrically located large control rod can be treated by adopting the equivalence theory applied to multi-block models to generate surface-dependent discontinuity factors. Effective reflector cross sections were generated by using a simple mini-core model and equivalence theory. In this study the IAEA GT-MHR benchmark problems with a plutonium fuel were analyzed by using the HELIOS/MASTER code package and the Monte Carlo code MCNP. The benchmark problems include pin, block and core models. The computational results of the HELIOS/MASTER code system were compared with those of MCNP and other participants. The results show that the two-step procedure using HELIOS/MASTER can be applied to reactor physics analysis of the prismatic VHTR with good accuracy.

  17. In-core fuel management for pebble-bed reactors

    Energy Technology Data Exchange (ETDEWEB)

    Milian Perez, Daniel; Rodriguez Garcia, Lorena; Garcia Hernandez, Carlos; Milian Lorenzo, Daniel, E-mail: dperez@instec.cu, E-mail: cgh@instec.cu, E-mail: dmilian@instec.cu [Higher Institute of Technologies and Applied Sciences, Havana (Cuba); Velasco, Abanades, E-mail: abanades@etsii.upm.es [Department of Simulation of Thermo Energy Systems, Polytechnic University of Madrid (Spain)

    2013-07-01

    In this paper a calculation procedure to reduce the power peak in the core of a Very High Temperature pebble-bed Reactor is presented. This procedure combines the fuel depletion and the neutronic behavior of the fuel in the reactor core, modeling once-through-then-out cycles as well as cycles in which pebbles are recirculated through the core an arbitrary number of times, obtaining the asymptotic fuel-loading pattern. The procedure consists of several coupled computational codes, which are used iteratively until convergence is reached. The utilization of MCNPX 2.6e, as one of these computational codes, is validated through the calculation of benchmarks announced by the IAEA (IAEA-TECDOC-1249, 2001). To complete the verification of the calculation procedure, a base case described in Annals of Nuclear Energy 29 (2002) 1345-1364 was performed. The procedure has been applied to a model of the Pebble Bed Modular Reactor (200 MW) design. (author)

  18. Computer modeling reveals that modifications of the histone tail charges define salt-dependent interaction of the nucleosome core particles.

    Science.gov (United States)

    Yang, Ye; Lyubartsev, Alexander P; Korolev, Nikolay; Nordenskiöld, Lars

    2009-03-18

    Coarse-grained Langevin molecular dynamics computer simulations were conducted for systems that mimic solutions of nucleosome core particles (NCPs). The NCP was modeled as a negatively charged spherical particle representing the complex of DNA and the globular part of the histones combined with attached strings of connected charged beads modeling the histone tails. The size, charge, and distribution of the tails relative to the core were built to match real NCPs. Three models of NCPs were constructed to represent different extents of covalent modification on the histone tails: (nonmodified) recombinant (rNCP), acetylated (aNCP), and acetylated and phosphorylated (paNCP). The simulation cell contained 10 NCPs in a dielectric continuum with explicit mobile counterions and added salt. The NCP-NCP interaction is decisively dependent on the modification state of the histone tails and on salt conditions. Increasing the monovalent salt concentration (KCl) from salt-free to physiological concentration leads to NCP aggregation in solution for rNCP, whereas NCP associates are observed only occasionally in the system of aNCPs. In the presence of divalent salt (Mg(2+)), rNCPs form dense stable aggregates, whereas aNCPs form aggregates less frequently. Aggregates are formed via histone-tail bridging and accumulation of counterions in the regions of NCP-NCP contacts. The paNCPs do not show NCP-NCP interaction upon addition of KCl or in the presence of Mg(2+). Simulations for systems with a gradual substitution of K(+) for Mg(2+), to mimic the Mg(2+) titration of an NCP solution, were performed. The rNCP system showed stronger aggregation that occurred at lower concentrations of added Mg(2+), compared to the aNCP system. Additional molecular dynamics simulations performed with a single NCP in the simulation cell showed that detachment of the tails from the NCP core was modest under a wide range of salt concentrations. This implies that salt-induced tail dissociation of the
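
    As a hedged sketch of the kind of integrator behind such coarse-grained simulations (not the authors' code; the force law and parameters are placeholders), a single-bead Langevin update in reduced units can be written as:

        import numpy as np

        kT, gamma, m, dt = 1.0, 0.5, 1.0, 0.01      # reduced units (hypothetical)

        def force(x):
            return -x                                # placeholder harmonic force instead of NCP interactions

        rng = np.random.default_rng(0)
        x, v = np.zeros(3), np.zeros(3)
        for step in range(1000):
            noise = rng.normal(size=3)
            # Euler-Maruyama step of m dv = (F - m*gamma*v) dt + sqrt(2*m*gamma*kT) dW
            v += (force(x) / m - gamma * v) * dt + np.sqrt(2.0 * gamma * kT * dt / m) * noise
            x += v * dt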

  19. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimate of the performance of worker nodes even in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.

  20. NPS-NRL-Rice-UIUC Collaboration on Navy Atmosphere-Ocean Coupled Models on Many-Core Computer Architectures Annual Report

    Science.gov (United States)

    2015-09-30

    ...the U.S. community that can synergistically move the knowledge of accelerator-based computing to many of the climate, weather, and ocean modeling... this project are targeted first to help our many-core acceleration of NUMA effort but should be generally applicable to many scientific computing codes... steady progress has been made on the Euler mini-app, gNUMA. This includes the debugging of discontinuous diffusion kernels. Thus, we now have all...

  1. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Construction for multi-mode computing system based on many-core processor

    Institute of Scientific and Technical Information of China (English)

    王可锋; 吴晓; 罗眉

    2013-01-01

    Some specific computing tasks in complex application domains not only require that the computing platform has efficient computing capability, but also have corresponding computation modes that match the characteristics of the computing tasks. Based on the relation between Hyper-Q and CUDA streams in the NVIDIA Kepler GK110 architecture, three computing modes are presented in this paper: single-task parallel computation, multi-task parallel computation and multi-task stream-oriented computation. A vacancy-marking method is adopted to construct and switch between computing modes. In combination with a data buffering mechanism and a computational-task loading scheme, a multi-mode computing system based on a many-core processor was designed, and the multi-mode computation function of the many-core processor was implemented.

  3. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  4. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  5. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  6. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre

  7. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the coupled neutronics and thermal-hydraulics (T-H) simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative is to perform code-to-code comparisons for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  8. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
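
    For readers who want the flavour of the STREAM measurement, the sketch below times a NumPy version of the triad kernel (a = b + s*c) and converts the result to an effective bandwidth; it is not the official STREAM benchmark and the array size is arbitrary.

        import time
        import numpy as np

        n = 50_000_000
        b = np.random.rand(n)
        c = np.random.rand(n)
        s = 3.0

        start = time.perf_counter()
        a = b + s * c                     # triad kernel
        elapsed = time.perf_counter() - start

        bytes_moved = 3 * n * 8           # read b, read c, write a (8-byte doubles)
        print(f"triad bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")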

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  10. Visualizing 3D/4D Environmental Big Data Using Many-core Compute Unified Device Architecture (CUDA) and Multi-core Central Processing Unit (CPUs)

    Science.gov (United States)

    Li, J.; Jiang, Y.; Yang, C.; Huang, Q.

    2012-12-01

    Visualizing 3D/4D environmental Big Data is critical to understand and predict environmental phenomena for relevant decision making. This research explores how to best utilize Graphics Processing Units (GPUs) and Central Processing Units (CPUs) collaboratively to speed up the visualization process. Taking the visualization of dust storms as an example, we developed a systematic visualization framework. To compare the potential speedup of using GPUs versus that of using CPUs, we implemented visualization components based on both multi-core CPUs and many-core GPUs. We found that 1) multi-core CPUs and many-core GPUs can improve the efficiency of mathematical calculations and graphics rendering using multithreading techniques; 2) when increasing the GPU block size for reprojecting, interpolating and rendering the same data, the execution time drops consistently before reaching a peak; 3) GPU-based implementations are faster than CPU-based implementations. However, the best rendering performance with GPUs is very close to that with CPUs. Therefore, visualization of 3D/4D environmental data using GPUs is a better solution than using CPUs.

  11. Benchmarks to supplant export FPDR (Floating Point Data Rate) calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D.; Brooks, E.; Dongarra, J.; Hayes, A.; Lyon, G.

    1988-06-01

    Because modern computer architectures render application of the FPDR (Floating Point Data Processing Rate) increasingly difficult, there has been increased interest in export evaluation via actual system performances. The report discusses benchmarking of uniprocessor (usually vector) machines for scientific computation (SIMD array processors are not included), and parallel processing and its characterization for export control.

  12. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and time processing. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of the size simulation. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
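
    For context (the abstract does not reproduce the update equations), in the common one-dimensional (E_z, H_y) formulation the leapfrog FDTD updates that such CPU and GPU implementations vectorize are

        \[
        E_z^{\,n+1}(i) = E_z^{\,n}(i)
          + \frac{\Delta t}{\varepsilon\,\Delta x}\left[H_y^{\,n+1/2}\!\left(i+\tfrac12\right) - H_y^{\,n+1/2}\!\left(i-\tfrac12\right)\right],
        \qquad
        H_y^{\,n+1/2}\!\left(i+\tfrac12\right) = H_y^{\,n-1/2}\!\left(i+\tfrac12\right)
          + \frac{\Delta t}{\mu\,\Delta x}\left[E_z^{\,n}(i+1) - E_z^{\,n}(i)\right],
        \]

    a stencil whose regular, data-parallel structure is what makes both the SSE+OpenMP and CUDA implementations compared above natural candidates.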

  13. Common Core: Victory Is Yours!

    Science.gov (United States)

    Fink, Jennifer L. W.

    2012-01-01

    In this article, the author discusses how to implement the Common Core State Standards in the classroom. She presents examples and activities that will leave teachers feeling "rosy" about tackling the new standards. She breaks down important benchmarks and shows how other teachers are doing the Core--and loving it!

  14. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent ...

  15. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    ... Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ ... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  16. A chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core collapse calculations

    CERN Document Server

    Marchand, Pierre; Chabrier, Gilles; Hennebelle, Patrick; Commerçon, Benoit; Vaytet, Neil

    2016-01-01

    We develop a detailed chemical network relevant to the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with a size distribution for the latter given by size bins. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall) needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of Potassium, Sodium and Hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and the Hall effect dominate at low densities, up to n_H = 10^12 cm^-3, after which Oh...

  17. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
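
    For concreteness, the additive first-difference (Denton-type) adjustment underlying the recommended staged approach can be written in its usual form as

        \[
        \min_{\{x_t\}} \sum_{t=2}^{T}\Big[(x_t - z_t) - (x_{t-1} - z_{t-1})\Big]^2
        \quad\text{subject to}\quad
        \sum_{t \in \text{year } y} x_t = B_y \;\;\text{for every benchmark year } y,
        \]

    where z_t is the original higher-frequency series, x_t the benchmarked series, and B_y the annual benchmark totals; the objective preserves the period-to-period movement of the original series while the constraints force agreement with the benchmarks.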

  18. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  19. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Nederland asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen, and driving on petrol or diesel were also included. Research and growing insight increasingly show that transport fuels based on biomass sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. CE Delft has summarized for Greenpeace Nederland the current insights into the sustainability of fossil fuels, biofuels and electric driving. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient use.

  20. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  1. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  2. Comparison between triangular and hexagonal modeling of a hexagonal-structured reactor core using box method

    Energy Technology Data Exchange (ETDEWEB)

    Malmir, Hessam, E-mail: malmir@energy.sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Street, Tehran (Iran, Islamic Republic of); Moghaddam, Nader Maleki [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology (Tehran Polytechnique), Hafez Street, Tehran (Iran, Islamic Republic of); Zahedinejad, Ehsan [Department of Energy Engineering, Sharif University of Technology, Azadi Street, Tehran (Iran, Islamic Republic of)

    2011-02-15

    A hexagonal-structured reactor core (e.g. VVER-type) is mostly modeled with structured triangular and hexagonal mesh zones. Although both the triangular and hexagonal models give good approximations in the neutronic calculation of the core, there are some differences between them that need to be clarified. For this purpose, the neutronic calculations of a hexagonal-structured reactor core have to be performed using structured triangular and hexagonal meshes based on the box method of discretisation, and the results of the two models should then be benchmarked for different cases. In this paper, the box method of discretisation is derived for triangular and hexagonal meshes. Then, two 2-D 2-group static simulators for triangular and hexagonal geometries (called TRIDIF-2 and HEXDIF-2, respectively) are developed using the box method. The results are benchmarked against the well-known CITATION computer code for a VVER-1000 reactor core. Furthermore, the relative powers calculated by TRIDIF-2 and HEXDIF-2, along with those obtained by the CITATION code, are compared with the verified results presented in the Final Safety Analysis Report (FSAR) of the aforementioned reactor. Different benchmark cases revealed the reliability of the box method in comparison with the CITATION code. Furthermore, it is shown that the triangular modeling of the core is more acceptable than the hexagonal one.
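
    The abstract does not reproduce the discretised equations; in a box (finite-volume) scheme for few-group diffusion, the balance over mesh cell i for group g typically takes a form such as

        \[
        \sum_{f \in \partial V_i} \bar{D}_{g,f}\,\frac{\phi_{g,i} - \phi_{g,j(f)}}{d_{ij}}\,A_f
        + \Sigma_{r,g,i}\,\phi_{g,i}\,V_i
        = \frac{\chi_g}{k_{\mathrm{eff}}}\sum_{g'}\nu\Sigma_{f,g',i}\,\phi_{g',i}\,V_i
        + \sum_{g'\neq g}\Sigma_{s,g'\to g,i}\,\phi_{g',i}\,V_i ,
        \]

    where j(f) is the neighbour across face f; the face areas A_f, centre-to-centre distances d_ij and face-averaged diffusion coefficients are exactly the quantities that differ between triangular and hexagonal cells, which is where the two models diverge.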

  3. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  4. Transthoracic Computed Tomography-Guided Lung Nodule Biopsy: Comparison of Core Needle and Fine Needle Aspiration Techniques.

    Science.gov (United States)

    Sangha, Bippan S; Hague, Cameron J; Jessup, Jennifer; O'Connor, Robert; Mayo, John R

    2016-08-01

    To determine if there is a statistically significant difference in the computed tomography (CT)-guided transthoracic needle biopsy diagnostic rate, complication rate, and degree of pathologist confidence in diagnosis between core needle biopsy (CNB) and fine needle aspiration biopsy (FNAB). A retrospective cohort design was used to compare the diagnostic biopsy rate, diagnostic confidence, and biopsy-related complications of pneumothorax, chest tube placement, pulmonary hemorrhage, hemoptysis, admission to hospital, and length of stay between 251 transthoracic needle biopsies obtained via CNB (126) or FNAB (125). Complication rates were assessed using imaging and clinical follow-up. Final diagnosis was confirmed via surgical pathology or clinical follow-up over a period of up to 10 years. CNB provided diagnostic samples in 91% and FNAB in 80% of biopsies, which was statistically significant (P < .05). The sensitivities for CNB and FNAB were 89% (85 of 95) and 95% (84 of 88), respectively. The specificity of CNB was 100% (21 of 21) and for FNAB was 81% (2 of 11) with 2 false positives in the FNAB group. The differences in complication rates were not statistically significant for pneumothorax (50% vs 46%; determined by routine postbiopsy CT), chest tube (2% vs 4%), hemoptysis (4% vs 6%), and pulmonary hemorrhage (38% vs 47%) between FNAB and CNB, respectively. Seven patients requiring a chest tube were admitted to hospital, 2 in the FNAB cohort for an average of 2.5 days and 5 in the CNB cohort for an average of 4.6 days. CNB provided more diagnostic samples with no statistically significant difference in complication rate. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  5. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
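
    The pairwise gather/scatter pattern that makes shared-memory SPH sensitive to data layout can be sketched in a few lines. The kernel shape, particle data and neighbour search below are placeholders, not the authors' implementation; the scatter-add step at the end is precisely the operation that needs atomics or particle colouring on GPUs and MIC processors.

    ```python
    """Sketch of an SPH density summation in a structure-of-arrays layout.

    Illustrative only: random particles and a Gaussian-like placeholder kernel
    (not exactly normalised) stand in for a real SPH state; the point is the
    pairwise gather/scatter pattern that shared-memory implementations must
    parallelise efficiently.
    """
    import numpy as np
    from scipy.spatial import cKDTree

    n, h, mass = 10_000, 0.02, 1.0          # particle count, smoothing length, mass
    pos = np.random.rand(n, 3)              # structure-of-arrays: one big coordinate array

    def w_kernel(r, h):
        """Gaussian-like kernel with compact support 2h (placeholder form)."""
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        return np.where(q < 2.0, sigma * np.exp(-q**2), 0.0)

    # neighbour pairs within the kernel support
    pairs = cKDTree(pos).query_pairs(2.0 * h, output_type="ndarray")
    i, j = pairs[:, 0], pairs[:, 1]
    r_ij = np.linalg.norm(pos[i] - pos[j], axis=1)
    w_ij = w_kernel(r_ij, h)

    # density: self contribution plus symmetric scatter of the pair contributions
    rho = np.full(n, mass * w_kernel(np.zeros(1), h)[0])
    np.add.at(rho, i, mass * w_ij)          # scatter-add: the step that needs
    np.add.at(rho, j, mass * w_ij)          # atomics or colouring on GPUs/MICs
    print("mean SPH density:", rho.mean())
    ```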

  6. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  7. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  8. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...... already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013....... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  9. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  10. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...... towards the conditions for the use of the external benchmarks we provide more insights to some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations....

  11. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... towards the conditions for the use of the external benchmarks we provide more insights to some of the issues and challenges that are related to using this mechanism for performance management and advance competitiveness in organizations....

  12. Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core

    Science.gov (United States)

    Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier

    2014-06-01

    As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could prohibitively slow down neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard
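
    The local-tally convergence issue described above is easy to picture with a toy calculation that has nothing to do with MORET or MCNP themselves: a rarely visited peripheral region accumulates far fewer scoring events than a central one, so at a given number of histories its relative error is much larger. The hit probabilities and scores below are invented for illustration.

    ```python
    """Toy illustration of per-region tally statistics in a full-core Monte Carlo run.

    Not MORET or MCNP output: fake per-history scores for two 'fuel regions' show why
    peripheral, rarely-visited regions converge much more slowly than central ones.
    """
    import numpy as np

    rng = np.random.default_rng(0)
    n_hist = 100_000
    hit_prob = {"central": 0.05, "peripheral": 0.001}   # chance a history scores there (assumed)

    for name, p in hit_prob.items():
        hits = rng.random(n_hist) < p
        scores = np.where(hits, rng.exponential(1.0, n_hist), 0.0)  # fake energy deposition
        mean = scores.mean()
        std_of_mean = scores.std(ddof=1) / np.sqrt(n_hist)
        print(f"{name:10s}  mean = {mean:.3e}  relative error = {std_of_mean / mean:.3f}")
    ```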

  13. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD, based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute benchmark cross sections at hadron colliders to NNLO accuracy.

  14. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiments descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  15. Characterization of liquid-core/liquid-cladding optical waveguides of a sodium chloride solution/water system by computational fluid dynamics.

    Science.gov (United States)

    Kamiyama, Junya; Asanuma, Soto; Murata, Hiroyasu; Sugii, Yasuhiko; Hotta, Hiroki; Sato, Kiichi; Tsunoda, Kin-ichi

    2013-12-01

    A stable liquid/liquid optical waveguide (LLW) was formed using a sheath flow, where a 15% sodium chloride (NaCl) solution functioned as the core solution and water functioned as the cladding solution (15% NaCl/water LLW). The LLW was at least 200 mm in length. The concentration distributions of the liquid core and liquid cladding solutions in the LLW system were predicted by computational fluid dynamics (CFD) to validate the characteristics of the waveguide. The broadening of the region of the fluorescence of Rhodamine B excited by the guided light and the increase in the critical angle of the guided light with the increase in the contact time of the core and the cladding solutions were well explained by CFD calculations. However, no substantial leakage of the guided light was observed despite the considerably large change in the refractive index profile of the LLW; thus, a narrower and longer waveguide was realized.
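
    The reported growth of the critical angle with contact time is what Snell's law predicts as interdiffusion raises the cladding index toward the core index. The refractive-index values in the short sketch below are typical handbook-style numbers assumed for illustration, not measurements from this study.

    ```python
    """Critical angle of a liquid-core/liquid-cladding waveguide from Snell's law.

    Indicative indices (assumed, not taken from the paper): n ~ 1.333 for water and
    ~ 1.36 for a 15% NaCl solution.  As NaCl diffuses into the cladding, its index
    rises toward the core value and the critical angle grows toward 90 degrees,
    so fewer ray directions remain guided.
    """
    import numpy as np

    n_core = 1.36                        # ~15% NaCl solution (assumed)
    for n_clad in (1.333, 1.340, 1.350, 1.355):
        theta_c = np.degrees(np.arcsin(n_clad / n_core))   # measured from the interface normal
        print(f"n_clad = {n_clad:.3f}  ->  critical angle ~ {theta_c:.1f} deg")
    ```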

  16. Research on a Computer Course System with "Computational Thinking" as the Core

    Institute of Scientific and Technical Information of China (English)

    韦良芬; 张健

    2014-01-01

    How to develop university students' ability in "computational thinking" is a hotspot of current research. Based on the correspondence between the core concepts of computational thinking and related computer courses, this paper constructs a computer course system with computational thinking as its core. The course system reinforces computational thinking through the gradual penetration and guidance of multiple courses, and it plays a positive role in developing the computational thinking of university students.

  17. Theoretical and computational comparison of models for dislocation dissociation and stacking fault/core formation in fcc crystals

    Science.gov (United States)

    Mianroodi, J. R.; Hunter, A.; Beyerlein, I. J.; Svendsen, B.

    2016-10-01

    The purpose of the current work is the theoretical and computational comparison of selected models for the energetics of dislocation dissociation resulting in stacking fault and partial dislocation (core) formation in fcc crystals as based on the (generalized) Peierls-Nabarro (GPN: e.g., Xiang et al., 2008; Shen et al., 2014), and phase-field (PF: e.g., Shen and Wang, 2004; Hunter et al., 2011, 2013; Mianroodi and Svendsen, 2015), methodologies (e.g., Wang and Li, 2010). More specifically, in the current work, the GPN-based model of Xiang et al. (2008) is compared theoretically with the PF-based models of Shen and Wang (2004), Hunter et al. (2011, 2013), and Mianroodi and Svendsen (2015). This is carried out here with the help of a unified formulation for these models via a generalization of the approach of Cahn and Hilliard (1958) to mechanics. Differences among these include the model forms for the free energy density ψela of the lattice and the free energy density ψsli associated with dislocation slip. In the PF-based models, for example, ψela is formulated with respect to the residual distortion HR due to dislocation slip (e.g., Khachaturyan, 1983; Mura, 1987), and with respect to the dislocation tensor curl HR in the GPN model (e.g., Xiang et al., 2008). As shown here, both model forms for ψela are in fact mathematically equal and so physically equivalent. On the other hand, model forms for ψsli differ in the assumed dependence on the phase or disregistry fields ϕ, whose spatial variation represents the transition from unslipped to slipped regions in the crystal. In particular, Xiang et al. (2008) and Hunter et al. (2011, 2013) work with ψsli(ϕ). On the other hand, Shen and Wang (2004) and Mianroodi and Svendsen (2015) employ ψsli(ϕ , ∇ ϕ). To investigate the consequences of these differences for the modeling of the dislocation core, dissociation, and stacking fault formation, predictions from the models of Hunter et al. (2011, 2013) and Mianroodi
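
    As a schematic summary of the unified formulation described above (a paraphrase of the notation in this abstract, not an equation quoted from the paper), the compared models can be viewed as minimising a total energy of the form

    ```latex
    \begin{equation}
      E[\boldsymbol{\phi}] \;=\; \int_{V}
        \Big[\, \psi_{\mathrm{ela}}\big(\boldsymbol{H}^{R}(\boldsymbol{\phi})\big)
              + \psi_{\mathrm{sli}} \,\Big]\, \mathrm{d}V,
      \qquad
      \psi_{\mathrm{sli}} =
      \begin{cases}
        \psi_{\mathrm{sli}}(\boldsymbol{\phi})
          & \text{Xiang et al. (2008); Hunter et al. (2011, 2013)}\\
        \psi_{\mathrm{sli}}(\boldsymbol{\phi},\nabla\boldsymbol{\phi})
          & \text{Shen and Wang (2004); Mianroodi and Svendsen (2015)}
      \end{cases}
    \end{equation}
    ```

    with the elastic part written in terms of the residual distortion in the PF-based models and in terms of the dislocation tensor (the curl of the residual distortion) in the GPN-based model, the two forms being shown equivalent in the paper.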

  18. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated problem does, however, provide a calculational check case where the small strain-formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems.

  19. Benchmark Comparison for a Multi-Processing Ion Mobility Calculator in the Free Molecular Regime

    Science.gov (United States)

    Shrivastav, Vaibhav; Nahin, Minal; Hogan, Christopher J.; Larriba-Andaluz, Carlos

    2017-08-01

    A benchmark comparison between two ion mobility and collision cross-section (CCS) calculators, MOBCAL and IMoS, is presented here as a standard to test the efficiency and performance of both programs. For a set of 47 organic ions, IMoS and MOBCAL results are in excellent agreement in He and N2 when both programs use identical input parameters. Due to a more efficiently written algorithm and to its parallelization, IMoS is able to calculate the same CCS (within 1%) around two orders of magnitude faster than its MOBCAL counterpart when seven cores are used. Due to the high computational cost of MOBCAL in N2, reaching tens of thousands of seconds even for small ions, the comparison between IMoS and MOBCAL is stopped at 70 atoms. Large biomolecules (>10000 atoms) remain computationally expensive when IMoS is used in N2 (even when employing 16 cores). Approximations such as diffuse trajectory methods (DHSS, TDHSS) with and without partial charges and projected area approximation corrections can be used to reduce the total computational time severalfold without degrading the accuracy of the solution. These latter methods can in principle be used with coarse-grained model structures and should yield acceptable CCS results.
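
    Of the approximations mentioned, the projected-area idea is the simplest to sketch: the cross-section is estimated as the orientationally averaged shadow area of the ion treated as a set of hard spheres. The coordinates and the single collision radius below are made-up placeholders, not a real structure or the calibrated radii used by IMoS or MOBCAL.

    ```python
    """Projection-approximation sketch of a collision cross-section (CCS).

    The CCS is estimated as the orientationally averaged projected area of the ion,
    each atom treated as a hard sphere.  Coordinates and radius are placeholders.
    """
    import numpy as np

    rng = np.random.default_rng(0)
    atoms = rng.normal(scale=2.0, size=(30, 3))    # fake atom positions [Angstrom]
    radius = 1.7                                   # single hard-sphere radius (assumed)

    def projected_area(xyz, r, n_mc=20_000):
        """Monte Carlo area of the union of discs from projecting onto the xy-plane."""
        disc = xyz[:, :2]
        lo, hi = disc.min(0) - r, disc.max(0) + r
        pts = rng.uniform(lo, hi, size=(n_mc, 2))
        d2 = ((pts[:, None, :] - disc[None, :, :]) ** 2).sum(-1)
        return (d2 <= r * r).any(axis=1).mean() * np.prod(hi - lo)

    def random_rotation():
        q = rng.normal(size=4); q /= np.linalg.norm(q)          # uniform random rotation
        w, x, y, z = q
        return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                         [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                         [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

    areas = [projected_area(atoms @ random_rotation().T, radius) for _ in range(50)]
    print(f"PA-estimated CCS ~ {np.mean(areas):.1f} Angstrom^2")
    ```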

  20. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  1. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  2. First TPC-Energy Benchmark: Lessons Learned in Practice

    Science.gov (United States)

    Young, Erik; Cao, Paul; Nikolaiev, Mike

    The TPC-Energy specification augments the existing TPC benchmarks with energy metrics. It is designed to help hardware buyers identify energy-efficient equipment that meets both their computational and budgetary requirements. In this paper we discuss our experience in publishing the industry's first-ever TPC-Energy metric publication.

  3. Programming the Linpack Benchmark for the IBM PowerXCell 8i Processor

    Directory of Open Access Journals (Sweden)

    Michael Kistler

    2009-01-01

    Full Text Available In this paper we present the design and implementation of the Linpack benchmark for the IBM BladeCenter QS22, which incorporates two IBM PowerXCell 8i processors. The PowerXCell 8i is a new implementation of the Cell Broadband Engine™ architecture and contains a set of special-purpose processing cores known as Synergistic Processing Elements (SPEs). The SPEs can be used as computational accelerators to augment the main PowerPC processor. The added computational capability of the SPEs results in a peak double precision floating point capability of 108.8 GFLOPS. We explain how we modified the standard open source implementation of Linpack to accelerate key computational kernels using the SPEs of the PowerXCell 8i processors. We describe in detail the implementation and performance of the computational kernels and also explain how we employed the SPEs for high-speed data movement and reformatting. The result of these modifications is a Linpack benchmark optimized for the IBM PowerXCell 8i processor that achieves 170.7 GFLOPS on a BladeCenter QS22 with 32 GB of DDR2 SDRAM memory. Our implementation of Linpack also supports clusters of QS22s, and was used to achieve a result of 11.1 TFLOPS on a cluster of 84 QS22 blades. We compare our results on a single BladeCenter QS22 with the base Linpack implementation without SPE acceleration to illustrate the benefits of our optimizations.
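
    The computational heart of Linpack that such ports accelerate is the trailing-matrix (DGEMM) update inside a blocked LU factorisation. The sketch below shows that structure in plain NumPy, without pivoting and without any Cell-specific code, purely to indicate which step is worth offloading; it is not the paper's implementation.

    ```python
    """Sketch of the blocked, right-looking LU factorisation at the heart of Linpack.

    Illustrative only (no pivoting, plain NumPy): the trailing-matrix update
    A22 -= L21 @ U12 is the DGEMM-dominated kernel that accelerator ports offload.
    """
    import numpy as np

    def blocked_lu(a, nb=64):
        a = a.copy()
        n = a.shape[0]
        for k in range(0, n, nb):
            e = min(k + nb, n)
            # 1) unblocked factorisation of the tall panel A[k:, k:e]
            for j in range(k, e):
                a[j+1:, j] /= a[j, j]
                a[j+1:, j+1:e] -= np.outer(a[j+1:, j], a[j, j+1:e])
            if e < n:
                # 2) row panel: U12 = L11^{-1} A12 (unit-lower triangular solve)
                l11 = np.tril(a[k:e, k:e], -1) + np.eye(e - k)
                a[k:e, e:] = np.linalg.solve(l11, a[k:e, e:])
                # 3) trailing-matrix update: the DGEMM kernel worth accelerating
                a[e:, e:] -= a[e:, k:e] @ a[k:e, e:]
        return a  # packed unit-lower L and upper U

    n = 256
    m = np.random.rand(n, n) + n * np.eye(n)      # diagonally dominant: safe without pivoting
    lu = blocked_lu(m)
    l, u = np.tril(lu, -1) + np.eye(n), np.triu(lu)
    print("reconstruction error:", np.max(np.abs(l @ u - m)))
    ```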

  4. MalStone: Towards A Benchmark for Analytics on Large Data Clouds

    CERN Document Server

    Bennett, Collin; Locke, David; Seidman, Jonathan; Vejcik, Steve

    2010-01-01

    Developing data mining algorithms that are suitable for cloud computing platforms is currently an active area of research, as is developing cloud computing platforms appropriate for data mining. Currently, the most common benchmark for cloud computing is the Terasort (and related) benchmarks. Although the Terasort Benchmark is quite useful, it was not designed for data mining per se. In this paper, we introduce a benchmark called MalStone that is specifically designed to measure the performance of cloud computing middleware that supports the type of data intensive computing common when building data mining models. We also introduce MalGen, which is a utility for generating data on clouds that can be used with MalStone.

  5. Non-destructive Analysis of Oil-Contaminated Soil Core Samples by X-ray Computed Tomography and Low-Field Nuclear Magnetic Resonance Relaxometry: a Case Study

    Science.gov (United States)

    Mitsuhata, Yuji; Nishiwaki, Junko; Kawabe, Yoshishige; Utsuzawa, Shin; Jinguuji, Motoharu

    2010-01-01

    Non-destructive measurements of contaminated soil core samples are desirable prior to destructive measurements because they allow obtaining gross information from the core samples without touching harmful chemical species. Medical X-ray computed tomography (CT) and time-domain low-field nuclear magnetic resonance (NMR) relaxometry were applied to non-destructive measurements of sandy soil core samples from a real site contaminated with heavy oil. The medical CT visualized the spatial distribution of the bulk density averaged over the voxel of 0.31 × 0.31 × 2 mm3. The obtained CT images clearly showed an increase in the bulk density with increasing depth. Coupled analysis with in situ time-domain reflectometry logging suggests that this increase is derived from an increase in the water volume fraction of soils with depth (i.e., unsaturated to saturated transition). This was confirmed by supplementary analysis using high-resolution micro-focus X-ray CT at a resolution of ∼10 μm, which directly imaged the increase in pore water with depth. NMR transverse relaxation waveforms of protons were acquired non-destructively at 2.7 MHz by the Carr–Purcell–Meiboom–Gill (CPMG) pulse sequence. The nature of viscous petroleum molecules having short transverse relaxation times (T2) compared to water molecules enabled us to distinguish the water-saturated portion from the oil-contaminated portion in the core sample using an M0–T2 plot, where M0 is the initial amplitude of the CPMG signal. The present study demonstrates that non-destructive core measurements by medical X-ray CT and low-field NMR provide information on the groundwater saturation level and oil-contaminated intervals, which is useful for constructing an adequate plan for subsequent destructive laboratory measurements of cores. PMID:21258437
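
    The way a short-T2 oil signal is separated from a long-T2 water signal in a CPMG decay can be illustrated with a two-component exponential fit. The echo train, T2 values and amplitudes below are synthetic assumptions, not data from these cores.

    ```python
    """Toy CPMG relaxation analysis: separating a short-T2 (viscous oil) component
    from a long-T2 (pore water) component in a transverse-relaxation decay.

    Synthetic data with assumed T2 values; not measurements from the study.
    """
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 400e-3, 200)                  # echo times [s]
    true = dict(a_oil=0.6, t2_oil=5e-3, a_wat=0.4, t2_wat=150e-3)   # assumed values
    signal = (true["a_oil"] * np.exp(-t / true["t2_oil"])
              + true["a_wat"] * np.exp(-t / true["t2_wat"]))
    signal += 0.01 * np.random.default_rng(1).normal(size=t.size)   # measurement noise

    def biexp(t, a1, t2_1, a2, t2_2):
        return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

    popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 2e-3, 0.5, 100e-3))
    a1, t2_1, a2, t2_2 = popt
    print(f"short T2 ~ {min(t2_1, t2_2)*1e3:.1f} ms (oil-like), "
          f"long T2 ~ {max(t2_1, t2_2)*1e3:.1f} ms (water-like)")
    ```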

  6. Lessons learned for participation in recent OECD-NEA reactor physics and thermalhydraulic benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Novog, D.R.; Leung, K.H.; Ball, M. [McMaster Univ., Dept. of Engineering Physics, Hamilton, Ontario (Canada)

    2013-07-01

    Over the last 6 years the OECD-NEA has initiated a series of computational benchmarks in the fields of reactor physics and thermalhydraulics. Within this context McMaster University has been a key contributor and has applied several state-of-the-art tools including TSUNAMI, DRAGON, ASSERT, STAR-CCM+, RELAP and TRACE. Considering the tremendous amount of international participation in these benchmarks, there were many lessons, both technical and non-technical, that should be shared. This paper presents a summary of the benchmarks, the results and contributions from McMaster, and the authors' opinion on the overall conclusions gained from these extensive benchmarks. The benchmarks discussed in this paper include the Uncertainty Analysis in Modelling (UAM), the BWR fine mesh bundle test (BFBT), the PWR Subchannel Boiling Test (PSBT), the MATiS mixing experiment and the IAEA supercritical water benchmarks on heat transfer and stability. (author)

  7. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  8. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart

    2015-11-01

    Full Text Available Closed-loop Neuromorphic Benchmarks, by Terrence C. Stewart, Travis DeWolf, Ashley Kleinhans and Chris Eliasmith (University of Waterloo, Canada; Council for Scientific and Industrial Research, South Africa). Submitted to Frontiers in Neuroscience.

  9. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  10. IceChrono v1: a probabilistic model to compute a common and optimal chronology for several ice cores

    Directory of Open Access Journals (Sweden)

    F. Parrenin

    2014-10-01

    Full Text Available Polar ice cores provide exceptional archives of past environmental conditions. Dating ice and air bubbles/hydrates in ice cores is complicated since it involves different dating methods: modeling of the sedimentation process (accumulation of snow at the surface, densification of snow into ice with air trapping, and ice flow), use of dated horizons by comparison to other well-dated targets (other dated paleo-archives or calculated variations of Earth's orbital parameters), use of dated depth intervals, use of Δdepth information (depth shift between synchronous events in the ice matrix and its air/hydrate content), and use of stratigraphic links between ice cores (ice-ice, air-air or mixed ice-air links). Here I propose IceChrono v1, a new probabilistic model to combine these different kinds of chronological information to obtain a common and optimized chronology for several ice cores, as well as its confidence interval. It is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the vertical thinning function. IceChrono is similar in scope to the Datice model, but differs from it in its mathematical, numerical and programming aspects. I apply IceChrono to two dating experiments. The first one is similar to the AICC2012 experiment and yields results similar to those of Datice within a few centuries, which is a confirmation of both the IceChrono and Datice codes. The second experiment involves only the Berkner ice core in Antarctica and produces the first dating of this ice core. IceChrono v1 is freely available under the GPL v3 open source license.

  11. Furthering Baseline Core Lucid Standard Specification in the Context of the History of Lucid, Intensional Programming, and Context-Aware Computing

    CERN Document Server

    Paquet, Joey

    2011-01-01

    This work is multifold. We review the historical literature on the Lucid programming language, its dialects, intensional logic, intensional programming, the implementing systems, and context-oriented and context-aware computing and so on that provide a contextual framework for the converging Core Lucid standard programming model. We are designing a standard specification of a baseline Lucid virtual machine for generic execution of Lucid programs. The resulting Core Lucid language would inherit the properties of generalization attempts of GIPL (1999-2011) and TransLucid (2008-2011) for all future and recent Lucid-implementing systems to follow. We also maintain this work across local research group in order to foster deeper collaboration, maintain a list of recent and historical bibliography and a reference manual and reading list for students. We form a (for now informal) SIGLUCID group to keep track of this standard and historical records with eventual long-term goal through iterative revisions for this work...

  12. Benchmarking Internet of Things devices

    CSIR Research Space (South Africa)

    Kruger, CP

    2014-07-01

    Full Text Available Presented at the International Conference on Industrial Informatics (INDIN), 27-30 July 2014: Benchmarking Internet of Things devices, by C.P. Kruger and G.P. Hancke, Advanced Sensor Networks Research Group, Council for Scientific and Industrial Research, South Africa.

  13. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  14. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  15. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen (Benchmarking University Real Estate: Management Information for Real Estate Decisions)

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report combines two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  16. Diagnostic performance of combined noninvasive coronary angiography and myocardial perfusion imaging using 320 row detector computed tomography: design and implementation of the CORE320 multicenter, multinational diagnostic study.

    Science.gov (United States)

    Vavere, Andrea L; Simon, Gregory G; George, Richard T; Rochitte, Carlos E; Arai, Andrew E; Miller, Julie M; Di Carli, Marcello; Arbab-Zadeh, Armin; Zadeh, Armin A; Dewey, Marc; Niinuma, Hiroyuki; Laham, Roger; Rybicki, Frank J; Schuijf, Joanne D; Paul, Narinder; Hoe, John; Kuribyashi, Sachio; Sakuma, Hajime; Nomura, Cesar; Yaw, Tan Swee; Kofoed, Klaus F; Yoshioka, Kunihiro; Clouse, Melvin E; Brinker, Jeffrey; Cox, Christopher; Lima, Joao A C

    2011-01-01

    Multidetector coronary computed tomography angiography (CTA) is a promising modality for widespread clinical application because of its noninvasive nature and high diagnostic accuracy as found in previous studies using 64 to 320 simultaneous detector rows. It is, however, limited in its ability to detect myocardial ischemia. In this article, we describe the design of the CORE320 study ("Combined coronary atherosclerosis and myocardial perfusion evaluation using 320 detector row computed tomography"). This prospective, multicenter, multinational study is unique in that it is designed to assess the diagnostic performance of combined 320-row CTA and myocardial CT perfusion imaging (CTP) in comparison with the combination of invasive coronary angiography and single-photon emission computed tomography myocardial perfusion imaging (SPECT-MPI). The trial is being performed at 16 medical centers located in 8 countries worldwide. CT has the potential to assess both anatomy and physiology in a single imaging session. The co-primary aim of the CORE320 study is to define the per-patient diagnostic accuracy of the combination of coronary CTA and myocardial CTP to detect physiologically significant coronary artery disease compared with (1) the combination of conventional coronary angiography and SPECT-MPI and (2) conventional coronary angiography alone. If successful, the technology could revolutionize the management of patients with symptomatic CAD.

  17. High-Performance Physics Simulations Using Multi-Core CPUs and GPGPUs in a Volunteer Computing Context

    CERN Document Server

    Karimi, Kamran; Hamze, Firas

    2010-01-01

    This paper presents two conceptually simple methods for parallelizing a Parallel Tempering Monte Carlo simulation in a distributed volunteer computing context, where computers belonging to the general public are used. The first method uses conventional multi-threading. The second method uses CUDA, a graphics card computing system. Parallel Tempering is described, and challenges such as parallel random number generation and mapping of Monte Carlo chains to different threads are explained. While conventional multi-threading on CPUs is well-established, GPGPU programming techniques and technologies are still developing and present several challenges, such as the effective use of a relatively large number of threads. Having multiple chains in Parallel Tempering allows parallelization in a manner that is similar to the serial algorithm. Volunteer computing introduces important constraints to high performance computing, and we show that both versions of the application are able to adapt themselves to the varying an...
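
    The chain-level parallelism exploited here comes from the structure of Parallel Tempering itself: each replica runs an ordinary Metropolis chain at its own temperature, and only the occasional swap step couples neighbouring temperatures. A toy, single-process sketch of that structure with an invented double-well energy is given below; it is not the authors' simulation code.

    ```python
    """Minimal Parallel Tempering sketch: independent Metropolis chains at several
    temperatures plus the replica-exchange (swap) step between neighbouring
    temperatures.  Toy 1-D double-well energy; the chains are updated in a plain
    loop here, but each chain is independent and maps naturally onto a CPU thread
    or a GPU block.
    """
    import numpy as np

    rng = np.random.default_rng(0)
    energy = lambda x: (x**2 - 1.0) ** 2           # double-well toy energy
    temps = np.array([0.05, 0.1, 0.2, 0.5, 1.0])   # temperature ladder (assumed)
    x = rng.normal(size=temps.size)                # one replica per temperature

    for sweep in range(5000):
        # local Metropolis update for every chain (the embarrassingly parallel part)
        prop = x + 0.3 * rng.normal(size=x.size)
        accept = rng.random(x.size) < np.exp(-(energy(prop) - energy(x)) / temps)
        x = np.where(accept, prop, x)

        # replica-exchange step between neighbouring temperatures
        for i in range(temps.size - 1):
            delta = (1/temps[i] - 1/temps[i+1]) * (energy(x[i]) - energy(x[i+1]))
            if rng.random() < min(1.0, np.exp(delta)):
                x[i], x[i+1] = x[i+1], x[i]

    print("final replica positions:", np.round(x, 2))
    ```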

  18. Memory-intensive benchmarks: IRAM vs. cache-based machines

    Energy Technology Data Exchange (ETDEWEB)

    Gaeke, Brian G.; Husbands, Parry; Kim, Hyun Jin; Li, Xiaoye S.; Moon, Hyun Jin; Oliker, Leonid; Yelick, Katherine A.; Biswas, Rupak

    2001-09-29

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic structures, and the ratio of computation to memory operation.
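
    The ratio of computation to memory operations that characterises these benchmarks can be felt even with a crude timing experiment: a streaming reduction over an array much larger than the cache is limited by memory bandwidth, while repeated arithmetic on a cache-resident array is limited by the arithmetic units. The sketch below is such a toy contrast, not one of the paper's benchmarks.

    ```python
    """Toy contrast between a memory-bound and a compute-bound kernel.

    Not one of the paper's benchmarks; just a quick way to see arithmetic
    intensity at work on a cache-based machine.
    """
    import time
    import numpy as np

    big = np.random.rand(50_000_000)       # ~400 MB: far larger than any cache
    small = np.random.rand(100_000)        # ~0.8 MB: fits in cache on most CPUs

    def timeit_once(fn):
        t0 = time.perf_counter()
        fn()
        return time.perf_counter() - t0

    def timed(label, fn, reps=5):
        best = min(timeit_once(fn) for _ in range(reps))
        print(f"{label:34s} best of {reps}: {best*1e3:8.1f} ms")

    timed("streaming sum (memory-bound)", lambda: big.sum())
    timed("repeated math on a small array", lambda: [np.sqrt(small * small + 1.0) for _ in range(400)])
    ```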

  19. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  20. Evaluating the scalability of HEP software and multi-core hardware

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A

    2011-01-01

    As researchers have reached the practical limits of processor performance improvements by frequency scaling, it is clear that the future of computing lies in the effective utilization of parallel and multi-core architectures. Since this significant change in computing is well underway, it is vital for HEP programmers to understand the scalability of their software on modern hardware and the opportunities for potential improvements. This work aims to quantify the benefit of new mainstream architectures to the HEP community through practical benchmarking on recent hardware solutions, including the usage of parallelized HEP applications.

  1. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    Science.gov (United States)

    Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie

    2016-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age scale uncertainty are essential to interpret the climate and environmental records that they contain. It is however a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model includes: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and air dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which confirms both the IceChrono and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals
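
    The least-squares formulation described above can be sketched in one dimension: adjust an accumulation-rate profile so that the implied age scale honours a few dated horizons while staying close to a prior scenario, and hand the residuals to a Levenberg-Marquardt solver. All numbers below are invented, and thinning and lock-in depth are deliberately ignored; this is not the IceChrono1 code.

    ```python
    """Sketch of a dated-horizon inversion of the kind described above.

    Everything here (prior accumulation, horizon ages, uncertainties) is made up;
    thinning and lock-in depth are ignored to keep the sketch one-dimensional.
    """
    import numpy as np
    from scipy.optimize import least_squares

    depth = np.linspace(0, 3000, 31)              # layer boundaries [m]
    dz = np.diff(depth)
    accu_prior = np.full(dz.size, 0.03)           # prior accumulation [m ice / yr] (assumed)
    sigma_prior = 0.3                             # ~30% prior uncertainty in log space

    horizon_depth = np.array([500.0, 1500.0, 2500.0])          # fake dated horizons
    horizon_age = np.array([18_000.0, 60_000.0, 110_000.0])    # years
    sigma_age = np.array([500.0, 1000.0, 2000.0])

    def age_scale(log_accu):
        """Age at layer boundaries from layer thickness / accumulation (no thinning)."""
        return np.concatenate(([0.0], np.cumsum(dz / np.exp(log_accu))))

    def residuals(log_accu):
        ages = np.interp(horizon_depth, depth, age_scale(log_accu))
        r_obs = (ages - horizon_age) / sigma_age            # fit the dated horizons
        r_prior = (log_accu - np.log(accu_prior)) / sigma_prior   # stay near the prior
        return np.concatenate([r_obs, r_prior])

    sol = least_squares(residuals, np.log(accu_prior), method="lm")  # Levenberg-Marquardt
    print("posterior mean accumulation [m ice/yr]:", round(float(np.exp(sol.x).mean()), 4))
    ```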

  2. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  3. Multifunctional Fe3O4/TaO(x) core/shell nanoparticles for simultaneous magnetic resonance imaging and X-ray computed tomography.

    Science.gov (United States)

    Lee, Nohyun; Cho, Hye Rim; Oh, Myoung Hwan; Lee, Soo Hong; Kim, Kangmin; Kim, Byung Hyo; Shin, Kwangsoo; Ahn, Tae-Young; Choi, Jin Woo; Kim, Young-Woon; Choi, Seung Hong; Hyeon, Taeghwan

    2012-06-27

    Multimodal imaging is highly desirable for accurate diagnosis because it can provide complementary information from each imaging modality. In this study, a sol-gel reaction of tantalum(V) ethoxide in a microemulsion containing Fe(3)O(4) nanoparticles (NPs) was used to synthesize multifunctional Fe(3)O(4)/TaO(x) core/shell NPs, which were biocompatible and exhibited a prolonged circulation time. When the NPs were intravenously injected, the tumor-associated vessel was observed using computed tomography (CT), and magnetic resonance imaging (MRI) revealed the high and low vascular regions of the tumor.

  4. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    Science.gov (United States)

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.
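
    The layered structure of such composite evaluations (a base CCSD(T) value plus an ordered ladder of progressively smaller corrections) can be made concrete with a bookkeeping sketch. Every number below is a placeholder chosen only to reflect the stated ordering of importance, not an ANL0/ANL1 or ATcT value.

    ```python
    """Schematic composite-energy bookkeeping: a base CCSD(T) term plus a ladder of
    corrections of decreasing size.  All values are placeholders for illustration.
    """
    contributions_kJ_per_mol = {
        "base CCSD(T)/cc-pVQZ term":               -390.0,   # placeholder magnitude
        "complete-basis-set extrapolation":           -6.0,
        "higher-order excitations":                   -1.5,
        "anharmonic zero-point energy":               +0.8,
        "core-valence correlation":                   -0.9,
        "scalar relativistic":                        +0.4,
        "diagonal Born-Oppenheimer":                  +0.1,
    }

    total = sum(contributions_kJ_per_mol.values())
    for name, value in contributions_kJ_per_mol.items():
        print(f"{name:38s} {value:+8.2f} kJ/mol")
    print(f"{'schematic 0 K total':38s} {total:+8.2f} kJ/mol")
    ```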

  5. Magnetohydrodynamics and heat transfer benchmark problems for liquid-metal flow in rectangular ducts

    Energy Technology Data Exchange (ETDEWEB)

    Sidorenkov, S.I. [D.V. Efremov Scientific Research Institute of Electrophysical Apparatus, St. Petersburg (Russian Federation); Hua, T.Q. [Fusion Power Program, Technology Development Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Araseki, Hideo [Central Research Institute of the Electric Power Industry, 1646 Abiko, Abiko-shi, 270-11 (Japan)

    1995-03-01

    This paper describes four benchmark problems to validate magnetohydrodynamic and heat transfer computer codes. The problems include rectangular duct geometry with uniform and non-uniform magnetic fields, with and without surface heat flux, and various rectangular cross-sections. Two of the problems are based on experiments. Participants in this benchmarking activity come from three countries: Russia, USA and Japan. The solution methods to the problems are described. Results from the different computer codes are presented and compared. (orig.).

  6. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  7. Comparison of dynamical cores for NWP models: comparison of COSMO and Dune

    Science.gov (United States)

    Brdar, Slavko; Baldauf, Michael; Dedner, Andreas; Klöfkorn, Robert

    2013-06-01

    We present a range of numerical tests comparing the dynamical cores of the operationally used numerical weather prediction (NWP) model COSMO and the university code Dune, focusing on their efficiency and accuracy for solving benchmark test cases for NWP. The dynamical core of COSMO is based on a finite difference method whereas the Dune core is based on a Discontinuous Galerkin method. Both dynamical cores are briefly introduced stating possible advantages and pitfalls of the different approaches. Their efficiency and effectiveness is investigated, based on three numerical test cases, which require solving the compressible viscous and non-viscous Euler equations. The test cases include the density current (Straka et al. in Int J Numer Methods Fluids 17:1-22, 1993), the inertia gravity (Skamarock and Klemp in Mon Weather Rev 122:2623-2630, 1994), and the linear hydrostatic mountain waves of (Bonaventura in J Comput Phys 158:186-213, 2000).

  8. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  9. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  10. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  11. Analysis of Cloud Computing Architecture and Its Core Technology

    Institute of Scientific and Technical Information of China (English)

    薛慧丽

    2014-01-01

    This paper introduces and analyzes the architecture of cloud computing and its core technologies. The cloud architecture is divided into two parts: the cloud service layers, comprising SaaS, PaaS and IaaS, and the cloud management layers, comprising the user, mechanism and detection layers. The paper then describes five core technologies of cloud computing: the MapReduce programming model, mass data storage technology, massive data management technology, virtualization technology and cloud computing platform management technology. Finally, it notes that both challenges and opportunities remain for the future of cloud computing.
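
    Of the core technologies listed, the MapReduce programming model is the easiest to illustrate: map records to key/value pairs, shuffle by key, then reduce each key group. The sketch below shows only the programming pattern in a single process, not a distributed cloud implementation.

    ```python
    """Minimal in-process sketch of the MapReduce programming model: map records to
    (key, value) pairs, shuffle by key, then reduce each key group.
    """
    from collections import defaultdict

    def map_phase(record):
        for word in record.split():
            yield word.lower(), 1

    def reduce_phase(key, values):
        return key, sum(values)

    def mapreduce(records):
        groups = defaultdict(list)                 # the "shuffle" step
        for record in records:
            for key, value in map_phase(record):
                groups[key].append(value)
        return dict(reduce_phase(k, v) for k, v in groups.items())

    print(mapreduce(["cloud computing", "cloud platforms", "computing at scale"]))
    ```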

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  13. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  14. Influence of Modelling Options in RELAP5/SCDAPSIM and MAAP4 Computer Codes on Core Melt Progression and Reactor Pressure Vessel Integrity

    Directory of Open Access Journals (Sweden)

    Siniša Šadek

    2010-01-01

    Full Text Available RELAP5/SCDAPSIM and MAAP4 are two widely used severe accident computer codes for the integral analysis of core and reactor pressure vessel behaviour following core degradation. The objective of the paper is to compare code results obtained with different modelling options and to evaluate the influence of the thermal-hydraulic behaviour of the plant on core damage progression. The analysed transient was a postulated station blackout in NPP Krško with leakage from the reactor coolant pump seals. Two groups of calculations were performed, where each group had a different break area and, thus, a different leakage rate. The analyses have shown that the MAAP4 results were more sensitive to varying thermal-hydraulic conditions in the primary system. User-defined parameters had to be carefully selected when the MAAP4 model was developed, in contrast to the RELAP5/SCDAPSIM model, where those parameters did not have any significant impact on the final results.

  15. Numerical simulation of the RAMAC benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, J.E.; Sugihara, M.; Fujiwara, T. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; Nusca, M. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; U.S. Army Research Lab., Ballistics and Weapons Concepts Div., AMSRL-WM-BE, Aberdeen Proving Ground, MD (United States); Wang, X. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; School of Mechanical and Production Engineering, Nanyang Technological Univ. (Singapore); Seiler, F. [Nagoya Univ. (Japan). Dept. of Aerospace Engineering; French-German Research Inst. of Saint-Louis, ISL, Saint-Louis (France)

    2000-11-01

    Numerical simulations of the same ramac geometry and boundary conditions by different numerical and physical models highlight the variety of possible solutions and the strong effect of the chemical kinetics model on the solution. The benchmark test was defined and announced within the community of ramac researchers. Three laboratories undertook the project. The numerical simulations include Navier-Stokes and Euler simulations with various levels of physical models and equations of state. The non-reactive part of the simulation produced similar steady-state results in the three simulations. The chemically reactive part of the simulation produced widely different outcomes. The original experimental data and experimental conditions are presented. A description of each computer code and the resulting flowfield is included. A comparison between the codes and their results is presented. The most critical choice for the simulation was the chemical kinetics model. (orig.)

  16. High performance computing software package for multitemporal Remote-Sensing computations

    Directory of Open Access Journals (Sweden)

    Asaad Chahboun

    2010-10-01

    Full Text Available With the huge volume of satellite data now stored, multitemporal remote sensing study is nowadays one of the most challenging fields of computer science. Multicore hardware support and multithreading can play an important role in speeding up algorithm computations. In the present paper, a software package called the Multitemporal Software Package for Satellite Remote Sensing data (MSPSRS) has been developed for the multitemporal treatment of satellite remote sensing images in a standard format. For portability, the interface was developed using the Qt application framework and the core was developed by integrating C++ classes. MSPSRS can run under different operating systems (i.e., Linux, Mac OS X, Windows, Embedded Linux, Windows CE, etc.). Final benchmark results, using multiple remote sensing biophysical indices, show a speed-up of up to 6X on a quad-core i7 personal computer.
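
    The package itself is written in C++/Qt, but the parallel pattern it relies on - splitting a per-pixel biophysical index over several cores - is easy to illustrate. The sketch below is not MSPSRS code; it is a minimal Python example with hypothetical synthetic bands and a simple NDVI formula, purely to show how tile-level parallelism yields the kind of multi-core speed-up reported above.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def ndvi_tile(bands):
        # bands is a (red_tile, nir_tile) pair; NDVI = (NIR - red) / (NIR + red)
        red, nir = bands
        return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

    def parallel_ndvi(red, nir, n_workers=4):
        red_tiles = np.array_split(red, n_workers)
        nir_tiles = np.array_split(nir, n_workers)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            tiles = list(pool.map(ndvi_tile, zip(red_tiles, nir_tiles)))
        return np.vstack(tiles)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        red = rng.uniform(0.0, 0.3, size=(2000, 2000))   # synthetic red band
        nir = rng.uniform(0.2, 0.8, size=(2000, 2000))   # synthetic near-infrared band
        print(parallel_ndvi(red, nir).mean())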

  17. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput.
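
    The hybrid step/binary search is not spelled out in this abstract, so the following Python sketch shows one plausible reading of it: a coarse stepping phase brackets the load at which the forwarding rate first falls below the offered load, and a binary search then refines the equilibrium point. The measure_forwarding_rate function is a hypothetical stand-in for a real traffic-generator trial, not part of the cited methodology.

    def measure_forwarding_rate(offered_mbps):
        # Toy device model: forwards everything up to 400 Mb/s, degrades when overloaded.
        capacity = 400.0
        return offered_mbps if offered_mbps <= capacity else 0.9 * capacity

    def equilibrium_throughput(max_load, step=50.0, tol=1.0):
        # Phase 1: coarse stepping until the forwarding rate falls below the offered load.
        lo, hi = 0.0, max_load
        load = step
        while load <= max_load:
            if measure_forwarding_rate(load) < load:
                lo, hi = load - step, load
                break
            lo = load
            load += step
        # Phase 2: binary search inside the bracketing interval.
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if measure_forwarding_rate(mid) >= mid:
                lo = mid
            else:
                hi = mid
        return lo

    print(f"estimated equilibrium throughput: {equilibrium_throughput(1000.0):.1f} Mb/s")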

  18. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  19. Stored energy in transformers: calculation by a computer program. [Computer code TFORMR calculates and prints the stored energy in a transformer with an iron core

    Energy Technology Data Exchange (ETDEWEB)

    Willmann, P.A.; Hooper, E.B. Jr.

    1977-02-01

    A computer program was written to calculate the stored energy in a transformer. This result easily yields the inductance and leakage reactance of the transformer and is estimated to be accurate to better than 5 percent. The program was used to calculate the leakage reactance of the main transformer for the LLL neutral beam High Voltage Test Stand.
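
    The step from stored energy to inductance and leakage reactance follows from E = ½LI², so L = 2E/I² and X = 2πfL. A minimal sketch of that post-processing, with illustrative numbers rather than values from the TFORMR report:

    import math

    # E = 0.5 * L * I**2  =>  L = 2E / I**2, and the leakage reactance X = 2*pi*f*L.
    def inductance_from_energy(energy_joules, current_amps):
        return 2.0 * energy_joules / current_amps**2

    def leakage_reactance(energy_joules, current_amps, freq_hz=60.0):
        return 2.0 * math.pi * freq_hz * inductance_from_energy(energy_joules, current_amps)

    # Illustrative numbers only, not values from the TFORMR report.
    print(leakage_reactance(energy_joules=50.0, current_amps=100.0), "ohms")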

  20. Designed armadillo repeat proteins as general peptide-binding scaffolds: consensus design and computational optimization of the hydrophobic core

    DEFF Research Database (Denmark)

    Parmeggiani, Fabio; Pellarin, Riccardo; Larsen, Anders Peter

    2007-01-01

    interactions with peptides or parts of proteins in extended conformation. The conserved binding mode of the peptide in extended form, observed for different targets, makes armadillo repeat proteins attractive candidates for the generation of modular peptide-binding scaffolds. Taking advantage of the large...... number of repeat sequences available, a consensus-based approach combined with a force field-based optimization of the hydrophobic core was used to derive soluble, highly expressed, stable, monomeric designed proteins with improved characteristics compared to natural armadillo proteins. These sequences...

  1. CMFD and GPU acceleration on method of characteristics for hexagonal cores

    Energy Technology Data Exchange (ETDEWEB)

    Han, Yu, E-mail: hanyu1203@gmail.com [School of Nuclear Science and Engineering, Shanghai Jiaotong University, Shanghai 200240 (China); Jiang, Xiaofeng [Shanghai NuStar Nuclear Power Technology Co., Ltd., No. 81 South Qinzhou Road, XuJiaHui District, Shanghai 200000 (China); Wang, Dezhong [School of Nuclear Science and Engineering, Shanghai Jiaotong University, Shanghai 200240 (China)

    2014-12-15

    Highlights: • A merged hex-mesh CMFD method solved via tri-diagonal matrix inversion. • Alternative hardware acceleration using an inexpensive GPU. • A hex-core benchmark with solution to confirm the two acceleration methods. - Abstract: Coarse Mesh Finite Difference (CMFD) has been widely adopted as an effective way to accelerate the source iteration of transport calculations. However, in a core with hexagonal assemblies there are non-hexagonal meshes around the edges of assemblies, causing a problem for CMFD if the CMFD equations are still to be solved via tri-diagonal matrix inversion by simply scanning the whole-core meshes in different directions. To solve this problem, we propose an unequal-mesh CMFD formulation that combines the non-hexagonal cells on the boundary of neighboring assemblies into non-regular hexagonal cells. We also investigated the alternative hardware acceleration of using a graphics processing unit (GPU) on a graphics card in a personal computer. The CUDA toolkit is employed, a parallel computing platform and programming model developed by NVIDIA for harnessing the power of GPUs. To investigate and implement these two acceleration methods, a 2-D hexagonal core transport code using the method of characteristics (MOC) is developed. A hexagonal mini-core benchmark problem is established to confirm the accuracy of the MOC code and to assess the effectiveness of CMFD and GPU parallel acceleration. For this benchmark problem, the CMFD acceleration increases the speed 16 times while the GPU acceleration speeds it up 25 times. When used simultaneously, they provide a speed gain of 292 times.
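
    The tri-diagonal solves referred to above are conventionally done with the Thomas algorithm. The sketch below shows that textbook algorithm in Python, checked against a dense solver; it is not the hexagonal-core CMFD implementation of the cited paper, only the kind of kernel such a directional sweep relies on.

    import numpy as np

    def thomas_solve(a, b, c, d):
        # a: sub-diagonal (length n-1), b: diagonal (length n), c: super-diagonal (length n-1).
        n = len(b)
        cp = np.zeros(n - 1)
        dp = np.zeros(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
        x = np.zeros(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Quick check against a dense solve on a small diagonally dominant system.
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([4.0, 4.0, 4.0, 4.0])
    c = np.array([1.0, 1.0, 1.0])
    d = np.array([5.0, 6.0, 6.0, 5.0])
    A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
    print(np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d)))   # True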

  2. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    International audience; This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample-surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  3. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
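
    As a concrete illustration of such a metric, the sketch below normalises each store's annual electricity use by floor area and flags stores well above the portfolio median. The column names, numbers and the 25% threshold are illustrative assumptions, not values from the guideline.

    import pandas as pd

    stores = pd.DataFrame({
        "store_id":   ["A", "B", "C", "D"],
        "annual_kwh": [410_000, 395_000, 620_000, 405_000],   # illustrative utility data
        "floor_sqft": [2_500, 2_400, 2_550, 2_600],
    })
    stores["eui_kwh_per_sqft"] = stores["annual_kwh"] / stores["floor_sqft"]
    median_eui = stores["eui_kwh_per_sqft"].median()
    stores["needs_attention"] = stores["eui_kwh_per_sqft"] > 1.25 * median_eui
    print(stores)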

  4. Reactor based plutonium disposition - physics and fuel behaviour benchmark studies of an OECD/NEA experts group

    Energy Technology Data Exchange (ETDEWEB)

    D' Hondt, P. [SCK.CEN, Mol (Belgium); Gehin, J. [ORNL, Oak Ridge, TN (United States); Na, B.C.; Sartori, E. [Organisation for Economic Co-Operation and Development, Nuclear Energy Agency, 92 - Issy les Moulineaux (France); Wiesenack, W. [Organisation for Economic Co-Operation and Development/HRP, Halden (Norway)

    2001-07-01

    One of the options envisaged for disposing of weapons-grade plutonium, declared surplus for national defence in the Russian Federation and the USA, is to burn it in nuclear power reactors. The scientific/technical know-how accumulated in the use of MOX as a fuel for electricity generation is of great relevance for the plutonium disposition programmes. An Expert Group of the OECD/NEA is carrying out a series of benchmarks with the aim of facilitating the use of this know-how for meeting this objective. This paper describes the background that led to establishing the Expert Group, and the present status of results from these benchmarks. The benchmark studies cover a theoretical reactor physics benchmark on a VVER-1000 core loaded with MOX, two experimental benchmarks on MOX lattices and a benchmark concerned with MOX fuel behaviour for both solid and hollow pellets. First conclusions are outlined as well as future work. (author)

  5. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  6. Hybrid MPI/OpenMP parallelization of the explicit Volterra integral equation solver for multi-core computer architectures

    KAUST Repository

    Al Jarro, Ahmed

    2011-08-01

    A hybrid MPI/OpenMP scheme for efficiently parallelizing the explicit marching-on-in-time (MOT)-based solution of the time-domain volume (Volterra) integral equation (TD-VIE) is presented. The proposed scheme equally distributes tested field values and operations pertinent to the computation of tested fields among the nodes using the MPI standard, while the source field values are stored on all nodes. Within each node, the OpenMP standard is used to further accelerate the computation of the tested fields. Numerical results demonstrate that the proposed parallelization scheme scales well for problems involving three million or more spatial discretization elements. © 2011 IEEE.
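
    The data layout described above - tested-field unknowns partitioned across nodes, source-field values replicated everywhere - can be sketched with mpi4py, using threaded NumPy/BLAS in place of OpenMP for the intra-node parallelism. This is an illustration of the decomposition only, under those stated assumptions, not the MOT-TD-VIE solver itself.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 2_000                                                    # total number of spatial unknowns
    counts = [n // size + (r < n % size) for r in range(size)]   # rows of tested field per rank
    local_rows = counts[rank]

    source = np.ones(n)                                          # source field, replicated on every rank
    block = np.random.default_rng(rank).random((local_rows, n))  # this rank's interaction block

    local_tested = block @ source                                # threaded BLAS does the intra-node work

    tested = np.empty(n) if rank == 0 else None
    comm.Gatherv(local_tested, (tested, counts) if rank == 0 else None, root=0)
    if rank == 0:
        print("tested-field vector assembled:", tested.shape)

    A script like this would be launched with, for example, mpirun -n 4 python hybrid_sketch.py (the file name is hypothetical).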

  7. A nanocomposite of Au-AgI core/shell dimer as a dual-modality contrast agent for x-ray computed tomography and photoacoustic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Orza, Anamaria; Wu, Hui; Li, Yuancheng; Mao, Hui, E-mail: hmao@emory.edu, E-mail: Xiangyang.Tang@emory.edu [Department of Radiology and Imaging Sciences and Center for Systems Imaging, Emory University School of Medicine, Atlanta, Georgia 30322 (United States); Yang, Yi; Tang, Xiangyang, E-mail: hmao@emory.edu, E-mail: Xiangyang.Tang@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States); Feng, Ting; Wang, Xueding [Department of Biomedical Engineering, University of Michigan School of Medicine, Ann Arbor, Michigan 48109 (United States); Yang, Lily [Department of Surgery, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2016-01-15

    Purpose: To develop a core/shell nanodimer of gold (core) and silver iodide (shell) as a dual-modal contrast-enhancing agent for biomarker targeted x-ray computed tomography (CT) and photoacoustic imaging (PAI) applications. Methods: The gold and silver iodide core/shell nanodimer (Au/AgICSD) was prepared by fusing together components of gold, silver, and iodine. The physicochemical properties of Au/AgICSD were then characterized using different optical and imaging techniques (e.g., high-resolution transmission electron microscopy, scanning transmission electron microscopy, x-ray photoelectron spectroscopy, energy-dispersive x-ray spectroscopy, zeta-potential, and UV-vis). The CT and PAI contrast-enhancing effects were tested and then compared with a clinically used CT contrast agent and Au nanoparticles. To confer biocompatibility and the capability for efficient biomarker targeting, the surface of the Au/AgICSD nanodimer was modified with the amphiphilic diblock polymer and then functionalized with transferrin for targeting the transferrin receptor that is overexpressed in various cancer cells. Cytotoxicity of the prepared Au/AgICSD nanodimer was also tested with both normal and cancer cell lines. Results: The characterization of the prepared Au/AgI core/shell nanostructure confirmed the formation of Au/AgICSD nanodimers. The Au/AgICSD nanodimer is stable under physiological conditions for in vivo applications. The Au/AgICSD nanodimer exhibited higher contrast enhancement in both CT and PAI for dual-modality imaging. Moreover, the transferrin-functionalized Au/AgICSD nanodimer showed specific binding to tumor cells that have a high level of expression of the transferrin receptor. Conclusions: The developed Au/AgICSD nanodimer can be used as a potential biomarker targeted dual-modal contrast agent for both or combined CT and PAI molecular imaging.

  8. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions called the multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high-performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.
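
    The core idea - dynamic programming over a discretised search space with a black-box, simulation-based cost - can be shown compactly. The Python sketch below uses a toy stage/node grid and a hypothetical leg-time simulator; the multi-spline representation and the OpenCL parallelisation of the cited algorithm are not reproduced.

    import numpy as np

    def simulated_leg_time(stage, node_from, node_to):
        # Hypothetical stand-in for the sailing simulator: time to move between grid nodes.
        return 1.0 + 0.1 * abs(node_to - node_from) + 0.05 * (node_to - 3) ** 2 + 0.01 * stage

    def dp_optimal_route(n_stages=6, n_nodes=7):
        cost = np.full((n_stages, n_nodes), np.inf)
        prev = np.zeros((n_stages, n_nodes), dtype=int)
        cost[0, :] = 0.0
        for s in range(1, n_stages):
            for j in range(n_nodes):
                legs = [cost[s - 1, i] + simulated_leg_time(s, i, j) for i in range(n_nodes)]
                prev[s, j] = int(np.argmin(legs))
                cost[s, j] = legs[prev[s, j]]
        # Backtrack the cheapest route to the best final node.
        route = [int(np.argmin(cost[-1]))]
        for s in range(n_stages - 1, 0, -1):
            route.append(int(prev[s, route[-1]]))
        return float(cost[-1].min()), route[::-1]

    print(dp_optimal_route())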

  9. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.

  10. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  11. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    Energy Technology Data Exchange (ETDEWEB)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-03-24

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  12. NRC-BNL BENCHMARK PROGRAM ON EVALUATION OF METHODS FOR SEISMIC ANALYSIS OF COUPLED SYSTEMS.

    Energy Technology Data Exchange (ETDEWEB)

    XU,J.

    1999-08-15

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  13. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  14. Benchmarking of neutron production of heavy-ion transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I. [Oak Ridge National Laboratory, Oak Ridge, TN 37831-6172 (United States); Ronningen, R. M. [Michigan State Univ., National Superconductiong Cyclotron Laboratory, East Lansing, MI 48824-1321 (United States); Heilbronn, L. [Univ. of Tennessee, 1004 Estabrook Rd., Knoxville, TN 37996-2300 (United States)

    2011-07-01

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  15. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  16. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and of the specific conceptual model can increase rapidly. Therefore, numerical verification of such types of models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different sub-problems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in each benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of those types of models for different

  17. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    Science.gov (United States)

    2016-02-01

    claim that autotuning is needed. However, they concentrate on a Message Passing Interface (MPI)/OpenCL approach, whereas we are benchmarking using ... only OpenCL. 4. Backtrack Branch and Bound: The BBB algorithm is a way to search for a solution to a problem among a variety of potential solutions ... heterogeneous computers. This is especially true when using a portable application program interface (API) such as OpenCL, which was used for this work. There

  18. CFD Simulation of Thermal-Hydraulic Benchmark V1000CT-2 Using ANSYS CFX

    Directory of Open Access Journals (Sweden)

    Thomas Höhne

    2009-01-01

    Full Text Available Plant measured data from VVER-1000 coolant mixing experiments were used within the OECD/NEA and AER coupled code benchmarks for light water reactors to test and validate computational fluid dynamics (CFD) codes. The task is to compare the various calculations with measured data, using specified boundary conditions and core power distributions. The experiments, which are provided for CFD validation, include cooling down or heating up of a single loop by disturbing the heat transfer in the steam generator through the steam valves, at low reactor power and with all main coolant pumps in operation. CFD calculations have been performed using a numerical grid model of 4.7 million tetrahedral elements. The Best Practice Guidelines for using CFD in nuclear reactor safety applications have been followed. Different advanced turbulence models were utilized in the numerical simulation. The results show a clear sector formation of the affected loop at the downcomer, lower plenum and core inlet, which corresponds to the measured values. The maximum local values of the relative temperature rise in the calculation are in the same range as in the experiment. Based on this result, it is now possible to improve the mixing models which are usually used in system codes.

  19. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Maday, Yvon, E-mail: maday@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions and Institut Universitaire de France, F-75005, Paris (France); Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); Brown Univ, Division of Applied Maths, Providence, RI (United States); Riahi, Mohamed Kamel, E-mail: riahi@cmap.polytechnique.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CMAP, Inria-Saclay and X-Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex (France); Salomon, Julien, E-mail: salomon@ceremade.dauphine.fr [CEREMADE, Univ Paris-Dauphine, Pl. du Mal. de Lattre de Tassigny, F-75016, Paris (France)

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method, based on a tetrahedral meshing of the computational domain representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
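
    The predictor-corrector structure of the parareal iteration, U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k), is easiest to see on a scalar test equation. The sketch below applies it to du/dt = -a·u with an explicit-Euler coarse propagator and a finely time-stepped fine propagator; it is a toy illustration, not the 3D neutron diffusion solver of the cited paper.

    import numpy as np

    a, T, N = 2.0, 1.0, 10             # decay rate, time horizon, number of coarse time slices
    dT = T / N

    def G(u):                          # coarse propagator: one explicit Euler step per slice
        return u * (1.0 - a * dT)

    def F(u, substeps=100):            # fine propagator: many explicit Euler steps per slice
        dt = dT / substeps
        for _ in range(substeps):
            u = u * (1.0 - a * dt)
        return u

    U = np.zeros(N + 1)
    U[0] = 1.0
    for n in range(N):                 # initial coarse prediction (serial)
        U[n + 1] = G(U[n])

    for k in range(5):                 # parareal corrections; the F calls are parallelizable
        F_prev = [F(U[n]) for n in range(N)]
        U_new = U.copy()
        for n in range(N):
            U_new[n + 1] = G(U_new[n]) + F_prev[n] - G(U[n])
        U = U_new

    print("parareal:", U[-1], " exact:", np.exp(-a * T))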

  20. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  1. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  2. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-01-01

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.

  3. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-04-01

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.

  4. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
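
    The first of those metrics, the centred root-mean-square error, can be written in a few lines. In the sketch below both series are anomalised (their own means removed) before differencing, so a constant offset is not penalised; the synthetic series are illustrative assumptions, not HOME benchmark data.

    import numpy as np

    def centred_rmse(homogenized, truth):
        h = homogenized - homogenized.mean()
        t = truth - truth.mean()
        return np.sqrt(np.mean((h - t) ** 2))

    rng = np.random.default_rng(1)
    truth = 10.0 + 0.01 * np.arange(600) + rng.normal(0.0, 0.5, 600)   # synthetic monthly series
    homogenized = truth + 0.3 + rng.normal(0.0, 0.2, 600)              # constant offset + residual noise
    print(centred_rmse(homogenized, truth))   # close to 0.2: the constant offset is not penalised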

  5. An advanced coarse-grained nucleosome core particle model for computer simulations of nucleosome-nucleosome interactions under varying ionic conditions.

    Science.gov (United States)

    Fan, Yanping; Korolev, Nikolay; Lyubartsev, Alexander P; Nordenskiöld, Lars

    2013-01-01

    In the eukaryotic cell nucleus, DNA exists as chromatin, a compact but dynamic complex with histone proteins. The first level of DNA organization is the linear array of nucleosome core particles (NCPs). The NCP is a well-defined complex of 147 bp DNA with an octamer of histones. Interactions between NCPs are of paramount importance for higher levels of chromatin compaction. The polyelectrolyte nature of the NCP implies that nucleosome-nucleosome interactions must exhibit a great influence from both the ionic environment as well as the positively charged and highly flexible N-terminal histone tails, protruding out from the NCP. The large size of the system precludes a modelling analysis of chromatin at an all-atom level and calls for coarse-grained approximations. Here, a model of the NCP that includes the globular histone core and the flexible histone tails, described by one particle per amino acid and taking their net charge into account, is proposed. DNA wrapped around the histone core was approximated at the level of two base pairs represented by one bead (bases and sugar) plus four beads of charged phosphate groups. Computer simulations, using a Langevin thermostat, in a dielectric continuum with explicit monovalent (K(+)), divalent (Mg(2+)) or trivalent (Co(NH3)6(3+)) cations were performed for systems with one or ten NCPs. Increase of the counterion charge results in a switch from repulsive NCP-NCP interaction in the presence of K(+), to partial aggregation with Mg(2+) and to strong mutual attraction of all 10 NCPs in the presence of CoHex(3+). The new model reproduced experimental results and the structure of the NCP-NCP contacts is in agreement with available data. Cation screening, ion-ion correlations and tail bridging contribute to the NCP-NCP attraction and the new NCP model accounts for these interactions.

  6. An advanced coarse-grained nucleosome core particle model for computer simulations of nucleosome-nucleosome interactions under varying ionic conditions.

    Directory of Open Access Journals (Sweden)

    Yanping Fan

    Full Text Available In the eukaryotic cell nucleus, DNA exists as chromatin, a compact but dynamic complex with histone proteins. The first level of DNA organization is the linear array of nucleosome core particles (NCPs). The NCP is a well-defined complex of 147 bp DNA with an octamer of histones. Interactions between NCPs are of paramount importance for higher levels of chromatin compaction. The polyelectrolyte nature of the NCP implies that nucleosome-nucleosome interactions must exhibit a great influence from both the ionic environment as well as the positively charged and highly flexible N-terminal histone tails, protruding out from the NCP. The large size of the system precludes a modelling analysis of chromatin at an all-atom level and calls for coarse-grained approximations. Here, a model of the NCP that includes the globular histone core and the flexible histone tails, described by one particle per amino acid and taking their net charge into account, is proposed. DNA wrapped around the histone core was approximated at the level of two base pairs represented by one bead (bases and sugar) plus four beads of charged phosphate groups. Computer simulations, using a Langevin thermostat, in a dielectric continuum with explicit monovalent (K(+)), divalent (Mg(2+)) or trivalent (Co(NH3)6(3+)) cations were performed for systems with one or ten NCPs. Increase of the counterion charge results in a switch from repulsive NCP-NCP interaction in the presence of K(+), to partial aggregation with Mg(2+) and to strong mutual attraction of all 10 NCPs in the presence of CoHex(3+). The new model reproduced experimental results and the structure of the NCP-NCP contacts is in agreement with available data. Cation screening, ion-ion correlations and tail bridging contribute to the NCP-NCP attraction and the new NCP model accounts for these interactions.
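
    A heavily stripped-down flavour of such simulations is sketched below: a handful of charged beads evolved with an Euler-Maruyama Langevin integrator and a screened-Coulomb pair force in an implicit dielectric. The parameters, the screening and the absence of bonded DNA/tail terms are all simplifying assumptions; this is not the coarse-grained NCP force field of the cited work.

    import numpy as np

    kT, gamma, dt = 1.0, 1.0, 1e-3            # reduced units: temperature, friction, time step
    charges = np.array([+1.0, +1.0, -1.0, -1.0, +2.0])
    rng = np.random.default_rng(2)
    pos = rng.uniform(-2.0, 2.0, size=(5, 3))
    vel = np.zeros_like(pos)

    def pair_forces(pos, charges, kappa=1.0, eps=1.0, d_min=0.5):
        # Screened (Yukawa) Coulomb pair forces; d_min keeps the toy integrator stable.
        f = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r = pos[i] - pos[j]
                d = max(np.linalg.norm(r), d_min)
                mag = charges[i] * charges[j] * np.exp(-kappa * d) * (1.0 / d + kappa) / (eps * d)
                f[i] += mag * r / d
                f[j] -= mag * r / d
        return f

    for step in range(5_000):                  # Euler-Maruyama Langevin update (unit masses)
        force = pair_forces(pos, charges)
        noise = np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal(pos.shape)
        vel += dt * (force - gamma * vel) + noise
        pos += dt * vel

    print("final bead positions:\n", pos)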

  7. Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2016-01-01

    Full Text Available The famous global optimization SCE-UA method, which has been widely used in the field of environmental model parameter calibration, is an effective and robust method. However, the SCE-UA method has a high computational load, which prohibits the application of SCE-UA to high-dimensional and complex problems. In recent years, computer hardware, such as multi-core CPUs and many-core GPUs, has improved significantly. This much more powerful new hardware and its software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we proposed two parallel SCE-UA methods and implemented them on an Intel multi-core CPU and an NVIDIA many-core GPU using OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted in this paper to test and compare the performance of the serial and parallel SCE-UA methods. Based on the results of the comparison, some useful advice is given on how to properly use the parallel SCE-UA methods.
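
    The Griewank test function named above is f(x) = 1 + Σ x_i²/4000 − Π cos(x_i/√i). The sketch below evaluates it for a population of candidate points in parallel with Python's multiprocessing; it mirrors only the parallel objective evaluation, not the SCE-UA shuffling and complex-evolution steps, and not the OpenMP/CUDA Fortran implementations of the paper.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def griewank(x):
        x = np.asarray(x, dtype=float)
        i = np.arange(1, x.size + 1)
        return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        population = rng.uniform(-600.0, 600.0, size=(64, 10))   # 64 candidates in 10 dimensions
        with ProcessPoolExecutor() as pool:
            fitness = list(pool.map(griewank, population))
        best = int(np.argmin(fitness))
        print("best candidate value:", fitness[best])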

  8. PB@Au Core-Satellite Multifunctional Nanotheranostics for Magnetic Resonance and Computed Tomography Imaging in Vivo and Synergetic Photothermal and Radiosensitive Therapy.

    Science.gov (United States)

    Dou, Yan; Li, Xue; Yang, Weitao; Guo, Yanyan; Wu, Menglin; Liu, Yajuan; Li, Xiaodong; Zhang, Xuening; Chang, Jin

    2017-01-18

    To integrate multiple diagnostic and therapeutic strategies on a single particle through simple and effective methods is still challenging for nanotheranostics. Herein, we develop multifunctional nanotheranostic PB@Au core-satellite nanoparticles (CSNPs) based on Prussian blue nanoparticles (PBNPs) and gold nanoparticles (AuNPs), which are two kinds of intrinsic theranostic nanomaterials, for magnetic resonance (MR)-computed tomography (CT) imaging and synergistic photothermal and radiosensitive therapy (PTT-RT). PBNPs as cores enable T1- and T2-weighted MR contrast and a strong photothermal effect, while AuNPs as satellites offer CT enhancement and radiosensitization. As revealed by both MR and CT imaging, CSNPs realized efficient tumor localization by passively targeted accumulation after intravenous injection. In vivo studies showed that CSNPs produced a synergistic PTT-RT action, achieving almost complete suppression of tumor growth without observable recurrence. Moreover, the absence of obvious systemic toxicity in mice confirmed the good biocompatibility of CSNPs. These results raise new possibilities for clinical nanotheranostics with multimodal diagnostic and therapeutic coalescent design.

  9. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible, as the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  10. Influence of image acquisition settings on radiation dose and image quality in coronary angiography by 320-detector volume computed tomography: the CORE320 pilot experience

    Directory of Open Access Journals (Sweden)

    Armin Arbab-Zadeh

    2012-06-01

    Full Text Available The objective of this study was to investigate the impact of image acquisition settings and patients’ characteristics on image quality and radiation dose for coronary angiography by 320-row computed tomography (CT). CORE320 is a prospective study to investigate the diagnostic performance of 320-detector CT for detecting coronary artery disease and associated myocardial ischemia. A run-in phase in 65 subjects was conducted to test the adequacy of the computed tomography angiography (CTA) acquisition protocol. Tube current, exposure window, and number of cardiac beats per acquisition were adjusted according to subjects’ gender, heart rate, and body mass index (BMI). Main outcome measures were image quality, assessed by contrast/noise measurements and qualitatively on a 4-point scale, and radiation dose, estimated by the dose-length-product. Average heart rate at image acquisition was 55.0±7.3 bpm. Median Agatston calcium score was 27.0 (interquartile range 1-330). All scans were prospectively triggered. Single heart beat image acquisition was obtained in 61 of 65 studies (94%). Sixty-one studies (94%) and 437 of 455 arterial segments (96%) were of diagnostic image quality. Estimated radiation dose was significantly greater in obese (5.3±0.4 mSv) than normal weight (4.6±0.3 mSv) or overweight (4.7±0.3 mSv) subjects (P<0.001). BMI was the strongest factor influencing image quality (odds ratio=1.457, P=0.005). The CORE320 CTA image acquisition protocol achieved a good balance between image quality and radiation dose for a 320-detector CT system. However, image quality in obese subjects was reduced compared to normal weight subjects, possibly due to tube voltage/current restrictions mandated by the study protocol.

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  12. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  13. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, the barriers to and advantages of its implementation, and benchmarking frameworks. Approach: Thirty res...

  14. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    continuously in a benchmarking process. This chapter will broadly examine where, with some justification, the concept of benchmarking can be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 takes up the area of auditing. The final section of the chapter summarises ... the considerations on benchmarking in connection with both areas ...

  15. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    Science.gov (United States)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on, i.e. to quantify, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of and addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  18. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  19. Developing Benchmarks for Solar Radio Bursts

    Science.gov (United States)

    Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.

    2016-12-01

    Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and Microwave (4000-20000) bands. The preliminary benchmarks were derived based on previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where it is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, or phase 2, benchmarks.

  20. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article focuses on the concept of benchmarking by presenting and discussing its various facets. Four different applications of benchmarking are described to illustrate the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is then treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  1. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking laboratory operations is well established for comparing organizational performance with that of other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article reviews the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  2. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  3. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  4. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  5. Benchmarking Implementations of Functional Languages with "Pseudoknot", a float-intensive benchmark

    NARCIS (Netherlands)

    Hartel, Pieter H.; Feeley, M.; Alt, M.; Augustsson, L.

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  6. Laser anemometer measurements and computations for transonic flow conditions in an annular cascade of high turning core turbine vanes

    Science.gov (United States)

    Goldman, Louis J.

    1993-01-01

    An advanced laser anemometer (LA) was used to measure the axial and tangential velocity components in an annular cascade of turbine stator vanes operating at transonic flow conditions. The vanes tested were based on a previous redesign of the first-stage stator in a two-stage turbine for a high-bypass-ratio engine. The vanes produced 75 deg of flow turning. Tests were conducted on a 0.771-scale model of the engine-sized stator. The advanced LA fringe system employed an extremely small 50-micron diameter probe volume. Window correction optics were used to ensure that the laser beams did not uncross in passing through the curved optical access port. Experimental LA measurements of velocity and turbulence were obtained at the mean radius upstream of, within, and downstream of the stator vane row at an exit critical velocity ratio of 1.050 at the hub. Static pressures were also measured on the vane surface. The measurements are compared, where possible, with calculations from a three-dimensional inviscid flow analysis. Comparisons were also made with the results obtained previously when these same vanes were tested at the design exit critical velocity ratio of 0.896 at the hub. The data are presented in both graphical and tabulated form so that they can be readily compared against other turbomachinery computations.

  7. Development of solutions to benchmark piping problems. [EPIPE code

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M.; Chang, T.Y.; Prachuktam, S.

    1976-01-01

    Piping analysis is one of the most extensive engineering efforts required for the design of nuclear reactors. Such analysis is normally carried out with computer programs that can handle complex piping geometries and various loading conditions (static or dynamic). A brief outline is presented of the theoretical background for the EPIPE program, together with four benchmark problems: two for the static case and two for the dynamic case. The results obtained from EPIPE runs compare well with those available from known analytical solutions or from other independent computer programs.

  8. GPU in Physics Computation: Case Geant4 Navigation

    CERN Document Server

    Seiskari, Otto; Niemi, Tapio

    2012-01-01

    General purpose computing on graphics processing units (GPUs) is a potential method of speeding up scientific computation with low cost and high energy efficiency. We experimented with the particle physics simulation toolkit Geant4, used at CERN, to benchmark its geometry navigation functionality on a GPU. The goal was to find out whether Geant4 physics simulations could benefit from GPU acceleration and how difficult it is to modify Geant4 code to run on a GPU. We ported selected parts of Geant4 code to C99 & CUDA and implemented a simple gamma physics simulation utilizing this code to measure efficiency. The performance of the program was tested by running it on two different platforms: an NVIDIA GeForce GTX 470 GPU and a 12-core AMD CPU system. Our conclusion was that GPUs can be a competitive alternative to multi-core computers, but porting existing software efficiently is challenging.
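
    The record does not include code, so the following is a hypothetical sketch of the kind of batched geometry-navigation kernel such a benchmark times: a vectorized distance-to-boundary computation for an axis-aligned box, measured on the CPU with NumPy. The array sizes, box, and timing loop are assumptions; a GPU comparison would swap in a GPU array library while keeping the same kernel structure.

```python
# Minimal sketch (not Geant4 code): a batched "distance to boundary"
# computation for an axis-aligned box, of the kind a navigation benchmark
# would time on CPU versus GPU. Array shapes and the box are assumptions.
import time
import numpy as np

def distance_to_box(origins, directions, box_min, box_max):
    """Slab-method distance from each ray origin to the box surface."""
    inv = 1.0 / directions                      # assumes no zero components
    t1 = (box_min - origins) * inv
    t2 = (box_max - origins) * inv
    t_near = np.minimum(t1, t2).max(axis=1)
    t_far = np.maximum(t1, t2).min(axis=1)
    hit = t_far >= np.maximum(t_near, 0.0)
    return np.where(hit, np.where(t_near > 0, t_near, t_far), np.inf)

rng = np.random.default_rng(1)
n = 1_000_000
origins = rng.uniform(-5, 5, size=(n, 3))
directions = rng.normal(size=(n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

t0 = time.perf_counter()
d = distance_to_box(origins, directions, np.array([-1.0] * 3), np.array([1.0] * 3))
print(f"CPU: {time.perf_counter() - t0:.3f} s, finite hits: {np.isfinite(d).sum()}")
# A GPU comparison could swap `np` for a GPU array library (e.g. CuPy)
# while keeping the same kernel structure.
```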

  9. EVALUATION OF VARIOUS COMPILER OPTIMIZATION TECHNIQUES RELATED TO MIBENCH BENCHMARK APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Jeyaraj Andrews

    2013-01-01

    Full Text Available Tuning compiler optimizations for a given application on a particular computer architecture is not an easy task, because modern computer architectures push compilers to higher levels of optimization. These modern compilers usually provide a large number of optimization techniques, and applying all of them to a given application can degrade program performance and is time consuming. The performance of a program, measured in time and space, depends on the machine architecture, the problem domain and the settings of the compiler. The brute-force method of trying all possible combinations is infeasible, as its complexity is O(2^n) even for n on-off optimizations. Although many existing techniques are available to search the space of compiler options for optimal settings, most of those approaches are expensive and time consuming. In this study, a machine learning algorithm has been modified and used to reduce the complexity of selecting suitable compiler options for programs running on a specific hardware platform. This machine learning algorithm is compared with an advanced combined elimination strategy in terms of tuning time and normalized tuning time. The experiments were conducted on a Core i7 processor, and the algorithms were tested with different MiBench benchmark applications. It has been observed that the performance achieved by the machine learning algorithm is better than that of the advanced combined elimination strategy.
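
    As an illustration of the search problem described above (not the paper's actual algorithm), here is a greedy flag-elimination loop in the spirit of combined elimination: starting from a set of candidate options, it repeatedly drops any flag whose removal does not slow the benchmark. The flag list, compile command, and benchmark program are placeholders.

```python
# Illustrative sketch of a greedy flag-elimination search of the kind used
# for compiler-option tuning. The flag list, compile command, and benchmark
# program are placeholders, not the paper's actual setup.
import subprocess
import time

CANDIDATE_FLAGS = ["-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer"]

def measure(flags, source="benchmark.c", runs=3):
    """Compile with the given flags and return the best observed runtime."""
    subprocess.run(["gcc", "-O2", *flags, source, "-o", "bench"], check=True)
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(["./bench"], check=True)
        best = min(best, time.perf_counter() - t0)
    return best

def combined_elimination(flags):
    """Drop, one at a time, any flag whose removal does not slow the program."""
    active = list(flags)
    baseline = measure(active)
    improved = True
    while improved:
        improved = False
        for f in list(active):
            trial = [x for x in active if x != f]
            t = measure(trial)
            if t <= baseline:          # removing f does not hurt: keep it out
                active, baseline = trial, t
                improved = True
    return active, baseline

# Example usage (requires a real benchmark.c):
# best_flags, best_time = combined_elimination(CANDIDATE_FLAGS)
```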

  10. High precision quantum-chemical treatment of adsorption: Benchmarking physisorption of molecular hydrogen on graphane

    Science.gov (United States)

    Usvyat, Denis

    2015-09-01

    A multilevel hierarchical ab initio protocol for calculating adsorption on non-conducting surfaces is presented. It employs a fully periodic treatment, which reaches local Møller-Plesset perturbation theory of second order (MP2) with correction for the basis set incompleteness via the local F12 technique. Post-MP2 corrections are calculated using finite clusters. That includes the coupled cluster treatment in the local and canonical frameworks (up to perturbative quadruples) and correlated core (with MP2). Using this protocol, the potential surface of hydrogen molecules adsorbed on graphane was computed. According to the calculations, hydrogen molecules are adsorbed on graphane in an orientation perpendicular to the surface, with the minimum of the potential surface of around -3.6 kJ/mol located at a distance of 3.85 Å between the bond center of the hydrogen molecule and the mid-plane of graphane. The adsorption sites along the path from the downward-pointing carbon to the ring center of the graphane are energetically virtually equally preferable, which can enable nearly free translations of hydrogen molecules along these paths. Consequently, the hydrogen molecules on graphane most likely form a non-commensurate monolayer. The analysis of the remaining errors reveals a very high accuracy of the computed potential surface, with an error bar of a few tenths of a kJ/mol. The obtained results are a high-precision benchmark for further theoretical and experimental studies of hydrogen molecules interacting with graphane.

  11. High precision quantum-chemical treatment of adsorption: Benchmarking physisorption of molecular hydrogen on graphane

    Energy Technology Data Exchange (ETDEWEB)

    Usvyat, Denis, E-mail: denis.usvyat@chemie.uni-regensburg.de [Institute for Physical and Theoretical Chemistry, Universität Regensburg, Universitätsstrasse 31, D-93040 Regensburg (Germany)

    2015-09-14

    A multilevel hierarchical ab initio protocol for calculating adsorption on non-conducting surfaces is presented. It employs a fully periodic treatment, which reaches local Møller-Plesset perturbation theory of second order (MP2) with correction for the basis set incompleteness via the local F12 technique. Post-MP2 corrections are calculated using finite clusters. That includes the coupled cluster treatment in the local and canonical frameworks (up to perturbative quadruples) and correlated core (with MP2). Using this protocol, the potential surface of hydrogen molecules adsorbed on graphane was computed. According to the calculations, hydrogen molecules are adsorbed on graphane in an orientation perpendicular to the surface, with the minimum of the potential surface of around −3.6 kJ/mol located at a distance of 3.85 Å between the bond center of the hydrogen molecule and the mid-plane of graphane. The adsorption sites along the path from the downward-pointing carbon to the ring center of the graphane are energetically virtually equally preferable, which can enable nearly free translations of hydrogen molecules along these paths. Consequently, the hydrogen molecules on graphane most likely form a non-commensurate monolayer. The analysis of the remaining errors reveals a very high accuracy of the computed potential surface, with an error bar of a few tenths of a kJ/mol. The obtained results are a high-precision benchmark for further theoretical and experimental studies of hydrogen molecules interacting with graphane.

  12. Benchmarking the next generation of homology inference tools.

    Science.gov (United States)

    Saripella, Ganapathi Varma; Sonnhammer, Erik L L; Forslund, Kristoffer

    2016-09-01

    Over the last decades, vast numbers of sequences were deposited in public databases. Bioinformatics tools allow homology and consequently functional inference for these sequences. New profile-based homology search tools have been introduced, allowing reliable detection of remote homologs, but have not been systematically benchmarked. To provide such a comparison, which can guide bioinformatics workflows, we extend and apply our previously developed benchmark approach to evaluate the 'next generation' of profile-based approaches, including CS-BLAST, HHSEARCH and PHMMER, in comparison with the non-profile-based search tools NCBI-BLAST, USEARCH, UBLAST and FASTA. We generated challenging benchmark datasets based on protein domain architectures within either the PFAM + Clan, SCOP/Superfamily or CATH/Gene3D domain definition schemes. From each dataset, homologous and non-homologous protein pairs were aligned using each tool, and standard performance metrics calculated. We further measured congruence of domain architecture assignments in the three domain databases. CS-BLAST and PHMMER had the highest overall accuracy. FASTA, UBLAST and USEARCH showed large trade-offs of accuracy for speed optimization. Profile methods are superior at inferring remote homologs, but the difference in accuracy between methods is relatively small. PHMMER and CS-BLAST stand out with the highest accuracy, yet still at a reasonable computational cost. Additionally, we show that less than 0.1% of Swiss-Prot protein pairs considered homologous by one database are considered non-homologous by another, implying that these classifications represent equivalent underlying biological phenomena, differing mostly in coverage and granularity. Benchmark datasets and all scripts are available at http://sonnhammer.org/download/Homology_benchmark. Contact: forslund@embl.de. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
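
    A small sketch of the evaluation step described above: given search scores for known homologous and non-homologous protein pairs, compute a ROC AUC and the accuracy at a score threshold. The scores below are synthetic placeholders, not the study's data.

```python
# Minimal sketch of the evaluation step: given search scores for known
# homologous and non-homologous protein pairs, compute ROC AUC and the
# accuracy at a chosen threshold. The scores are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
homolog_scores = rng.normal(loc=50, scale=15, size=1000)      # placeholder
nonhomolog_scores = rng.normal(loc=20, scale=10, size=1000)   # placeholder

scores = np.concatenate([homolog_scores, nonhomolog_scores])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# ROC AUC via the rank-sum (Mann-Whitney) identity.
order = scores.argsort()
ranks = np.empty_like(order, dtype=float)
ranks[order] = np.arange(1, len(scores) + 1)
n_pos, n_neg = labels.sum(), (1 - labels).sum()
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

threshold = 35.0
accuracy = ((scores >= threshold) == labels.astype(bool)).mean()
print(f"AUC = {auc:.3f}, accuracy at threshold {threshold} = {accuracy:.3f}")
```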

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data). In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  14. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
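
    A brief sketch of the correlation part of such a sensitivity study: Pearson and Spearman coefficients between each sampled input and a response. The 300-by-17 sample matrix and the response surrogate below are synthetic stand-ins, not the Dakota/BISON data.

```python
# Sketch of the correlation part of such a sensitivity study: Pearson and
# Spearman coefficients between each sampled input and one response.
# The 300x17 sample matrix and the response are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_samples, n_inputs = 300, 17
X = rng.uniform(0.0, 1.0, size=(n_samples, n_inputs))
# Hypothetical response, e.g. a fuel centreline temperature surrogate.
y = 900 + 400 * X[:, 0] + 150 * X[:, 3] ** 2 + rng.normal(0, 20, n_samples)

for j in range(n_inputs):
    pearson_r, _ = stats.pearsonr(X[:, j], y)
    spearman_r, _ = stats.spearmanr(X[:, j], y)
    print(f"input {j:2d}: Pearson = {pearson_r:+.2f}, Spearman = {spearman_r:+.2f}")
```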

  15. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  16. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of work and personal lives. At this point, the computer is so common that we scarcely notice it in our view. It is difficult to envision that, not so long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  17. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Abstract Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  18. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  19. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: improve the management and technical development of software-intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  20. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  1. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
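
    As a hedged illustration, the sketch below evaluates two of the closed-form benchmarks this work refers to, using the expressions I believe appear in the published version: a secret-key capacity of 1 - p for the qubit erasure channel and 1 - H2(p) for the qubit dephasing channel. These formulas should be checked against the paper before reuse.

```python
# Sketch of evaluating repeaterless benchmarks of the kind derived in this
# work. The closed forms used here -- K = 1 - p for the qubit erasure channel
# and K = 1 - H2(p) for the qubit dephasing channel -- are quoted from my
# reading of the published version and should be checked against it.
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def erasure_key_capacity(p):
    """Secret-key capacity (bits/use) of a qubit erasure channel, prob. p."""
    return 1.0 - p

def dephasing_key_capacity(p):
    """Two-way capacity (bits/use) of a qubit dephasing channel, prob. p."""
    return 1.0 - binary_entropy(p)

for p in (0.05, 0.1, 0.25):
    print(f"p={p:.2f}: erasure K = {erasure_key_capacity(p):.3f}, "
          f"dephasing K = {dephasing_key_capacity(p):.3f}")
```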

  2. Heterogeneous Distributed Computing for Computational Aerosciences

    Science.gov (United States)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, following detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  3. Steady-state benchmarks of DK4D: A time-dependent, axisymmetric drift-kinetic equation solver

    Energy Technology Data Exchange (ETDEWEB)

    Lyons, B. C. [Princeton University, Princeton, New Jersey 08544 (United States); Jardin, S. C. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Ramos, J. J. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States)

    2015-05-15

    The DK4D code has been written to solve a set of time-dependent, axisymmetric, finite-Larmor-radius drift-kinetic equations (DKEs) for the non-Maxwellian part of the electron and ion distribution functions using the full, linearized Fokker–Planck–Landau collision operator. The plasma is assumed to be in the low- to finite-collisionality regime, as is found in the cores of modern and future magnetic confinement fusion experiments. Each DKE is formulated such that the perturbed distribution function carries no net density, parallel momentum, or kinetic energy. Rather, these quantities are contained within the background Maxwellians and would be evolved by an appropriate set of extended magnetohydrodynamic (MHD) equations. This formulation allows for straight-forward coupling of DK4D to existing extended MHD time evolution codes. DK4D uses a mix of implicit and explicit temporal representations and finite element and spectral spatial representations. These, along with other computational methods used, are discussed extensively. Steady-state benchmarks are then presented comparing the results of DK4D to expected analytic results at low collisionality, qualitatively, and to the Sauter analytic fits for the neoclassical conductivity and bootstrap current, quantitatively. These benchmarks confirm that DK4D is capable of solving for the correct, gyroaveraged distribution function in stationary magnetic equilibria. Furthermore, the results presented demonstrate how the exact drift-kinetic solution varies with collisionality as a function of the magnetic moment and the poloidal angle.

  4. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    Full Text Available The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely adopt and standardize older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
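
    A minimal sketch of the basic quantities behind such measurements, speedup and parallel efficiency versus thread count; the timings are placeholders rather than SPEC OMPM2001 results.

```python
# Sketch of the basic quantities behind such measurements: speedup and
# parallel efficiency versus thread count. The timings are placeholders,
# not SPEC OMPM2001 results.
timings = {1: 1200.0, 2: 640.0, 4: 350.0, 8: 210.0, 16: 150.0}  # seconds

t_serial = timings[1]
for threads, t in sorted(timings.items()):
    speedup = t_serial / t
    efficiency = speedup / threads
    print(f"{threads:2d} threads: speedup = {speedup:4.1f}, "
          f"efficiency = {efficiency:4.0%}")
```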

  5. VERIFIKASI PAKET PROGRAM MVP-II DAN SRAC2006 PADA KASUS TERAS REAKTOR VERA BENCHMARK.

    Directory of Open Access Journals (Sweden)

    Jati Susilo

    2015-03-01

    Full Text Available In this study, a verification calculation of the VERA benchmark was carried out for the Zero Power Physics Test (ZPPT) case of the Watts Bar Unit 1 reactor core. The reactor is a 1000 MWe class PWR designed by Westinghouse, composed of 193 17×17 fuel assemblies with three UO2 enrichments: 2.1 wt%, 2.619 wt% and 3.1 wt%. The k-eff value and power factor distribution were calculated for the first operating cycle of the core at beginning of cycle (BOC) and hot zero power (HZP) conditions. Control rod positions were distinguished as uncontrolled (all control rods withdrawn from the core) and controlled (control rod Bank D inserted in the core). The computer codes used for the calculations were MVP-II and the CITATION module of SRAC2006, with the ENDF/B-VII.0 cross-section library. The results show that the differences in core k-eff between the reference and MVP-II (-0.07% and -0.014%) and SRAC2006 (0.92% and 0.99%) for the controlled and uncontrolled conditions are very small, remaining below 1%. The differences in the maximum core power factor for the controlled and uncontrolled conditions are 0.38% and 1.53% for MVP-II, and 1.13% and -2.45% for SRAC2006, relative to the reference. Both computer codes therefore reproduce the reference values. For the determination of core criticality, the MVP-II results are more conservative than those of SRAC2006. Keywords: MVP-II, SRAC2006, PWR, VERA
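
    A small sketch of the comparison arithmetic used above: the relative difference of a computed k-eff and maximum power factor against a reference, expressed in percent and in pcm. The numerical values are placeholders, not the benchmark results.

```python
# Sketch of the comparison arithmetic used above: relative difference of a
# computed k-eff and power factor against a reference, in percent and pcm.
# The numerical values are placeholders, not the benchmark results.
def relative_diff_percent(calc, ref):
    return 100.0 * (calc - ref) / ref

def keff_diff_pcm(calc, ref):
    # 1 pcm = 1e-5 in units of Delta-k/k
    return 1.0e5 * (calc - ref) / ref

k_ref, k_calc = 1.00000, 0.99930          # placeholder k-eff values
f_ref, f_calc = 1.500, 1.506              # placeholder max power factors
print(f"k-eff: {relative_diff_percent(k_calc, k_ref):+.3f}% "
      f"({keff_diff_pcm(k_calc, k_ref):+.0f} pcm)")
print(f"max power factor: {relative_diff_percent(f_calc, f_ref):+.2f}%")
```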

  6. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  7. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing, with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs run by users each day, which was already meeting the computing-model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per RAW event. The central collisions are more complex and...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M.-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M.-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  10. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on the definition of benchmarking, the barriers to and advantages of its implementation, and existing benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprise industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA cycle and the Six Sigma DMAIC methodology; it provides a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment, the respondents found that the implementation method gave companies a starting point for benchmarking and guided them towards the goals set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to help ensure its success.

  11. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergy is developed. The sources of synergy in benchmarking of competitive advantages are identified. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  12. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergy is developed. The sources of synergy in benchmarking of competitive advantages are identified. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  13. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  14. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users be careful to ensure reproducibility of results based on th

  15. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking s

  16. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments, addressing validity, alignment, reliability, fairness and bias, accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  17. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  18. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  19. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  20. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of those data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. These analyses, and their role in identifying areas of modeling strengths and weaknesses in MELCOR, are summarized.

  1. Gaia FGK Benchmark Stars: Effective temperatures and surface gravities

    CERN Document Server

    Heiter, U; Gustafsson, B; Korn, A J; Soubiran, C; Thévenin, F

    2015-01-01

    Large Galactic stellar surveys and new generations of stellar atmosphere models and spectral line formation computations need to be subjected to careful calibration and validation and to benchmark tests. We focus on cool stars and aim at establishing a sample of 34 Gaia FGK Benchmark Stars with a range of different metallicities. The goal was to determine the effective temperature and the surface gravity independently from spectroscopy and atmospheric models as far as possible. Fundamental determinations of Teff and logg were obtained in a systematic way from a compilation of angular diameter measurements and bolometric fluxes, and from a homogeneous mass determination based on stellar evolution models. The derived parameters were compared to recent spectroscopic and photometric determinations and to gravity estimates based on seismic data. Most of the adopted diameter measurements have formal uncertainties around 1%, which translate into uncertainties in effective temperature of 0.5%. The measurements of bol...
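
    A minimal sketch of the fundamental relation behind such determinations, Teff = (4 Fbol / (sigma_SB * theta_LD^2))^(1/4), with theta_LD the limb-darkened angular diameter in radians; it also shows the roughly 0.5% Teff shift produced by a 1% diameter change quoted above. The input flux and diameter are placeholders.

```python
# Sketch of the fundamental relation behind such determinations:
# Teff = (4*Fbol / (sigma_SB * theta_LD**2))**0.25, with theta_LD the
# limb-darkened angular diameter in radians. Input values are placeholders.
import math

SIGMA_SB = 5.670374419e-8          # Stefan-Boltzmann constant [W m^-2 K^-4]
MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)

def teff_from_diameter(f_bol_w_m2, theta_mas):
    theta = theta_mas * MAS_TO_RAD
    return (4.0 * f_bol_w_m2 / (SIGMA_SB * theta**2)) ** 0.25

f_bol = 2.5e-8                     # placeholder bolometric flux [W m^-2]
theta = 5.0                        # placeholder angular diameter [mas]
t0 = teff_from_diameter(f_bol, theta)
t1 = teff_from_diameter(f_bol, theta * 1.01)   # 1% larger diameter
print(f"Teff = {t0:.0f} K; a 1% diameter change shifts it by "
      f"{100 * abs(t1 - t0) / t0:.2f}%")
```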

  2. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  3. In situ visualization on cores with different boundary conditions through X-ray computed tomography scanner (CT-Scanner) during spontaneous imbibition

    Science.gov (United States)

    Kim, T.; Kovscek, A. R.

    2013-12-01

    Spontaneous imbibition (SI) is defined as the displacement of a non-wetting phase by a wetting phase through the action of capillary forces in porous media. Spontaneous imbibition may occur as countercurrent or cocurrent multiphase flow. SI is an important test of rock wettability and is relevant to oil recovery from rocks of many different types of wettability. The rate of SI depends on permeability and water/oil relative permeability, medium shape and boundary conditions, fluid viscosity, interfacial tension, and wettability, among other factors. This study investigates the effect of characteristic length (CL), boundary conditions (BC), and initial water saturation on the rate of spontaneous imbibition. We conduct countercurrent and cocurrent SI tests using cylindrical Berea sandstone (water-wet) and Indiana limestone (weakly wetting) cores in an X-ray computed tomography scanner and an imbibition cell with different boundary conditions and initial water saturations. Brine (1 wt% NaCl) is used as the wetting fluid. Decane (n-C10) and Blandol are used as non-wetting fluids, respectively, to compare the effect of mobility ratio. The observed 2-D and 3-D saturation profile histories within each rock show clearly different imbibition patterns for each boundary condition. The low-permeability limestones also have more heterogeneous features than the sandstones. The effect of characteristic length on the imbibition recovery curve was investigated using dimensionless time (tD). CL had an inverse effect on the rate of spontaneous imbibition within the same core samples. In addition, we used three different boundary conditions: (1) all faces open (AFO), (2) two ends open (TEO, i.e., inlet and outlet faces), and (3) one end open (OEO, i.e., one face of the core). The BC experiments showed the effect of total open surface area on the oil production rate of spontaneous imbibition at different Swi. In addition, the generalized correlation (Aronofsky's equation
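
    For reference, a sketch of one commonly used dimensionless-time scaling for spontaneous imbibition (the Ma et al. form); whether the study used exactly this form is an assumption, and the input values below are placeholders.

```python
# Sketch of one commonly used dimensionless-time scaling for spontaneous
# imbibition (the Ma et al. form); whether the study used exactly this form
# is an assumption, and the input values are placeholders.
import math

def dimensionless_time(t_s, k_m2, phi, sigma_n_per_m, mu_w, mu_nw, lc_m):
    """t_D = t * sqrt(k/phi) * sigma / sqrt(mu_w*mu_nw) / Lc^2 (SI units)."""
    return t_s * math.sqrt(k_m2 / phi) * sigma_n_per_m \
           / math.sqrt(mu_w * mu_nw) / lc_m**2

t_d = dimensionless_time(
    t_s=3600.0,            # 1 hour of imbibition
    k_m2=100e-15,          # ~100 mD, Berea-like permeability
    phi=0.20,              # porosity
    sigma_n_per_m=0.045,   # brine/decane interfacial tension, placeholder
    mu_w=1.0e-3,           # brine viscosity [Pa s]
    mu_nw=0.9e-3,          # decane viscosity [Pa s]
    lc_m=0.025,            # characteristic length, placeholder
)
print(f"t_D = {t_d:.1f}")
```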

  4. Nonlinear Resonance Benchmarking Experiment at the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, I; Giovannozzi, Massimo; Martini, M; Métral, Elias

    2003-01-01

    As a first step of a space charge - nonlinear resonance benchmarking experiment over a large number of turns, beam loss and emittance evolution were measured over 1 s on a 1.4 GeV kinetic energy flat-bottom in the presence of a single octupole. By lowering the working point towards the resonance a gradual transition from a loss-free core emittance blow-up to a regime dominated by continuous loss was found. Our 3D simulations with analytical space charge show that trapping on the resonance due to synchrotron oscillation causes the observed core emittance growth as well as halo formation, where the latter is explained as the source of the observed loss.

  5. Parallel Structure Based on Multi-Core Computing for Radar System Simulation

    Institute of Scientific and Technical Information of China (English)

    王磊; 卢显良; 陈明燕; 张伟; 张顺生

    2014-01-01

    To address the bottleneck of slow software simulation in the echo-generation and signal-processing stages of a sequential simulation architecture, a multi-data-link computing model based on a shared-memory multi-core processor is proposed; building multiple data links that are simulated in parallel improves software simulation efficiency. Exploiting the fact that radar events within the same scheduling interval are mutually independent, the model is described in terms of data division, task allocation, time synchronization, and load monitoring and measurement. A Pentium(R) Dual-Core E5200 CPU with 2 GB of memory was used to test a target scene with 20 batches. Simulation results show that, compared with the traditional serial radar simulation, the average data-frame processing time decreases by 37.5% and the data-frame speedup curve exhibits good acceleration behaviour, greatly reducing the radar system simulation time.
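
    A minimal sketch, not the authors' code, of how the serial-versus-parallel frame-processing comparison above can be measured: the same synthetic per-frame workload is timed sequentially and with a process pool, and the speedup is reported.

```python
# Sketch of measuring frame-processing speedup of a parallel implementation
# against a serial one, in the spirit of the comparison above. The per-frame
# workload is a synthetic stand-in for echo generation + signal processing.
import time
from multiprocessing import Pool

def process_frame(seed):
    """Stand-in for generating and processing one radar data frame."""
    acc = 0.0
    for i in range(200_000):
        acc += ((seed + i) % 7) * 0.5
    return acc

if __name__ == "__main__":
    frames = list(range(64))

    t0 = time.perf_counter()
    serial = [process_frame(f) for f in frames]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=4) as pool:
        parallel = pool.map(process_frame, frames)
    t_parallel = time.perf_counter() - t0

    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s, "
          f"speedup {t_serial / t_parallel:.2f}x")
```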

  6. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in a...

  7. Proteomics Core

    Data.gov (United States)

    Federal Laboratory Consortium — Proteomics Core is the central resource for mass spectrometry based proteomics within the NHLBI. The Core staff help collaborators design proteomics experiments in...

  8. Performance implications from sizing a VM on multi-core systems: A Data analytic application s view

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Seung-Hwan [ORNL; Horey, James L [ORNL; Begoli, Edmon [ORNL; Yao, Yanjun [University of Tennessee, Knoxville (UTK); Cao, Qing [University of Tennessee, Knoxville (UTK)

    2013-01-01

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  9. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data-taking period in November produced a first scientific paper, which is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn from and debrief on this first, intense period, and to make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid-year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since the last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made in site readiness. It also reviewed the policy under which Tier-2s are associated with Physics Groups. Such associations are decided twice per ye...

  11. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  16. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

    DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization and fingerprint comparison. In this article, we first survey the main methods that have been applied in the literature at each of these stages. Secondly, we focus on lane-segmentation and band-detection algorithms, as these are the steps that usually require user intervention, and identify the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used both for lane segmentation and band detection, and evaluate their performance using our benchmark. We conclude from that study that the average profile algorithm is the best starting point for lane segmentation and band detection.
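
    To make the "average profile" idea concrete, the sketch below collapses a gel image to a column-wise mean intensity profile and treats local maxima of that profile as candidate lane centres. It is a rough illustration of the general approach, not the authors' benchmarked implementation, and the synthetic image and thresholds are invented for the example.

```python
# Minimal average-profile sketch for lane detection in a gel image.
# Assumes lanes run vertically and are brighter than the background;
# illustrative only, not the implementations evaluated in the benchmark.
import numpy as np
from scipy.signal import find_peaks

def lane_centres(image: np.ndarray, min_separation: int = 20):
    """Candidate lane-centre columns of a 2-D grayscale gel image."""
    profile = image.mean(axis=0)                              # mean intensity per column
    profile = (profile - profile.min()) / (np.ptp(profile) + 1e-12)
    peaks, _ = find_peaks(profile, distance=min_separation, prominence=0.1)
    return peaks

if __name__ == "__main__":
    # Synthetic gel: three bright vertical lanes on a noisy dark background.
    rng = np.random.default_rng(0)
    img = rng.normal(10.0, 2.0, size=(200, 300))
    for centre in (60, 150, 240):
        img[:, centre - 8:centre + 8] += 50.0
    print("detected lane centres (columns):", lane_centres(img))
```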

  17. Benchmarking of methods for genomic taxonomy.

    Science.gov (United States)

    Larsen, Mette V; Cosentino, Salvatore; Lukjancenko, Oksana; Saputra, Dhany; Rasmussen, Simon; Hasman, Henrik; Sicheritz-Pontén, Thomas; Aarestrup, Frank M; Ussery, David W; Lund, Ole

    2014-05-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is, that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties distinguishing closely related species that only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified 93% to 97% of the isolates in the evaluation sets.
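
    The k-mer idea behind KmerFinder can be conveyed with a toy scorer that measures the overlap between the k-mer set of a query sequence and those of candidate reference genomes. The sketch below is a deliberate simplification (a Jaccard overlap on short k-mers over invented sequences), not the published KmerFinder algorithm or its database.

```python
# Toy k-mer overlap classifier, loosely inspired by the k-mer approach above.
# Sequences, k, and the scoring rule are illustrative simplifications.
from typing import Dict, Set

def kmer_set(seq: str, k: int) -> Set[str]:
    """All overlapping k-mers of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(query: str, references: Dict[str, str], k: int = 16) -> str:
    """Reference whose k-mer set has the highest Jaccard overlap with the query."""
    q = kmer_set(query, k)
    def jaccard(name: str) -> float:
        r = kmer_set(references[name], k)
        return len(q & r) / max(len(q | r), 1)
    return max(references, key=jaccard)

if __name__ == "__main__":
    refs = {  # invented reference sequences
        "species_A": "ATGGCGTACGTTAGCTAGCTAGGCTAACGGATCGATCGTACGATCGGCTA",
        "species_B": "TTGACCGGTTAACCGGTTAACCGGATATATATCCGGAACCGGTTAACCGG",
    }
    query = "GCGTACGTTAGCTAGCTAGGCTAACGGATCGATCGTACGATC"   # fragment of species_A
    print(best_match(query, refs, k=8))                     # expected: species_A
```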

  18. Computer code and users' guide for the preliminary analysis of dual-mode space nuclear fission solid core power and propulsion systems, NUROC3A. AMS report No. 1239b

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, R.A.; Smith, W.W.

    1976-06-30

    The three-volume report describes a dual-mode nuclear space power and propulsion system concept that employs an advanced solid-core nuclear fission reactor coupled via heat pipes to one of several electric power conversion systems. The second volume describes the computer code and users' guide for the preliminary analysis of the system.

  19. Core-seis: a code for LMFBR core seismic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chellapandi, P.; Ravi, R.; Chetal, S.C.; Bhoje, S.B. [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Reactor Group]

    1995-12-31

    This paper deals with a computer code CORE-SEIS specially developed for seismic analysis of LMFBR core configurations. For demonstrating the prediction capability of the code, results are presented for one of the MONJU reactor core mock ups which deals with a cluster of 37 subassemblies kept in water. (author). 3 refs., 7 figs., 2 tabs.

  20. Combining Coronary Angiography and Myocardial Perfusion by Computed Tomography in the Identification of Flow-Limiting Stenosis – The CORE320 study

    Science.gov (United States)

    Magalhães, Tiago A.; Kishi, Satoru; George, Richard; Arbab-Zadeh, Armin; Vavere, Andrea; Cox, Christopher; Matheson, Matthew B.; Miller, Julie; Brinker, Jeffrey; Di Carli, Marcelo; Rybicki, Frank J.; Rochitte, Carlos E.; Clouse, Melvin; Lima, João A.C.

    2015-01-01

    Background The combination of coronary computed tomography angiography (CTA) and myocardial CT perfusion (CTP) is gaining increasing acceptance, but a standardized approach to be implemented in the clinical setting is necessary. Objectives To investigate the accuracy of a combined coronary CTA and myocardial CTP comprehensive protocol compared to coronary CTA alone, using a combination of invasive coronary angiography (ICA) and Single-Photon Emission Computed Tomography (SPECT) as reference. Methods Three hundred eighty-one patients included in the CORE320 trial were analyzed in this study. Flow-limiting stenosis was defined as the presence of ≥50% stenosis by ICA with a related perfusion deficit by SPECT. The combined CTA+CTP definition of disease was the presence of a ≥50% stenosis with a related perfusion deficit. All data sets were analyzed by two experienced readers, aligning anatomical findings by CTA with perfusion deficits by CTP. Results Mean patient age was 62±6 years (66% male), 27% with a prior history of myocardial infarction. In a per-patient analysis, sensitivity for CTA alone was 93%, specificity 54%, positive predictive value (PPV) 55%, negative predictive value (NPV) 93%, and overall accuracy 69%. After combining CTA and CTP, sensitivity was 78%, specificity 73%, NPV 64%, PPV 85%, and overall accuracy 75%. In a per-vessel analysis, overall accuracy of CTA alone was 73%, as compared to 79% for the combination of CTA and CTP. Combining coronary CTA and myocardial CTP findings through a comprehensive protocol is feasible. While sensitivity is lower, specificity and overall accuracy are higher than assessment by coronary CTA alone when compared against a reference standard of stenosis with an associated perfusion deficit. PMID:25977111
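
    The per-patient figures quoted above follow directly from the usual 2x2 confusion-matrix definitions. The short sketch below recomputes sensitivity, specificity, PPV, NPV and accuracy from raw counts; the counts used in the example are hypothetical and are not taken from the CORE320 data.

```python
# Standard diagnostic-accuracy metrics from a 2x2 confusion matrix.
# The example counts are hypothetical placeholders, not CORE320 results.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

if __name__ == "__main__":
    for name, value in diagnostic_metrics(tp=110, fp=90, fn=8, tn=173).items():
        print(f"{name:12s} {100 * value:5.1f}%")
```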

  1. Benchmark solutions for transport in $d$-dimensional Markov binary mixtures

    CERN Document Server

    Larmier, Coline; Malvagi, Fausto; Mazzolo, Alain; Zoia, Andrea

    2016-01-01

    Linear particle transport in stochastic media is key to such relevant applications as neutron diffusion in randomly mixed immiscible materials, light propagation through engineered optical materials, and inertial confinement fusion, to name only a few. We extend the pioneering work by Adams, Larsen and Pomraning (recently revisited by Brantley) by considering a series of benchmark configurations for mono-energetic and isotropic transport through Markov binary mixtures in dimension $d$. The stochastic media are generated by resorting to Poisson random tessellations in $1d$ slab, $2d$ extruded, and full $3d$ geometry. For each realization, particle transport is performed by Monte Carlo simulation. The distributions of the transmission and reflection coefficients on the free surfaces of the geometry are subsequently estimated, and the average values over the ensemble of realizations are computed. Reference solutions for the benchmark have never be...
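
    A drastically reduced, one-dimensional version of the ensemble-averaging procedure can convey the structure of such a benchmark: each realization tessellates a purely absorbing slab into alternating materials with exponentially distributed chord lengths (homogeneous Markov mixing), the transmission of that realization follows from the accumulated optical depth, and the reported observable is the average over many realizations. The cross sections and mean chord lengths below are illustrative, not the values of the benchmark suite, and scattering is deliberately omitted.

```python
# 1-D sketch: ensemble-averaged transmission through a Markov binary mixture.
# Purely absorbing materials, so each realization is analytic; all parameter
# values are illustrative placeholders rather than benchmark data.
import numpy as np

rng = np.random.default_rng(1)

L = 10.0                       # slab thickness
sigma = (0.1, 1.0)             # absorption cross sections of materials 0 and 1
mean_chord = (1.0, 0.5)        # mean chord lengths (Poisson/Markov mixing)

def realization_transmission() -> float:
    """Tessellate the slab with exponential chords; return exp(-optical depth)."""
    # Material at the entrance sampled according to the volume fractions.
    mat = 0 if rng.random() < mean_chord[0] / sum(mean_chord) else 1
    x, tau = 0.0, 0.0
    while x < L:
        seg = min(rng.exponential(mean_chord[mat]), L - x)   # clip at the boundary
        tau += sigma[mat] * seg
        x += seg
        mat = 1 - mat                                        # switch material at interface
    return float(np.exp(-tau))

t = np.array([realization_transmission() for _ in range(20_000)])
print(f"ensemble-averaged transmission: {t.mean():.4f} "
      f"+/- {t.std(ddof=1) / np.sqrt(t.size):.4f}")
```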

  2. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  3. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html

  4. Plans to update benchmarking tool.

    Science.gov (United States)

    Stokoe, Mark

    2013-02-01

    The use of the current AssetMark system by hospital health facilities managers and engineers (in Australia) has decreased to a point where no activity is occurring. A number of reasons have been cited, including cost, the time required, the slowness of the process, and the level of information required. Based on current levels of activity, it would not be of any value to IHEA, or to its members, to continue with this form of AssetMark. For AssetMark to remain viable, it needs to be developed as a tool seen to be of value to healthcare facilities managers, and not just healthcare facility engineers. Benchmarking is still a very important requirement in the industry, and AssetMark can fulfil this need provided that it remains abreast of customer needs. The proposed future direction is to develop an online version of AssetMark with its current capabilities regarding the capture of data (12 Key Performance Indicators), reporting, and user interaction. The system would also provide end-users with access to live reporting features via a user-friendly web interface linked through the IHEA web page.

  5. Academic Benchmarks for Otolaryngology Leaders.

    Science.gov (United States)

    Eloy, Jean Anderson; Blake, Danielle M; D'Aguillo, Christine; Svider, Peter F; Folbe, Adam J; Baredes, Soly

    2015-08-01

    This study aimed to characterize current benchmarks for academic otolaryngologists serving in positions of leadership and identify factors potentially associated with promotion to these positions. Information regarding chairs (or division chiefs), vice chairs, and residency program directors was obtained from faculty listings and organized by degree(s) obtained, academic rank, fellowship training status, sex, and experience. Research productivity was characterized by (a) successful procurement of active grants from the National Institutes of Health and prior grants from the American Academy of Otolaryngology-Head and Neck Surgery Foundation Centralized Otolaryngology Research Efforts program and (b) scholarly impact, as measured by the h-index. Chairs had the greatest amount of experience (32.4 years) and were the least likely to have multiple degrees, with 75.8% having an MD degree only. Program directors were the most likely to be fellowship trained (84.8%). Women represented 16% of program directors, 3% of chairs, and no vice chairs. Chairs had the highest scholarly impact (as measured by the h-index) and the greatest external grant funding. This analysis characterizes the current picture of leadership in academic otolaryngology. Chairs, when compared to their vice chair and program director counterparts, had more experience and greater research impact. Women were poorly represented among all academic leadership positions. © The Author(s) 2015.
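
    Because the comparison leans on the h-index, a brief reminder of how that metric is computed may be useful; the citation counts in the sketch are invented purely for illustration.

```python
# h-index: the largest h such that the author has h papers with >= h citations each.
def h_index(citations) -> int:
    ranked = sorted(citations, reverse=True)
    # With counts sorted in descending order, the condition c >= rank is monotone.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Hypothetical citation counts, purely for illustration:
print(h_index([42, 17, 9, 6, 5, 3, 1]))   # -> 5
```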

  6. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
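
    A stripped-down flavour of the knockout idea can be sketched as follows: simulate discrete-time SIR on a temporal edge list and score each node by how much the mean outbreak size shrinks when that node is removed for the whole run. This collapses the per-(node, time) conditioning of the TKO score into a single per-node knockout, and the toy network, rates and seeding are invented for the example.

```python
# Simplified node-knockout influence score on a temporal contact network (SIR).
# The paper's TKO conditions on removal at each (node, time); here the knockout
# spans the whole run. Network, rates and seeding are illustrative placeholders.
import random
from statistics import mean

def sir_size(contacts, n_nodes, seed, beta=0.6, gamma=0.3, removed=frozenset(), rng=None):
    """Final ever-infected count of one SIR run over time-ordered contact lists."""
    rng = rng or random.Random()
    if seed in removed:
        return 0
    state = {v: "S" for v in range(n_nodes) if v not in removed}
    state[seed] = "I"
    for edges in contacts:                        # contacts[t] = list of (u, v) at step t
        for u, v in edges:
            if u in removed or v in removed:
                continue
            for a, b in ((u, v), (v, u)):         # undirected contact
                if state[a] == "I" and state[b] == "S" and rng.random() < beta:
                    state[b] = "I"
        for v in list(state):                     # recoveries at the end of step t
            if state[v] == "I" and rng.random() < gamma:
                state[v] = "R"
    return sum(1 for s in state.values() if s != "S")

def knockout_scores(contacts, n_nodes, runs=300):
    """Drop in mean outbreak size (random seeds) when each node is knocked out."""
    def avg(removed):
        rng = random.Random(42)
        return mean(sir_size(contacts, n_nodes, rng.randrange(n_nodes),
                             removed=removed, rng=rng) for _ in range(runs))
    baseline = avg(frozenset())
    return {v: baseline - avg(frozenset({v})) for v in range(n_nodes)}

if __name__ == "__main__":
    # Toy temporal network: node 0 is an early hub, node 4 only appears late.
    contacts = [[(0, 1), (0, 2)], [(0, 3), (1, 2)], [(2, 3)], [(3, 4)]]
    scores = knockout_scores(contacts, n_nodes=5)
    for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"node {node}: knockout score {score:+.2f}")
```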

  7. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  8. The analysis of the OECD/NEA/NSC PBMR-400 benchmark problem using PARCS-DIREKT

    Energy Technology Data Exchange (ETDEWEB)

    Seker, V.; Downar, T. J. [Purdue Univ., 400 Central Drive, West Lafayette, IN 47907 (United States)]

    2006-07-01

    The OECD/NEA/NSC PBMR-400 benchmark problem was developed to support the validation and verification efforts for the PBMR design. This paper describes the analysis of this problem using the PARCS-DIREKT coupled code system. The benchmark problem involves the use of two different cross-section libraries: one was generated from a VSOP equilibrium core calculation and has no dependence on core conditions; the second library provides dependence on five state parameters and was designed for transient analysis. The paper reports the steady-state cases using the VSOP set of cross-sections. The results are shown to be in good agreement with those of VSOP. Also reported are the results of the steady-state thermal-hydraulic DIREKT solution with a given power profile obtained from the VSOP equilibrium core calculation. This analysis provides some insight into the most important parameters in the design of the PBMR-400. (authors)

  9. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  11. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, and conversion to RAW format, after which the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  12. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  14. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focused on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operations teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  16. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  17. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  18. Performance Evaluation and Benchmarking of Next Intelligent Systems

    Energy Technology Data Exchange (ETDEWEB)

    del Pobil, Angel [Jaume-I University]; Madhavan, Raj [ORNL]; Bonsignorio, Fabio [Heron Robots, Italy]

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components installation is now deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  20. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight which effects, possibilities and challenges follow in the wake of using this kind of benchmarking. In conclusion, it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use...