WorldWideScience

Sample records for assembly computational benchmark

  1. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios that are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating and comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios that include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to obtain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for
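
    As a hedged illustration of the comparisons such a benchmark suite enables, the sketch below summarizes several transport approaches by calculated-to-experimental (C/E) ratio and runtime. All approach labels and numbers are invented, not taken from the report.

    ```python
    # Hypothetical sketch: summarizing a detection benchmark by C/E ratio and
    # wall-clock time for each transport approach. All values are invented.
    results = {
        # approach: (calculated detector response, wall-clock seconds)
        "analog Monte Carlo":      (1.02e4, 3600.0),
        "MC + variance reduction": (1.05e4, 420.0),
        "hybrid (ADVANTG-style)":  (1.03e4, 95.0),
    }
    experiment = 1.00e4  # measured detector response (invented)

    for name, (calc, secs) in results.items():
        print(f"{name:24s}  C/E = {calc / experiment:5.3f}  time = {secs:7.1f} s")
    ```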

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance: the performance impact of optimization was evaluated in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized in more detail in this report, along with smaller efforts supported by this grant.
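
    The merged machine/program characterization described above reduces, in its simplest form, to a dot product: estimated runtime is the sum over operation types of the program's operation counts times the machine's measured per-operation times. A minimal sketch under that assumption (the operation set and all numbers are illustrative, not the actual characterizer's):

    ```python
    # Abstract-machine performance prediction, in miniature: combine a machine
    # characterization (time per operation) with a program characterization
    # (dynamic operation counts) to estimate execution time. Invented numbers.
    machine_times = {"flop": 5e-9, "mem_ref": 2e-9, "branch": 1e-9}  # s/op
    program_counts = {"flop": 4.0e9, "mem_ref": 6.0e9, "branch": 1.0e9}

    t_est = sum(n * machine_times[op] for op, n in program_counts.items())
    print(f"estimated execution time: {t_est:.2f} s")  # ~33 s here
    ```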

  3. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  4. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
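
    The fixed-time scheme can be sketched in a few lines: every machine receives the same time budget and is rated by how far it progresses through an ever-finer scalable task. A minimal, hedged sketch; the midpoint-rule integration task is an invented stand-in for the patented task store.

    ```python
    import time

    def fixed_time_rating(interval_s: float = 1.0) -> int:
        """Run a scalable task for a fixed interval; rate by progress achieved."""
        deadline = time.perf_counter() + interval_s
        n = 1
        while time.perf_counter() < deadline:
            h = 1.0 / n  # ever-increasing resolution of the same problem
            approx = sum(((i + 0.5) * h) ** 2 for i in range(n)) * h
            n *= 2
        return n // 2  # finest resolution completed within the interval

    print("benchmark rating (resolution reached in 1 s):", fixed_time_rating())
    ```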

  5. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    ... the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much ... state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.
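
    As a toy illustration of the idea behind multiparty computation (additive secret sharing between two servers; not the actual protocol or benchmarking function used in the paper), each client splits its private value into shares that look random on their own:

    ```python
    import secrets

    P = 2**61 - 1  # public modulus (toy choice)

    def share(value: int):
        """Split a value into two additive shares mod P, one per server."""
        r = secrets.randbelow(P)
        return r, (value - r) % P  # each share alone reveals nothing

    clients = [37, 52, 41]  # private inputs, e.g. per-bank efficiency figures
    server1, server2 = zip(*(share(v) for v in clients))

    # Each server aggregates only its own shares; recombining the two partial
    # sums yields the joint result without exposing any individual input.
    total = (sum(server1) + sum(server2)) % P
    print("jointly computed aggregate:", total)  # 130
    ```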

  6. Benchmark Calculations For A VVER-1000 Assembly Using SRAC

    International Nuclear Information System (INIS)

    This work presents the neutronic calculation results of a VVER-1000 assembly using SRAC with 107 energy groups in comparison with the benchmark values in the OECD/NEA report. The main neutronic characteristics calculated in this comparison include the infinite multiplication factor (k-inf), nuclide densities as a function of burnup, and the pin-wise power distribution. Calculations were conducted for various conditions of fuel, coolant, and boron content in the coolant. (author)

  7. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (an E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E. coli and 53.5% (95% CI: 34.4-72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) higher for the E. coli and human assemblies, respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
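
    A hedged sketch of how such a wall-clock comparison could be summarized: the percentage difference of mean runtimes with a simple percentile-bootstrap confidence interval. The timings are synthetic, and this is not necessarily the authors' statistical procedure.

    ```python
    import random

    random.seed(0)
    emr = [412.0 + random.gauss(0, 30) for _ in range(20)]  # synthetic minutes
    gce = [265.0 + random.gauss(0, 25) for _ in range(20)]

    def pct_diff(a, b):
        """Percentage difference of mean(a) relative to mean(b)."""
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return 100.0 * (ma - mb) / mb

    boot = sorted(
        pct_diff(random.choices(emr, k=len(emr)), random.choices(gce, k=len(gce)))
        for _ in range(2000)
    )
    lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
    print(f"EMR slower by {pct_diff(emr, gce):.1f}% (95% CI: {lo:.1f}-{hi:.1f})")
    ```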

  8. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow, with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. As natural test cases, we apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  9. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Rahul Ravindrudu

    2004-12-19

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods were introduced in the report for performing I/O operations. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, so out-of-core algorithms must be designed as such from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
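
    A hedged sketch of the I/O pattern described above: in the right-looking variant, stage k reads and writes back the entire trailing submatrix panel by panel, so the total number of panel I/O operations grows quadratically (polynomially) with the number of panels. The function only counts operations; no factorization is performed.

    ```python
    def right_looking_panel_io(n_panels: int) -> int:
        """Count panel reads/writes for a right-looking out-of-core LU sketch."""
        ops = 0
        for k in range(n_panels):
            ops += 2                       # read + write back the diagonal panel
            ops += 2 * (n_panels - k - 1)  # read/update/write trailing submatrix
        return ops

    for n in (4, 8, 16, 32):
        print(n, "panels ->", right_looking_panel_io(n), "panel I/O ops")  # O(n^2)
    ```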

  10. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.
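
    For reference, in standard single-qubit randomized benchmarking the sequence-averaged survival probability decays as F(m) = A·p^m + B with sequence length m, and the average error per gate follows from p. A hedged sketch fitting that model to synthetic data (this illustrates the standard protocol, not the measurement-based adaptation of the paper):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        return A * p**m + B  # standard zeroth-order RB model

    m = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    rng = np.random.default_rng(1)
    data = rb_decay(m, 0.5, 0.985, 0.5) + rng.normal(0, 0.005, m.size)  # synthetic

    (A, p, B), _ = curve_fit(rb_decay, m, data, p0=(0.5, 0.98, 0.5))
    error_per_gate = (1 - p) / 2  # single-qubit (d=2) conversion
    print(f"fitted p = {p:.4f}, average error per gate ~ {error_per_gate:.2e}")
    ```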

  11. Computed results on the IAEA benchmark problems at JAERI

    International Nuclear Information System (INIS)

    The outline of the computer code system of JAERI for analysing research reactors is presented, and the results of check calculations to validate the code system are evaluated against experimental data. Using this computer code system, some of the IAEA benchmark problems are solved and the results are compared with those of ANL. (author)

  12. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  13. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    Science.gov (United States)

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  14. Computer-aided lens assembly.

    Science.gov (United States)

    Tomlinson, Richard; Alcock, Rob; Petzing, Jon; Coupland, Jeremy

    2004-01-20

    We propose a computer-aided method of lens manufacture that allows assembly, adjustment, and test phases to be run concurrently until an acceptable level of optical performance is reached. Misalignment of elements within a compound lens is determined by comparing the results of physical ray tracing, performed with an array of Gaussian laser beams, with numerically obtained geometric ray traces. An estimate of the misalignment errors is made, and individual elements are adjusted in an iterative manner until the performance criteria are achieved. The method is illustrated for the alignment of an air-spaced doublet. PMID:14765916
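
    A hedged sketch of the estimation step: if small decenters and tilts perturb the traced beam positions approximately linearly, the misalignment vector can be estimated by least squares from the difference between measured (physical) and modeled (numerical) spot positions. The sensitivity matrix and data here are invented placeholders, not the paper's actual procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical sensitivity matrix d(spot positions)/d(misalignments),
    # obtained in practice by perturbing the numerical ray-trace model.
    J = rng.normal(size=(12, 3))               # 12 beam coordinates, 3 params
    x_true = np.array([0.02, -0.01, 0.005])    # invented decenter/tilt errors
    residual = J @ x_true + rng.normal(0, 1e-4, 12)  # measured minus modeled

    x_est, *_ = np.linalg.lstsq(J, residual, rcond=None)
    print("estimated misalignment:", x_est)  # drives the next adjustment step
    ```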

  15. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    Science.gov (United States)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved, ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges of accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with comparisons to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows. Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a

  16. Computer organization and assembly language programming

    CERN Document Server

    Peterson, James L

    1978-01-01

    Computer Organization and Assembly Language Programming deals with lower-level computer programming (machine or assembly language) and how these are used in the typical computer system. The book explains the operations of the computer at the machine language level. The text reviews basic computer operations and organization, and deals primarily with the MIX computer system. The book describes assembly language programming techniques, such as defining appropriate data structures, determining the information for input or output, and the flow of control within the program. The text explains basic I/O

  17. Benchmarking Severe Accident Computer Codes for Heavy Water Reactor Applications

    International Nuclear Information System (INIS)

    Requests for severe accident investigations and assurance of mitigation measures have increased for operating nuclear power plants and the design of advanced nuclear power plants. Severe accident analysis investigations necessitate the analysis of the very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. The IAEA organizes coordinated research projects (CRPs) to facilitate technology development through international collaboration among Member States. The CRP on Benchmarking Severe Accident Computer Codes for HWR Applications was planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Group on Advanced Technologies for HWRs (the TWG-HWR). This publication summarizes the results from the CRP participants. The CRP promoted international collaboration among Member States to improve the phenomenological understanding of severe core damage accidents and the capability to analyse them. The CRP scope included the identification and selection of a severe accident sequence, selection of appropriate geometrical and boundary conditions, conduct of benchmark analyses, comparison of the results of all code outputs, evaluation of the capabilities of computer codes to predict important severe accident phenomena, and the proposal of necessary code improvements and/or new experiments to reduce uncertainties. Seven institutes from five countries with HWRs participated in this CRP

  18. Nuclear fuel assembly identification using computer vision

    International Nuclear Information System (INIS)

    This report describes an improved method of remotely identifying irradiated nuclear fuel assemblies. The method uses existing in-cell TV cameras to input an image of the notch-coded top of the fuel assemblies into a computer vision system, which then produces the identifying number for that assembly. This system replaces systems that use either a mechanical mechanism to feel the notches or use human operators to locate notches visually. The system was developed for identifying fuel assemblies from the Fast Flux Test Facility (FFTF) and the Clinch River Breeder Reactor, but could be used for other reactor assembly identification, as appropriate

  1. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    Science.gov (United States)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges of accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  2. Uncertainty Analysis for OECD-NEA-UAM Benchmark Problem of TMI-1 PWR Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K.W.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The quantification of code uncertainty is one of the main questions continually asked by regulatory bodies like KINS. Utilities and code developers address the issue case by case because the general answer to this question is still open. Under these circumstances, OECD-NEA has built global consensus on uncertainty quantification through the UAM benchmark program. OECD-NEA benchmark II-2 is a problem on the uncertainty quantification of subchannel codes: the uncertainties of the fuel temperature and the ONB location in the TMI-1 fuel assembly are to be estimated under transient and steady-state conditions. In this study, the uncertainty quantification of the MATRA code is performed on this problem. A workbench platform was developed to produce the large set of inputs needed to estimate the uncertainty on the benchmark problem, and direct Monte Carlo sampling was used to draw random samples from the input PDFs. Uncertainty analysis of the MATRA code on the OECD-NEA benchmark II-2 problem was then performed using the developed tool and the MATRA code: direct Monte Carlo sampling was used to extract 2000 random parameter sets, and the workbench program generated the input files and post-processed the calculation results. The uncertainty induced by the input parameters was estimated for the DNBR and for the cladding and coolant temperatures.
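
    A hedged sketch of direct Monte Carlo input sampling as described: draw the uncertain inputs from their PDFs, run the code once per sample, and form statistics of the outputs. The subchannel-code call and the distributions are placeholders, not the MATRA interface or the benchmark's actual input set.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 2000  # number of random samples, as in the study

    def run_subchannel_code(power_factor, inlet_temp_K):
        """Placeholder for one MATRA run; returns an invented DNBR response."""
        return 2.0 / power_factor - 0.002 * (inlet_temp_K - 560.0)

    power = rng.normal(1.00, 0.02, N)  # assumed power uncertainty PDF
    t_in = rng.normal(560.0, 2.0, N)   # assumed inlet temperature PDF

    dnbr = np.array([run_subchannel_code(p, t) for p, t in zip(power, t_in)])
    print(f"DNBR mean = {dnbr.mean():.3f}, std = {dnbr.std(ddof=1):.3f}, "
          f"2.5th percentile = {np.percentile(dnbr, 2.5):.3f}")
    ```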

  3. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    Fusion neutronics benchmark experiments have been performed for vanadium and a vanadium alloy by using a slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations using the evaluated nuclear data of JENDL-3.2, the JENDL Fusion File, and FENDL/E-1.0. (author)

  4. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of the Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (keff) of a model of a layer of irradiated BWR fuel assemblies in an array. In total, 22 benchmark problems are proposed for calculations of keff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and the axial burnup profile, and inclusion of an axial void-fraction profile or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of keff values calculated by the participants from the mean value is almost within the band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)
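
    A hedged sketch of the dispersion summary quoted above: given the participants' keff values, each result's relative deviation from the mean can be checked against the ±1% Δk/k band. The values below are invented.

    ```python
    import numpy as np

    keff = np.array([0.9123, 0.9157, 0.9104, 0.9139, 0.9188, 0.9116])  # invented
    rel_dev = (keff - keff.mean()) / keff.mean() * 100.0  # % from the mean

    for k, d in zip(keff, rel_dev):
        flag = "ok" if abs(d) <= 1.0 else "outside +/-1% band"
        print(f"keff = {k:.4f}  deviation = {d:+.2f}%  {flag}")
    ```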

  5. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  6. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF, and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on one hand, the global approach based on the concept of crack driving force J, and on the other hand, a local approach of ductile fracture. In this approach the crack initiation and growth are modeled by the nucleation, growth, and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA, developed by EDF; the CEA code CASTEM 2000; and the FRAMATOME code SYSTUS. The paper is split into three parts. First, the authors present the experimental procedure for high temperature toughness testing of two CT specimens taken from a welded pipe characteristic of pressurized water reactor primary piping. Second, considerations are outlined about the finite element analysis and the application procedure. A detailed description is given of the boundary and loading conditions, the mesh characteristics, the numerical scheme involved, and the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load

  7. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictive capability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially for 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k-infinity also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)

  8. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab

  9. Nuclear fuel assembly identification using computer vision

    International Nuclear Information System (INIS)

    A new method of identifying fuel assemblies has been developed. The method uses existing in-cell TV cameras to read the notch-coded handling sockets of Fast Flux Test Facility (FFTF) assemblies. A computer looks at the TV image, locates the notches, decodes the notch pattern, and produces the identification number. A TV camera is the only in-cell equipment required, thus avoiding complex mechanisms in the hot cell. Assemblies can be identified in any location where the handling socket is visible from the camera. Other advantages include low cost, rapid identification, low maintenance, and ease of use

  10. Beyond the NAS Parallel Benchmarks: Measuring Dynamic Program Performance and Grid Computing Applications

    Science.gov (United States)

    VanderWijngaart, Rob F.; Biswas, Rupak; Frumkin, Michael; Feng, Huiyu; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The contents include: 1) A brief history of NPB; 2) What is (not) being measured by NPB; 3) Irregular dynamic applications (UA Benchmark); and 4) Wide area distributed computing (NAS Grid Benchmarks-NGB). This paper is presented in viewgraph form.

  11. Computation of flow through a block assembly

    International Nuclear Information System (INIS)

    A simple procedure is presented for the computation of flow through gaps in an assembly block. This procedure enables the estimation of bypass flows through the reflector of a gas cooled reactor. The method is based on a simplified channel-network representation of the gap configuration. Using a computer program, the procedure was applied for verification against an experimental model. The results of the computation were in good agreement with the experimental data. A typical three dimensional model of a gas cooled reflector was also computed. (authors) 2 refs, 3 figs

  12. Solutions to NEANSC benchmark problems on 'Power Distribution within Assemblies (PDWA)' using the SRAC and GMVP

    International Nuclear Information System (INIS)

    The advancement and diversification of PWR cores through the introduction of MOX fuel, burnable poisons, and so on increase the heterogeneity within a core or an assembly. For the evaluation of the pin power distribution, fine-mesh flux reconstruction is required, combining an assembly calculation with a coarse-mesh three-dimensional core calculation, instead of the combination of a fine-mesh two-dimensional X-Y core calculation and a one-dimensional axial core calculation used for conventional PWR cores. The main purpose of the NEANSC benchmark problems entitled 'Power Distribution within Assemblies' is to compare techniques for fine-mesh flux reconstruction based on coarse-mesh core calculations. In this report, we examine the validity of the reconstruction technique based on a coarse-mesh core calculation using spline functions, together with the assembly calculation and the heterogeneous fine-mesh core calculation performed by built-in programs of the SRAC code, using groupwise Monte Carlo calculations with the GMVP code as the reference. (author)
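
    A hedged sketch of the reconstruction idea: interpolate the coarse-mesh (node-averaged) flux with a smooth spline, then modulate it with heterogeneous form factors from the assembly calculation. All arrays are invented; the SRAC/GMVP specifics are not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Coarse-mesh node-centered flux from a 3-D core solver (invented values)
    xn = yn = np.linspace(0.5, 3.5, 4)  # node centers of a 4x4 coarse mesh
    coarse_flux = 1.0 + 0.1 * np.add.outer(np.sin(xn), np.cos(yn))

    smooth = RectBivariateSpline(xn, yn, coarse_flux, kx=3, ky=3)  # spline fit

    # Fine (pin-level) grid and heterogeneous form factors from the assembly
    # lattice calculation (invented here)
    xf = yf = np.linspace(0.5, 3.5, 16)
    form = 1.0 + 0.05 * np.random.default_rng(0).normal(size=(16, 16))

    pin_power = smooth(xf, yf) * form  # reconstructed fine-mesh distribution
    print(pin_power.shape, float(pin_power.mean()))
    ```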

  13. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)

  14. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes, and the experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  15. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    Energy Technology Data Exchange (ETDEWEB)

    Orii, Shigeo [Japan Atomic Energy Research Inst., Tokyo (Japan)

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main factors suppressing parallel performance are found to be the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
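
    The Level 2 style of explanation can be sketched with a simple latency-bandwidth model: each message costs a fixed start-up time plus size over bandwidth, so when the message count grows with both processors and particles, the start-up term comes to dominate. All parameters below are invented.

    ```python
    def comm_time(n_msgs, bytes_per_msg, startup_s=20e-6, bandwidth_Bps=100e6):
        """Latency-bandwidth model: t = n * (t_startup + size / bandwidth)."""
        return n_msgs * (startup_s + bytes_per_msg / bandwidth_Bps)

    n_particles = 100_000
    for procs in (2, 8, 32):
        n_msgs = procs * n_particles // 1000  # invented scaling with both terms
        print(f"{procs:2d} procs: comm time ~ {comm_time(n_msgs, 256):.3f} s")
    ```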

  16. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    The internal representation of numerical data, and the speed with which it is manipulated to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links, are essential aspects of all high performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. Hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22 and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
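
    A hedged sketch of the cache-blocking technique mentioned above: traversing a matrix product in b x b blocks so that each operand block, once loaded, is reused many times while still resident in cache. Pure Python for clarity; production codes block optimized kernels instead.

    ```python
    def blocked_matmul(A, B, n, b=32):
        """Multiply two n x n matrices (lists of lists) in b x b cache blocks."""
        C = [[0.0] * n for _ in range(n)]
        for ii in range(0, n, b):
            for kk in range(0, n, b):
                for jj in range(0, n, b):
                    # The three operand blocks fit in cache and are reused here
                    for i in range(ii, min(ii + b, n)):
                        for k in range(kk, min(kk + b, n)):
                            aik = A[i][k]
                            for j in range(jj, min(jj + b, n)):
                                C[i][j] += aik * B[k][j]
        return C
    ```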

  17. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  18. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, J.C.

    2001-08-02

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses.
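
    A hedged sketch of the margin bookkeeping involved: with keff computed once using the major actinides only and once with fission products and minor actinides included, the additional margin is the reactivity difference between the two. Both keff values below are invented.

    ```python
    k_major = 0.9852  # keff, major actinides only (invented)
    k_full = 0.9398   # keff, with fission products + minor actinides (invented)

    rho = lambda k: (k - 1.0) / k        # reactivity of a configuration
    margin = rho(k_major) - rho(k_full)  # additional margin from FP + MA worth
    print(f"additional reactivity margin ~ {margin * 1e5:.0f} pcm")
    ```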

  19. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  1. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  2. Models of natural computation: gene assembly and membrane systems

    NARCIS (Netherlands)

    Brijder, Robert

    2008-01-01

    This thesis is concerned with two research areas in natural computing: the computational nature of gene assembly and membrane computing. Gene assembly is a process occurring in unicellular organisms called ciliates. During this process genes are transformed through cut-and-paste operations. We stud

  3. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduces a performance evaluation approach for computer communication systems based on simulation and measurement technology, and discusses its evaluation models. The results of our experiment showed that the outcome of practical measurement on an Ether-LAN fitted well with the theoretical analysis. The approach we present can be used to define various kinds of artificially simulated load models conveniently, to build many kinds of network application environments in a flexible way, and to exploit fully both the wide applicability and high precision of traditional simulation technology and the realism, reliability, and adaptability of measurement technology.

  4. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  5. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., the JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  6. COSA II Further benchmark exercises to compare geomechanical computer codes for salt

    International Nuclear Information System (INIS)

    Project COSA (COmputer COdes COmparison for SAlt) was a benchmarking exercise involving the numerical modelling of the geomechanical behaviour of heated rock salt. Its main objective was to assess the current European capability to predict the geomechanical behaviour of salt, in the context of the disposal of heat-producing radioactive waste in salt formations. Twelve organisations participated in the exercise, in which their solutions to a number of benchmark problems were compared. The project was organised in two distinct phases: the first, from 1984-1986, concentrated on the verification of the computer codes; the second, from 1986-1988, progressed to validation, using three in-situ experiments at the Asse research facility in West Germany as a basis for comparison. This document reports the activities of the second phase of the project and presents the results, assessments and conclusions

  9. Experimental study of the neutronics of the first gas cooled fast reactor benchmark assembly (GCFR phase I assembly)

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, S.K.

    1976-12-01

    The Gas Cooled Fast Reactor (GCFR) Phase I Assembly is the first in a series of ZPR-9 critical assemblies designed to provide a reference set of reactor physics measurements in support of the 300 MW(e) GCFR Demonstration Plant designed by General Atomic Company. The Phase I Assembly was the first complete mockup of a GCFR core ever built. A set of basic reactor physics measurements was performed in the assembly to characterize its neutronics and to assess the impact of neutron streaming on the various integral parameters. The analysis of the experiments was carried out using ENDF/B-IV based data and two-dimensional diffusion theory methods. The Benoist method of directional diffusion coefficients was used to treat the anisotropic effects of neutron streaming within the framework of diffusion theory. Calculated predictions of most integral parameters in the GCFR showed the same kinds of agreement with experiment as in earlier LMFBR assemblies.

  10. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  11. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  12. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Science.gov (United States)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected
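
    The data testing described here boils down to comparing calculated eigenvalues (C) against evaluated experimental values (E) over a large benchmark suite. A minimal sketch of that post-processing follows; the three cases and all numbers are invented for illustration, though the naming convention mimics the ICSBEP handbook.

```python
import math

# Illustrative criticality data testing: each benchmark contributes a
# calculated eigenvalue (C) and an evaluated experimental value (E).
# Real suites such as the ICSBEP handbook contain ~1000 such cases.
cases = [
    {"name": "HEU-MET-FAST-001",   "calc": 0.99966, "expt": 1.0000, "sigma": 0.0010},
    {"name": "PU-SOL-THERM-011",   "calc": 1.00421, "expt": 1.0000, "sigma": 0.0052},
    {"name": "LEU-COMP-THERM-008", "calc": 0.99902, "expt": 1.0007, "sigma": 0.0016},
]

ce = [c["calc"] / c["expt"] for c in cases]
mean_ce = sum(ce) / len(ce)
rms_dev = math.sqrt(sum((x - mean_ce) ** 2 for x in ce) / len(ce))
print(f"mean C/E = {mean_ce:.5f}, rms deviation = {rms_dev:.5f}")

# Cases lying more than 3 experimental sigma from C/E = 1 flag possible
# deficiencies in the underlying cross sections.
for c, x in zip(cases, ce):
    if abs(x - 1.0) > 3 * c["sigma"]:
        print("outlier:", c["name"])
```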

  13. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    International Nuclear Information System (INIS)

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 418 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U capture. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for

  14. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. C. [Los Alamos National Laboratory (LANL); MacFarlane, R. E. [Los Alamos National Laboratory (LANL); Mosteller, R. D. [Los Alamos National Laboratory (LANL); Kiedrowski, B. C. [Los Alamos National Laboratory (LANL); Frankle, S. C. [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); McKnight, R. D. [Argonne National Laboratory (ANL); Lell, R. M. [Argonne National Laboratory (ANL); Palmiotti, G. [Idaho National Laboratory (INL); Hiruta, H. [Idaho National Laboratory (INL); Herman, Michael W. [Brookhaven National Laboratory (BNL); Arcilla, R. [Brookhaven National Laboratory (BNL); Mughabghab, S. F. [Brookhaven National Laboratory (BNL); Sublet, J. C. [Culham Science Centre, Abingdon, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T. H. [Knolls Atomic Power Laboratory; Dunn, Michael E. [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U, (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical

  15. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also

  16. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be used efficiently for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built from single board computers (SBCs), which offer relatively high computation capacity compared to their price and power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely the Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster, and the performance of the cluster is tested, too.
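
    Once per-board timings are in hand, the cluster comparison reduces to simple speedup and efficiency arithmetic. The sketch below shows that reduction; all wall-clock numbers and the node count are hypothetical placeholders, not measurements from the article.

```python
# Hypothetical wall-clock seconds for the same PDES model run on one board
# versus an N-node heterogeneous cluster built from the boards.
single_board_s = {"Odroid-XU3 Lite": 412.0, "Raspberry Pi 2 Model B+": 685.0}
cluster_s = 97.0       # cluster runtime (hypothetical)
cluster_nodes = 10     # number of SBCs in the cluster (hypothetical)

for board, t in single_board_s.items():
    speedup = t / cluster_s
    print(f"{board}: cluster speedup {speedup:.1f}x, "
          f"parallel efficiency {speedup / cluster_nodes:.0%}")
```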

  17. On computational properties of gene assembly in ciliates

    Directory of Open Access Journals (Sweden)

    Vladimir Rogojin

    2010-11-01

    Gene assembly in stichotrichous ciliates, which takes place during sexual reproduction, is one of the most involved DNA manipulation processes occurring in biology. This biological process is of high interest from the computational and mathematical points of view due to its close analogy with such concepts and notions in theoretical computer science as permutation and linked-list sorting and string rewriting. Studies on the computational properties of gene assembly in ciliates are a good example of interdisciplinary research contributing to both computer science and biology. We review here a number of general results related both to the development of different computational methods enhancing our understanding of the nature of gene assembly, and to the development of new biologically motivated computational and mathematical models and paradigms. Those paradigms contribute in particular to combinatorics, formal languages and computability theories.
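
    As a toy illustration of the permutation-sorting analogy mentioned above (and not of the actual molecular operations studied in the paper), the following breadth-first search finds the minimum number of signed reversals needed to sort a small signed permutation, where each reversal flips both the order and the signs of a contiguous segment:

```python
from collections import deque

def sort_by_reversals(perm):
    """Minimum number of signed reversals to reach the sorted identity."""
    target = tuple(sorted(abs(x) for x in perm))  # e.g. (1, 2, 3), all positive
    start = tuple(perm)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, steps = queue.popleft()
        if state == target:
            return steps
        # Try every contiguous segment reversal (order and signs flipped).
        for i in range(len(state)):
            for j in range(i, len(state)):
                seg = tuple(-x for x in reversed(state[i:j + 1]))
                nxt = state[:i] + seg + state[j + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))

print(sort_by_reversals((3, -1, 2)))  # minimal reversal count for a tiny instance
```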

  18. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will certainly play a significant role in the success of enterprise informationization, and 3-dimensional design will promote 3-dimensional process planning. This article analyses the current situation and problems of assembly process planning, presents a 3-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera-switching algorithm. Experiments were done to validate the feasibility of these techniques and algorithms, which established the foundation for 3-dimensional computer-aided process planning.

  19. Benchmark and partial validation testing of the FLASH computer code, Version 3.0

    Energy Technology Data Exchange (ETDEWEB)

    Martian, P.; Smith, C.S.

    1993-09-01

    This document presents methods and results of benchmark testing (i.e., code-to-code comparisons) and partial validation testing (i.e., tests which compare field data to the computer-generated solutions) of the FLASH computer code, Version 3.0, which were conducted to determine if the code is ready for performance assessment studies of the Radioactive Waste Management Complex. Three test problems are presented that were designed to check computational efficiency, the accuracy of the numerical algorithms, and the capability of the code to simulate diverse hydrological conditions. These test problems specifically test the code's ability to simulate (a) seasonal infiltration in response to meteorological conditions, (b) changing watertable elevations due to a transient areal source of water (i.e., influx from spreading basins), and (c) infiltration into fractured basalt as a result of seasonal water in drainage ditches. The FLASH simulations generally compared well with the benchmark codes, indicating good stability and acceptable computational efficiency while simulating a wide range of conditions. The code appears operational for modeling both unsaturated and saturated flow in fractured, heterogeneous porous media. However, the code failed to converge when an unsaturated-to-saturated transition occurred. Consequently, the code should not be used when this condition occurs or is expected to occur, i.e., when perched water is present or when infiltration rates exceed the saturated conductivity of the soil.

  20. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, and current benchmarks suffer from important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e., across samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  1. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources, an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria which determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests which will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  2. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas;

    2015-01-01

    When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preserving a rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling achieves (i) the formation of cell assemblies and (ii) enhances the diversity of neural dynamics, facilitating the learning of complex calculations. Due to synaptic scaling the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows executing a difficult, six degrees of freedom, manipulation task with a robot where assemblies need to learn
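
    A minimal rate-based sketch can make the interplay of the two mechanisms concrete: fast Hebbian growth between co-active units, plus a slower multiplicative scaling of each neuron's total input. All parameters below are arbitrary toy values, and the model is far simpler than the network studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
w = rng.uniform(0.0, 0.1, size=(n, n))
np.fill_diagonal(w, 0.0)
target = 1.0               # desired total input per neuron
eta, gamma = 0.05, 0.005   # fast Hebbian rate, slow scaling rate

pattern = (np.arange(n) < 8).astype(float)  # repeatedly co-activate neurons 0..7

for _ in range(2000):
    rate = pattern + 0.05 * rng.standard_normal(n)
    w += eta * np.outer(rate, rate)          # Hebbian: fire together, wire together
    np.fill_diagonal(w, 0.0)
    # Slow multiplicative scaling pulls each row-sum back toward the target,
    # preventing runaway growth while preserving relative weight structure.
    w += gamma * (target - w.sum(axis=1, keepdims=True)) * w
    w = np.clip(w, 0.0, None)

# Weights inside the stimulated group end up much stronger than elsewhere,
# i.e. a cell assembly has formed without the weights diverging.
print(f"mean weight inside assembly {w[:8, :8].mean():.3f} "
      f"vs outside {w[8:, 8:].mean():.3f}")
```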

  3. DNA Computing by Self-Assembly

    OpenAIRE

    Winfree, Erik

    2003-01-01

    Information and algorithms appear to be central to biological organization and processes, from the storage and reproduction of genetic information to the control of developmental processes to the sophisticated computations performed by the nervous system. Much as human technology uses electronic microprocessors to control electromechanical devices, biological organisms use biochemical circuits to control molecular and chemical events. The engineering and programming of bioch...

  4. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto

    2015-01-01

    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong-scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 using up to 10^5 cores. The GEP procedure is decomposed into two subprocedures: the reducer to a standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. A detailed analysis of the results implies that the reducer can become a bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
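
    The reducer/solver split described above can be shown in a few lines at serial scale. The sketch below reduces a generalized eigenvalue problem A x = λ B x to a standard one via a Cholesky factorization of B, solves it, and cross-checks against SciPy's one-shot generalized driver; the distributed libraries (ScaLAPACK, ELPA, EigenExa) implement the same split for matrices spread over many nodes.

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # symmetric
B = rng.standard_normal((n, n))
B = B @ B.T + n * np.eye(n)           # symmetric positive definite

# Reducer: with B = L L^T, the GEP becomes (L^-1 A L^-T) y = lambda y.
L = cholesky(B, lower=True)
C = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True)
C = (C + C.T) / 2                     # symmetrize against round-off

# Solver: standard symmetric eigenproblem, then back-transform x = L^-T y.
lam, y = np.linalg.eigh(C)
x = solve_triangular(L.T, y, lower=False)

# Cross-check against the library's one-shot generalized driver.
lam_ref = eigh(A, B, eigvals_only=True)
print(np.allclose(lam, lam_ref))      # True
```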

  5. Neutronics benchmark for the Quad Cities-1 (Cycle 2) mixed oxide assembly irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, S.E.; Difilippo, F.C.

    1998-04-01

    Reactor physics computer programs are important tools that will be used to estimate mixed oxide (MOX) fuel physics performance in support of weapons-grade plutonium disposition in US and Russian Federation reactors. Many of the computer programs used today have not undergone calculational comparisons to measured data obtained during reactor operation. Pin power, buildup of transuranics, and gadolinium depletion measurements were conducted (under Electric Power Research Institute sponsorship) on uranium and MOX pins irradiated in the Quad Cities-1 reactor in the 1970s. These measurements are compared to modern computational models using the HELIOS and SCALE computer codes. Good agreement on pin powers was obtained for both MOX and uranium pins. The agreement between measured and calculated values of transuranic isotopes was mixed, depending on the particular isotope.

  6. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MBD). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high-radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computationally intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.
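
    The embarrassingly parallel pattern CRBLASTER exploits is easy to sketch with mpi4py: rank 0 splits an image into slabs, each rank processes its slab independently, and the pieces are gathered back. The clipping "filter" below is a trivial stand-in for a real cosmic-ray rejection kernel such as L.A.Cosmic (which would also need slab-boundary overlap handling, omitted here).

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Synthetic Poisson-noise "image" standing in for astronomical data.
    image = np.random.default_rng(0).poisson(100.0, size=(512, 512)).astype(float)
    slabs = np.array_split(image, size, axis=0)   # one horizontal slab per rank
else:
    slabs = None

slab = comm.scatter(slabs, root=0)

# "Process" the slab independently -- here by clipping bright outlier pixels.
med = np.median(slab)
cleaned = np.minimum(slab, med + 5.0 * np.sqrt(med))

pieces = comm.gather(cleaned, root=0)
if rank == 0:
    result = np.vstack(pieces)
    print("assembled image:", result.shape)

# Run with e.g.: mpiexec -n 4 python crblaster_sketch.py
```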

  7. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that, if available, they would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark, since its two lowest-lying 1A1 states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
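
    The role of the initial state can be illustrated numerically: in (iterative) phase estimation, the probability of reading out a given eigenvalue equals the squared overlap of the initial register state with the corresponding eigenstate. The toy below uses a random 6x6 Hermitian matrix as a stand-in for a small FCI Hamiltonian; it is not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(42)
H = rng.standard_normal((6, 6))
H = (H + H.T) / 2                      # Hermitian, like a small FCI matrix

energies, states = np.linalg.eigh(H)

guess = np.zeros(6)
guess[0] = 1.0                         # e.g. a single-determinant (HF-like) guess

overlaps = np.abs(states.T @ guess) ** 2   # readout probability per eigenstate
for e, p in zip(energies, overlaps):
    print(f"E = {e:+.3f}   P(read out this E) = {p:.3f}")

# If the overlap with the desired state is small (a multireference situation,
# as for the lowest 1A1 states of CH2), a better-adapted initial state is
# needed before the amplification regime of the algorithm is reached.
```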

  8. An Easily Assembled Laboratory Exercise in Computed Tomography

    Science.gov (United States)

    Mylott, Elliot; Klepetka, Ryan; Dunlap, Justin C.; Widenhorn, Ralf

    2011-01-01

    In this paper, we present a laboratory activity in computed tomography (CT) primarily composed of a photogate and a rotary motion sensor that can be assembled quickly and partially automates data collection and analysis. We use an enclosure made with a light filter that is largely opaque in the visible spectrum but mostly transparent to the near…

  9. Tolerance Verification of an Industrial Assembly using Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo; Regi, Francesco

    2016-01-01

    This paper reports on results of tolerance verification of a multi-material assembly using Computed Tomography (CT). The workpiece comprises three parts which are made of different materials. Five different measurands were inspected. The calculation of measurement uncertainties was attempted...

  10. A critical assembly designed to measure neutronic benchmarks in support of the space nuclear thermal propulsion program

    Science.gov (United States)

    Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.; Selcow, Elizabeth C.; Cerbone, Ralph J.

    1993-01-01

    A reactor designed to perform criticality experiments in support of the Space Nuclear Thermal Propulsion program is currently in operation at the Sandia National Laboratories' reactor facility. The reactor is a small, water-moderated system that uses highly enriched uranium particle fuel in a 19-element configuration. Its purpose is to obtain neutronic measurements under a variety of experimental conditions that are subsequently used to benchmark reactor-design computer codes. Brookhaven National Laboratory, Babcock & Wilcox, and Sandia National Laboratories participated in determining the reactor's performance requirements and design, in follow-on experimentation, and in obtaining the licensing approvals. Brookhaven National Laboratory is primarily responsible for the analytical support, Babcock & Wilcox the hardware design, and Sandia National Laboratories the operational safety. All of the team members participate in determining the experimentation requirements, performance, and data reduction. Initial criticality was achieved in October 1989. An overall description of the reactor is presented along with key design features and safety-related aspects.

  11. Computationally designed peptides for self-assembly of nanostructured lattices.

    Science.gov (United States)

    Zhang, Huixi Violet; Polzer, Frank; Haider, Michael J; Tian, Yu; Villegas, Jose A; Kiick, Kristi L; Pochan, Darrin J; Saven, Jeffery G

    2016-09-01

    Folded peptides present complex exterior surfaces specified by their amino acid sequences, and the control of these surfaces offers high-precision routes to self-assembling materials. The complexity of peptide structure and the subtlety of noncovalent interactions make the design of predetermined nanostructures difficult. Computational methods can facilitate this design and are used here to determine 29-residue peptides that form tetrahelical bundles that, in turn, serve as building blocks for lattice-forming materials. Four distinct assemblies were engineered. Peptide bundle exterior amino acids were designed in the context of three different interbundle lattices in addition to one design to produce bundles isolated in solution. Solution assembly produced three different types of lattice-forming materials that exhibited varying degrees of agreement with the chosen lattices used in the design of each sequence. Transmission electron microscopy revealed the nanostructure of the sheetlike nanomaterials. In contrast, the peptide sequence designed to form isolated, soluble, tetrameric bundles remained dispersed and did not form any higher-order assembled nanostructure. Small-angle neutron scattering confirmed the formation of soluble bundles with the designed size. In the lattice-forming nanostructures, the solution assembly process is robust with respect to variation of solution conditions (pH and temperature) and covalent modification of the computationally designed peptides. Solution conditions can be used to control micrometer-scale morphology of the assemblies. The findings illustrate that, with careful control of molecular structure and solution conditions, a single peptide motif can be versatile enough to yield a wide range of self-assembled lattice morphologies across many length scales (1 to 1000 nm). PMID:27626071

  12. Intrinsic universality and the computational power of self-assembly.

    Science.gov (United States)

    Woods, Damien

    2015-07-28

    Molecular self-assembly, the formation of large structures by small pieces of matter sticking together according to simple local interactions, is a ubiquitous phenomenon. A challenging engineering goal is to design a few molecules so that large numbers of them can self-assemble into desired complicated target objects. Indeed, we would like to understand the ultimate capabilities and limitations of this bottom-up fabrication process. We look to theoretical models of algorithmic self-assembly, where small square tiles stick together according to simple local rules in order to carry out a crystal growth process. In this survey, we focus on the use of simulation between such models to classify and separate their computational and expressive powers. Roughly speaking, one model simulates another if they grow the same structures, via the same dynamical growth processes. Our journey begins with the result that there is a single intrinsically universal tile set that, with appropriate initialization and spatial scaling, simulates any instance of Winfree's abstract Tile Assembly Model. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system in a very direct way. From there we find that there is no such tile set in the more restrictive non-cooperative model, proving it weaker than the full Tile Assembly Model. In the two-handed model, where large structures can bind together in one step, we encounter an infinite set of infinite hierarchies of strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable flipable polygonal tile that simulates any tile assembly system. We find another tile that aperiodically tiles the plane (but with small gaps). These and other recent results show that simulation is giving rise to a kind of computational complexity theory for self-assembly. It seems this could be the beginning of a much longer journey
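
    The abstract Tile Assembly Model at the heart of these results is compact enough to sketch directly. The toy below uses a single filler tile type at temperature 2, where attachment requires two cooperating strength-1 glue matches; it fills the quadrant above an L-shaped seed in diagonal waves. It illustrates cooperative binding only, not the universal tile sets discussed in the survey.

```python
# Toy abstract Tile Assembly Model (aTAM): a tile attaches to the growing
# assembly when the summed strength of matched glues on already-placed
# neighbours reaches the temperature TAU. With TAU = 2 and strength-1 glues,
# attachment needs two cooperating neighbours -- the cooperativity that gives
# the aTAM its computational power.

TAU = 2
# A tile is (north, east, south, west) glue labels, each of strength 1.
FILLER = ("x", "y", "x", "y")   # matches seed-row "x" below and seed-column "y" left

def grow(width=8, height=8):
    assembly = {}
    for i in range(width):       # seed row along the bottom, north glue "x"
        assembly[(i, 0)] = ("x", None, None, None)
    for j in range(height):      # seed column along the left, east glue "y"
        assembly[(0, j)] = (None, "y", None, None)
    changed = True
    while changed:
        changed = False
        for x in range(1, width):
            for y in range(1, height):
                if (x, y) in assembly:
                    continue
                strength = 0
                south = assembly.get((x, y - 1))
                west = assembly.get((x - 1, y))
                if south and south[0] == FILLER[2]:  # south neighbour's north glue
                    strength += 1
                if west and west[1] == FILLER[3]:    # west neighbour's east glue
                    strength += 1
                if strength >= TAU:                  # cooperative attachment
                    assembly[(x, y)] = FILLER
                    changed = True
    return assembly

print(len(grow()), "tiles placed")   # 64: the full 8x8 quadrant, seeds included
```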

  13. Computational design of co-assembling protein-DNA nanowires

    Science.gov (United States)

    Mou, Yun; Yu, Jiun-Yann; Wannier, Timothy M.; Guo, Chin-Lin; Mayo, Stephen L.

    2015-09-01

    Biomolecular self-assemblies are of great interest to nanotechnologists because of their functional versatility and their biocompatibility. Over the past decade, sophisticated single-component nanostructures composed exclusively of nucleic acids, peptides and proteins have been reported, and these nanostructures have been used in a wide range of applications, from drug delivery to molecular computing. Despite these successes, the development of hybrid co-assemblies of nucleic acids and proteins has remained elusive. Here we use computational protein design to create a protein-DNA co-assembling nanomaterial whose assembly is driven via non-covalent interactions. To achieve this, a homodimerization interface is engineered onto the Drosophila Engrailed homeodomain (ENH), allowing the dimerized protein complex to bind to two double-stranded DNA (dsDNA) molecules. By varying the arrangement of protein-binding sites on the dsDNA, an irregular bulk nanoparticle or a nanowire with single-molecule width can be spontaneously formed by mixing the protein and dsDNA building blocks. We characterize the protein-DNA nanowire using fluorescence microscopy, atomic force microscopy and X-ray crystallography, confirming that the nanowire is formed via the proposed mechanism. This work lays the foundation for the development of new classes of protein-DNA hybrid materials. Further applications can be explored by incorporating DNA origami, DNA aptamers and/or peptide epitopes into the protein-DNA framework presented here.

  14. On computational and behavioral evidence regarding Hebbian transcortical cell assemblies.

    OpenAIRE

    Spivey, M. J.; Andrews, M. W.; Richardson, D. C.

    1999-01-01

    Pulvermuller restricts himself to an unnecessarily narrow range of evidence to support his claims. Evidence from neural modeling and behavioral experiments provides further support for an account of words encoded as transcortical cell assemblies. A cognitive neuroscience of language must include a range of methodologies (e.g., neural, computational, and behavioral) and will need to focus on the on-line processes of real-time language processing in more natural contexts.

  15. COMPUTER-AIDED BLOCK ASSEMBLY PROCESS PLANNING IN SHIPBUILD-ING BASED ON RULE-REASONING

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhiying; LI Zhen; JIANG Zhibin

    2008-01-01

    Computer-aided block assembly process planning based on rule reasoning is developed in order to improve assembly efficiency and to automate the generation of block assembly process plans in shipbuilding. First, a weighted directed liaison graph (WDLG) is proposed to model the block assembly process according to the characteristics of the assembly relations, and an edge list (EL) is used to describe assembly sequences. The shapes and assembly attributes of block parts are analyzed to determine the assembly positions and mating parts of frequently used parts. Then a series of assembly rules is generalized, and assembly sequences for a block are obtained by means of rule reasoning. Finally, a prototype system for computer-aided block assembly process planning is built. The system has been tested on an actual block, and the results were found to be quite efficient. Meanwhile, the foundation for the automation of block assembly process generation and integration with other systems is established.
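
    A simplified way to see the sequence-generation step: once rules have produced precedence constraints between parts, any topological order of the resulting directed graph is a feasible assembly sequence. The part names and constraints below are hypothetical, and a real system like the WDLG also weights edges to prefer some sequences; that refinement is omitted here.

```python
from collections import defaultdict, deque

# Hypothetical precedence constraints: "A must be in place before B".
precedence = [
    ("bottom_plate", "side_shell"),
    ("bottom_plate", "longitudinal_girder"),
    ("longitudinal_girder", "transverse_web"),
    ("side_shell", "transverse_web"),
    ("transverse_web", "deck_plate"),
]

graph, indeg, parts = defaultdict(list), defaultdict(int), set()
for before, after in precedence:
    graph[before].append(after)
    indeg[after] += 1
    parts.update((before, after))

# Kahn's algorithm: repeatedly emit a part with no unmet prerequisites.
queue = deque(sorted(p for p in parts if indeg[p] == 0))
sequence = []
while queue:
    p = queue.popleft()
    sequence.append(p)
    for nxt in graph[p]:
        indeg[nxt] -= 1
        if indeg[nxt] == 0:
            queue.append(nxt)

print(" -> ".join(sequence))  # one feasible block assembly sequence
```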

  16. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control of sequencing and assembly, and several protocols have been developed to detect contamination. Visualizations of genomic data have been generated in various formats, in addition to alignment, homology detection and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in an improvement in N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulence genes down to single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.
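
    The N50 statistic quoted above is simple to compute: it is the contig length L such that contigs of length ≥ L together cover at least half of the total assembly. A minimal implementation (with made-up contig lengths) follows.

```python
def n50(contig_lengths):
    """Length L such that contigs >= L cover at least half the assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

# Made-up contig lengths: 6000 + 5000 = 11000 >= 20000/2, so N50 = 5000.
print(n50([2000, 3000, 4000, 5000, 6000]))  # 5000
```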

  17. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, including four dairy processes: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of process or plant detail: 1) plant level, 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases, established by reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011 and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy tool from the LBNL website. It is expected that the use of BEST-Dairy tool will advance understanding of energy and

  18. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on the pressure and temperature in the reactor and on the composition and flow rate of the influent.
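
    A drastically simplified 1-D marching sketch conveys the kind of balance such a model solves: oxygen transfers from the gas to the liquid phase and is consumed by oxidation of volatiles. Every coefficient below is a hypothetical placeholder chosen only to draw a profile; the actual VRA model couples two-phase flow in porous media and is far richer.

```python
# Toy 1-D axial balance: gas-phase O2 transfers to the liquid (rate kla per
# unit length, driven by departure from equilibrium) and dissolved O2 is
# consumed by a first-order oxidation reaction. All values are hypothetical.
kla = 0.8        # gas->liquid mass-transfer coefficient [1/m]
krxn = 1.5       # consumption by volatile oxidation [1/m]
henry = 30.0     # gas/liquid equilibrium ratio (dimensionless)

L, n = 1.0, 1000                 # reactor length [m], grid points
dz = L / n
cg, cl = 1.0, 0.0                # inlet gas-phase and liquid-phase O2 (normalized)

for _ in range(n):               # explicit Euler march along the reactor
    transfer = kla * (cg / henry - cl)   # driving force toward equilibrium
    cg -= transfer * henry * dz          # gas loses what the liquid gains
    cl += (transfer - krxn * cl) * dz

print(f"outlet gas-phase O2 fraction: {cg:.3f}")
```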

  19. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  20. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  1. Summary of results for the uranium benchmark problem of the ANS Ad Hoc Committee on Reactor Physics Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Parish, T.A. [Texas A and M Univ., College Station, TX (United States). Nuclear Engineering Dept.; Mosteller, R.D. [Los Alamos National Lab., NM (United States); Diamond, D.J. [Brookhaven National Lab., Upton, NY (United States); Gehin, J.C. [Oak Ridge National Lab., TN (United States)

    1998-12-31

    This paper presents a summary of the results obtained by all of the contributors to the Uranium Benchmark Problem of the ANS Ad Hoc Committee on Reactor Physics Benchmarks. The benchmark problem was based on critical experiments which mocked up lattices typical of PWRs. Three separate cases constituted the benchmark problem: a uniform lattice, an assembly-type lattice with water holes, and an assembly-type lattice with Pyrex rods. Calculated results were obtained from eighteen separate organizations from all over the world. Some organizations submitted more than one set of results based on different calculational methods and cross-section data. Many of the most widely used assembly physics and core analysis computer codes and neutron cross-section data libraries were applied by the contributors.

  2. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    Science.gov (United States)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, it was unclear how best to simulate the effects of the experimental transition strip and the associated high-Reynolds-number effects. Furthermore, capturing the spanwise variation proved difficult.

  3. PATHWAY ASSEMBLY ASSISTED BY COMPUTER: TEACHING ANAEROBIC GLYCOLYSIS

    Directory of Open Access Journals (Sweden)

    F. M Sarraipa

    2008-05-01

    Full Text Available The  knowledge on  metabolic pathways  is required in  the higher education courses on biological  field.  This  work  presents  a  computer assisted  approach for metabolic pathways self study, based  on their assembly, reaction-by-reaction.  Anaerobic glycolysis was used as a model.  The software was designed to users who have basic knowledge on enzymatic catalysis,  and  to be used with or without teacher’s help. Every reaction is detailed, and the student can move forward only after having assembled each reaction correctly. The software  contains a tutorial  to help users both  on its use, and  on the  correct assembly of each reaction.  The software was field tested  in the basics biochemistry disciplines to the students of Physical Education, Nursing, Medicine and Biology from the State University of Campinas  –  UNICAMP, and in the physiology discipline to the students of Physical Education from the Institute Adventist Sao Paulo – IASP. A database using MySQL was structured to collect data on the software using . Every action taken by the students were recorded. The statistical analysis showed that the number of tries decreases as the students move forward on the pathway assembly. The most difficult reaction besides the first one, were the ones that presented  pattern changes, for example, the sixth reaction was the  first oxidation-reduction reaction. In the first reaction the most frequent mistakes  were using the phosphohexose isomerase as enzyme or having forgotten to include ATP among the substrates. In the sixth reaction the most frequent mistakes was having forgotten to include NAD+ among the substrates. The recorded data analysis can be used by the teachers to give in their lectures, special attention to the reactions were the students made more mistakes.

  4. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and the time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number

  5. Neutron and gamma spectra measurements and calculations in benchmark spherical iron assemblies with a 252Cf neutron source in the centre

    CERN Document Server

    Jansky, B; Turzik, Z; Kyncl, J; Cvachovec, F; Trykov, L A; Volkov, V S

    2002-01-01

    Neutron and gamma spectra measurements have been made for benchmark iron spherical assemblies with diameters of 30, 50 and 100 cm. 252Cf neutron sources with different emission rates were placed at the centre of the iron spheres. In the first stage of the project, independent laboratories took part in the leakage spectra measurements. The proton recoil method was used with stilbene crystals and hydrogen proportional counters. The working range of the spectrometers is 0.01 to 16 MeV for neutrons and 0.40 to 12 MeV for gamma rays. Corresponding calculations have been carried out. It is proposed to carefully analyse the mixed neutron and gamma leakage spectrum from the iron sphere of diameter 50 cm and then adopt that field as a standard.

  6. Benchmark physics experiment of metallic-fueled LMFBR at FCA. 2; Experiments of FCA assembly XVI-1 and their analyses

    Energy Technology Data Exchange (ETDEWEB)

    Iijima, Susumu; Oigawa, Hiroyuki; Ohno, Akio; Sakurai, Takeshi; Nemoto, Tatsuo; Osugi, Toshitaka; Satoh, Kunio; Hayasaka, Katsuhisa [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Bando, Masaru

    1993-10-01

    The availability of data and methods for the design of a metallic-fueled LMFBR is examined using the experimental results of FCA assembly XVI-1. The experiments included criticality and reactivity coefficients such as Doppler, sodium void, fuel shifting and fuel expansion. Reaction rate ratios, sample worths and control rod worths were also measured. The analysis was made using three-dimensional diffusion calculations and JENDL-2 cross sections. Predictions of assembly XVI-1 reactor physics parameters agree reasonably well with the measured values, but for some reactivity coefficients, such as Doppler, large-zone sodium void and fuel shifting, further improvement of the calculation method is needed. (author).

  7. Computational benchmarking of fast neutron transport throughout large water thicknesses; Benchmark theorique du transport de neutrons rapides a travers de larges epaisseurs d'eau

    Energy Technology Data Exchange (ETDEWEB)

    Risch, P.; Dekens, O.; Ait Abderrahim, H. [SCK-CEN, Fuel Research Department, (Belgium); Wouters, R. de [Tractebel, Energy Engineering, (Belgium)

    1997-10-01

    Neutron dosimetry experiments seem to point out difficulties in the treatment of large water thicknesses like those encountered between the core baffle and the pressure vessel. This paper describes the theoretical benchmark undertaken by EDF, SCK/CEN and TRACTEBEL ENERGY ENGINEERING concerning the transport of fast neutrons through a one-meter cube of water located behind a U-235 fission source plate. The results showed no major discrepancies between the calculations up to 50 cm from the source, provided that a P3 expansion of the Legendre polynomials is used for the Sn calculations. The main differences occurred beyond 50 cm, reaching 20% at the end of the water cube. These results led us to consider an experimental benchmark dedicated to the problem of deep penetration of fast neutrons in water, which has been launched at SCK/CEN. (authors). 7 refs.

  8. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
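
    The "manufactured solutions" idea recommended above for code verification fits in a dozen lines: choose an exact solution, derive the forcing term it implies, solve numerically, and check that the error shrinks at the scheme's formal order. A minimal 1-D sketch, assuming a Poisson problem and second-order central differences (not drawn from the paper itself):

```python
import numpy as np

# Manufactured solution: pick u(x) = sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# so -u'' = f requires the forcing f(x) = pi^2 sin(pi x).
def solve(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)           # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2   # second-order -u'' operator
    u = np.linalg.solve(A, f)
    return np.sqrt(h * np.sum((u - np.sin(np.pi * x))**2))  # discrete L2 error

e1, e2 = solve(40), solve(80)
print(f"observed order = {np.log2(e1 / e2):.2f}")   # ~2.00 for a correct code
```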

  9. Benchmarking Experimental and Computational Thermochemical Data: A Case Study of the Butane Conformers.

    Science.gov (United States)

    Barna, Dóra; Nagy, Balázs; Csontos, József; Császár, Attila G; Tasi, Gyula

    2012-02-14

    Due to its crucial importance, numerous studies have been conducted to determine the enthalpy difference between the conformers of butane. However, it is shown here that the most reliable experimental values are biased due to the statistical model utilized during the evaluation of the raw experimental data. In this study, using the appropriate statistical model, both the experimental expectation values and the associated uncertainties are revised. For the 133-196 and 223-297 K temperature ranges, 668 ± 20 and 653 ± 125 cal mol⁻¹, respectively, are recommended as reference values. Furthermore, to show that present-day quantum chemistry is a favorable alternative to experimental techniques in the determination of enthalpy differences of conformers, a focal-point analysis, based on coupled-cluster electronic structure computations, has been performed that included contributions of up to perturbative quadruple excitations as well as small correction terms beyond the Born-Oppenheimer and nonrelativistic approximations. For the 133-196 and 223-297 K temperature ranges, in exceptional agreement with the corresponding revised experimental data, our computations yielded 668 ± 3 and 650 ± 6 cal mol⁻¹, respectively. The most reliable enthalpy difference values for 0 and 298.15 K are also provided by the computational approach, 680.9 ± 2.5 and 647.4 ± 7.0 cal mol⁻¹, respectively.

  10. Exploring the marketing challenges faced by assembled computer dealers

    OpenAIRE

    Kallimani, Rashmi

    2010-01-01

    There has been great competition in the computer market these days for obtaining a higher market share. The computer market, consisting of many branded and non-branded players, has been using various methods for matching supply and demand in the best possible way to attain market dominance. Branded companies are seen to be investing large amounts in aggressive marketing techniques for reaching customers and obtaining a higher market share. Due to this, many small companies and non-branded computer...

  11. Hydraulic benchmark data for PWR mixing vane grid

    International Nuclear Information System (INIS)

    This study presents new hydraulic benchmark data obtained for PWR rod bundles for the purpose of benchmarking Computational Fluid Dynamics (CFD) models of the rod bundle. The flow field in a PWR fuel assembly downstream of structural grids with mixing vanes attached is very complex due to the geometry of the subchannel and the high axial component of the velocity field relative to the secondary flows which are used to enhance the heat transfer performance of the rod bundle. Westinghouse has a CFD methodology to model PWR rod bundles that was developed with prior benchmark test data. As improvements in testing techniques have become available, further PWR rod bundle testing is being performed to obtain advanced data with high spatial and temporal resolution. This paper presents the advanced testing and benchmark data obtained by Westinghouse through collaboration with Texas A&M University. (author)

  12. Experience in programming Assembly language of CDC CYBER 170/750 computer

    International Nuclear Information System (INIS)

    To optimize the processing time of the BCG computer code on the CDC CYBER 170/750 computer, the INTERP subroutine was converted from FORTRAN-V to Assembly language. The BCG code was developed for solving the neutron transport equation by an iterative method, and the INTERP subroutine is the innermost loop of the code, carrying out five types of interpolation. The central processor unit Assembly language of the CDC CYBER 170/750 computer and its application in implementing the interpolation subroutine of the BCG code are described. (M.C.K.)
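
    The record does not reproduce the INTERP algorithm itself; as a rough illustration of the kind of inner loop such a conversion targets, here is a minimal linear table-interpolation sketch in Python (the function name and table layout are hypothetical stand-ins, not the BCG code's actual interface):

```python
import bisect

def interp_linear(x, xs, ys):
    """Linearly interpolate y(x) from tabulated points (xs, ys).

    xs must be sorted ascending; x is clamped to the table range.
    A loop like this, executed millions of times inside an iterative
    transport solver, is the kind of hot spot worth hand-coding.
    """
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1        # left bracketing index
    t = (x - xs[i]) / (xs[i + 1] - xs[i])     # fractional position in the interval
    return ys[i] + t * (ys[i + 1] - ys[i])

print(interp_linear(2.5, [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]))  # 6.5
```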

  13. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.

    Science.gov (United States)

    Li, Meng; Liu, Jun; Tsien, Joe Z

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly, a group of coherently or sequentially activated neurons, to represent a percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of the cell assembly remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed randomness within the mature but naïve cell assembly, the Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by the power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or improve the computational logic of brain circuits.
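
    The power-of-two relation invoked above states that i distinct inputs require n = 2^i - 1 distinct neuronal cliques, one per non-empty input combination, running from the most specific (single-input) cliques to the most general (all-input) clique. A minimal sketch of that counting argument (illustrative only, not the authors' code):

```python
from itertools import combinations

def connectivity_motifs(inputs):
    """Enumerate the 2**i - 1 non-empty input combinations that the
    Theory of Connectivity maps onto distinct neuronal cliques."""
    motifs = []
    for k in range(1, len(inputs) + 1):
        motifs.extend(combinations(inputs, k))
    return motifs

inputs = ["A", "B", "C"]
motifs = connectivity_motifs(inputs)
print(len(motifs), motifs)   # 7 cliques: specific (A,), (B,), (C,) ... general (A, B, C)
assert len(motifs) == 2 ** len(inputs) - 1
```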

  14. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    Directory of Open Access Journals (Sweden)

    Meng eLi

    2016-04-01

    Full Text Available Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly, a group of coherently or sequentially activated neurons, to represent a percept, memory, or concept. Despite the rekindled interest in this age-old idea, the concept of the cell assembly remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed local randomness within the mature but naïve cell assembly, the recent Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by the power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or enhance the computational logic of brain circuits.

  15. Self-assembly of amphiphilic molecules:A review on the recent computer simulation results

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    We provide a short review of recent progress in computer simulations of adsorption and self-assembly of amphiphilic molecules. Owing to the extensive applications of amphiphilic molecules, it is very important to understand thoroughly the effects of the detailed chemistry, solid surfaces, and the degree of confinement on the aggregate morphologies and the kinetics of self-assembly in amphiphilic systems. In this review we pay special attention to (i) the morphologies of adsorbed surfactants on solid surfaces, (ii) self-assembly in confined systems, and (iii) kinetic processes involving amphiphilic molecules.

  16. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  17. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  18. Arithmetic computation using self-assembly of DNA tiles:subtraction and division

    Institute of Scientific and Technical Information of China (English)

    Xuncai Zhang; Yanfeng Wang; Zhihua Chen; Jin Xu; Guangzhao Cui

    2009-01-01

    Recently, experiments have demonstrated that simple binary arithmetic and logical operations can be computed by the process of self-assembly of DNA tiles. In this paper, we show how the tile assembly process can be used for subtraction and division. In order to achieve this aim, four systems, including the comparator system, the duplicator system, the subtraction system, and the division system, are proposed to compute the difference and quotient of two input numbers using the tile assembly model. This work indicates that these systems can be carried out in polynomial time with optimal O(1) distinct tile types in parallel and at very low cost. Furthermore, we provide a scheme to factor the product of two prime numbers, and it is a breakthrough in basic biological operations using a molecular computer by self-assembly.
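
    The abstract does not list the tile types themselves; the following sketch abstracts the arithmetic the subtraction system performs, treating each bit position as a "tile" selected by the two input bits and the incoming borrow (a serial stand-in for what the tile assembly model performs in parallel rows, not the authors' actual tile set):

```python
def tile_subtract(a_bits, b_bits):
    """Bitwise subtraction a - b (assuming a >= b) with borrow propagation.

    Each iteration mimics one tile attachment: the "tile type" is chosen
    by the pair of input bits plus the incoming borrow, and it exposes a
    difference bit and an outgoing borrow to its neighbor.
    """
    diff, borrow = [], 0
    for a, b in zip(a_bits, b_bits):          # least-significant bit first
        d = a ^ b ^ borrow                    # difference bit of this "tile"
        borrow = ((~a & (b | borrow)) | (b & borrow)) & 1
        diff.append(d)
    return diff

# 13 - 6 = 7, bit lists given LSB-first
print(tile_subtract([1, 0, 1, 1], [0, 1, 1, 0]))  # [1, 1, 1, 0] == 7
```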

  19. M4D: a powerful tool for structured programming at assembly level for MODCOMP computers

    International Nuclear Information System (INIS)

    Structured programming techniques offer numerous benefits for software designers and form the basis of current high-level languages. However, these techniques are generally not available to assembly programmers. The M4D package was therefore developed for a large project to enable the use of structured programming constructs such as DO.WHILE-ENDDO and IF-ORIF-ORIF...-ELSE-ENDIF in the assembly code for MODCOMP computers. Programs can thus be produced that have clear semantics and are considerably easier to read than normal assembly code, resulting in reduced program development and testing effort, and in improved long-term maintainability of the code. This paper describes the M4D structured programming tool as implemented for MODCOMP's MAX III and MAX IV assemblers, and illustrates the use of the facility with a number of examples.

  20. Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    King, Neil P.; Sheffler, William; Sawaya, Michael R.; Vollmar, Breanna S.; Sumida, John P.; André, Ingemar; Gonen, Tamir; Yeates, Todd O.; Baker, David (UWASH); (UCLA); (HHMI); (Lund)

    2015-09-17

    We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.

  1. Computer Aided Design of the Link-Fork Head-Piston Assembly of the Kaplan Turbine with Solidworks

    Directory of Open Access Journals (Sweden)

    Camelia Jianu

    2010-10-01

    Full Text Available The paper presents the steps for the 3D computer-aided design (CAD) of the link-fork head-piston assembly of the Kaplan turbine in SolidWorks. The paper is a tutorial for the 3D geometry of a Kaplan turbine assembly, dedicated to assembly design, drawing geometry, and drawing annotation.

  2. MULTI-AGENT COMPUTER AIDED ASSEMBLY PROCESS PLANNING SYSTEM FOR SHIP HULL

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-agent computer aided assembly process planning system (MCAAPP) for ship hulls is presented. The system includes the system framework, a global facilitator, the macro agent structure, an agent communication language, an agent-oriented programming language, knowledge representation, and the reasoning strategy. The system can produce the technological file and technological quota, which satisfy the production needs of a factory.

  3. DNA Self-Assembly and Computation Studied with a Coarse-grained Dynamic Bonded Model

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Fellermann, Harold; Rasmussen, Steen

    2012-01-01

    We utilize a coarse-grained directional dynamic bonding DNA model [C. Svaneborg, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005)] to study DNA self-assembly and DNA computation. In our DNA model, a single nucleotide is represented by a single interaction site, and complementary sites can...

  4. A Solar Powered Wireless Computer Mouse: Design, Assembly and Preliminary Testing of 15 Prototypes

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.; Reich, N.H.; Alsema, E.A.; Netten, M.P.; Veefkind, M.; Silvester, S.; Elzen, B.; Verwaal, M.

    2007-01-01

    The concept and design of a solar powered wireless computer mouse has been completed, and 15 prototypes have been successfully assembled. After necessary cutting, the crystalline silicon cells show satisfactory efficiency: up to 14% when implemented into the mouse device. The implemented voltage con

  5. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to covariance (ANCOVA) statistical analysis. Results indicate that the SE-plus-diagram…

  6. The solution of the LEU and MOX WWER-1000 calculation benchmark with the CARATE - multicell code

    International Nuclear Information System (INIS)

    Preparations for disposition of weapons grade plutonium in WWER-1000 reactors are in progress. Benchmark: Defined by the Kurchatov Institute (S. Bychkov, M. Kalugin, A. Lazarenko) to assess the applicability of computer codes for weapons grade MOX assembly calculations. Framework: 'Task force on reactor-based plutonium disposition' of OECD Nuclear Energy Agency. (Authors)

  7. A computational technique to identify the optimal stiffness matrix for a discrete nuclear fuel assembly model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nam-Gyu, E-mail: nkpark@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Kyoung-Joo, E-mail: kyoungjoo@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Kyoung-Hong, E-mail: kyounghong@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Suh, Jung-Min, E-mail: jmsuh@knfc.co.kr [R and D Center, KEPCO Nuclear Fuel Co., LTD., 493 Deokjin-dong, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)

    2013-02-15

    Highlights: ► An identification method for the optimal stiffness matrix of a fuel assembly structure is discussed. ► The least squares optimization method is introduced, and a closed-form solution of the problem is derived. ► The method can be extended to systems with a limited number of modes. ► The identification error due to a perturbed mode shape matrix is analyzed. ► Verification examples show that the proposed procedure leads to a reliable solution. -- Abstract: A reactor core structural model which is used to evaluate the structural integrity of the core contains nuclear fuel assembly models. Since the reactor core consists of many nuclear fuel assemblies, the use of a refined fuel assembly model leads to a considerable amount of computing time for performing nonlinear analyses such as the prediction of seismically induced vibration behaviors. The computational time can be reduced by replacing the detailed fuel assembly model with a simplified model that has fewer degrees of freedom, but the dynamic characteristics of the detailed model must be maintained in the simplified model. Such a model, based on an optimal design method, is proposed in this paper. That is, when a mass matrix and a mode shape matrix are given, the optimal stiffness matrix of a discrete fuel assembly model can be estimated by applying the least squares minimization method. The verification of the method is completed by comparing test results and simulation results. This paper shows that the simplified model's dynamic behaviors are quite similar to the experimental results and that the suggested method is suitable for identifying a reliable mathematical model for fuel assemblies.
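
    The least-squares identification described in the highlights can be stated concretely: for the generalized eigenproblem K Φ = M Φ Λ with known mass matrix M, retained mode-shape matrix Φ, and eigenvalue matrix Λ, the minimum-norm least-squares stiffness is K = M Φ Λ Φ⁺, with Φ⁺ the Moore-Penrose pseudo-inverse. A small NumPy sketch under that assumption (the paper's exact formulation may differ):

```python
import numpy as np

def identify_stiffness(M, Phi, eigvals):
    """Minimum-norm stiffness matrix reproducing the given modes.

    Solves K @ Phi = M @ Phi @ Lambda in the least-squares sense,
    where Lambda = diag(eigvals); the result is symmetrized so it
    is admissible as a structural stiffness matrix.
    """
    Lam = np.diag(eigvals)
    K = M @ Phi @ Lam @ np.linalg.pinv(Phi)
    return 0.5 * (K + K.T)                    # enforce symmetry

# Toy 3-DOF check: identify K from only 2 retained modes of a known system.
M = np.diag([2.0, 1.0, 1.0])
K_true = np.array([[4.0, -2.0, 0.0], [-2.0, 4.0, -2.0], [0.0, -2.0, 2.0]])
w, V = np.linalg.eig(np.linalg.inv(M) @ K_true)   # K v = w M v
order = np.argsort(w)
Phi, lam = V[:, order[:2]], w[order[:2]]          # keep the 2 lowest modes
K_id = identify_stiffness(M, Phi, lam)
print(np.round(K_id, 3))                          # reproduces the retained modes
```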

  8. Dynamics of nuclear fuel assemblies in vertical flow channels: computer modelling and associated studies

    International Nuclear Information System (INIS)

    A computer model, designed to predict the dynamic behaviour of nuclear fuel assemblies in axial flow, is described in this report. The numerical methods used to construct and solve the matrix equations of motion in the model are discussed together with an outline of the method used to interpret the fuel assembly stability data. The mathematics developed for forced response calculations are described in detail. Certain structural and hydrodynamic modelling parameters must be determined by experiment. These parameters are identified and the methods used for their evaluation are briefly described. Examples of typical applications of the dynamic model are presented towards the end of the report. (author)

  9. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
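
    As a sketch of the error-injection step described above, the following assumes the setup parameters form a flat dictionary and draws a per-image random error on top of a fixed systematic offset; the parameter names and magnitudes are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_setup(reference, systematic, random_sd, n_images):
    """Generate known setup errors for benchmark portal images.

    reference  : dict of nominal setup parameters (mm / degrees)
    systematic : fixed offset added to every image
    random_sd  : standard deviation of the per-image random error
    Returns one perturbed parameter set per image; the true errors
    are known by construction, which is the point of the benchmark.
    """
    cases = []
    for _ in range(n_images):
        case = {k: v + systematic.get(k, 0.0)
                     + rng.normal(0.0, random_sd.get(k, 0.0))
                for k, v in reference.items()}
        cases.append(case)
    return cases

ref = {"lat_mm": 0.0, "long_mm": 0.0, "vert_mm": 0.0, "gantry_deg": 0.0}
print(perturb_setup(ref, {"lat_mm": 2.0}, {"lat_mm": 1.0, "vert_mm": 1.0}, 3))
```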

  10. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  11. Assembly of a 3D Cellular Computer Using Folded E-Blocks

    Directory of Open Access Journals (Sweden)

    Shivendra Pandey

    2016-04-01

    Full Text Available The assembly of integrated circuits in three dimensions (3D) provides a possible solution to address the ever-increasing demands of modern day electronic devices. It has been suggested that by using the third dimension, devices with high density, defect tolerance, short interconnects and small overall form factors could be created. However, apart from pseudo-3D architectures, such as monolithic integration and die or wafer stacking, the creation of paradigms to integrate electronic low-complexity cellular building blocks in an architecture that has tile space in all three dimensions has remained elusive. Here, we present software and hardware foundations for truly 3D cellular computational devices that could be realized in practice. The computing architecture relies on the scalable, self-configurable and defect-tolerant cell matrix. The hardware is based on a scalable and manufacturable approach for 3D assembly using folded polyhedral electronic blocks (E-blocks). We created monomers, dimers and 2 × 2 × 2 assemblies of polyhedral E-blocks and verified the computational capabilities by implementing simple logic functions. We further show that 63.2% more compact 3D circuits can be obtained with our design automation tools compared to a 2D architecture. Our results provide a proof-of-concept for a scalable and manufacture-ready process for constructing massive-scale 3D computational devices.

  12. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement. Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  13. A computational investigation on the connection between dynamics properties of ribosomal proteins and ribosome assembly.

    Directory of Open Access Journals (Sweden)

    Brittany Burton

    Full Text Available Assembly of the ribosome from its protein and RNA constituents has been studied extensively over the past 50 years, and experimental evidence suggests that prokaryotic ribosomal proteins undergo conformational changes during assembly. However, to date, no studies have attempted to elucidate these conformational changes. The present work utilizes computational methods to analyze protein dynamics and to investigate the linkage between dynamics and binding of these proteins during the assembly of the ribosome. Ribosomal proteins are known to be positively charged, and we find the percentage of positive residues in r-proteins to be about twice that of the average protein: Lys+Arg is 18.7% for E. coli and 21.2% for T. thermophilus. Also, positive residues constitute a large proportion of RNA-contacting residues: 39% for E. coli and 46% for T. thermophilus. This affirms the known importance of charge-charge interactions in the assembly of the ribosome. We studied the dynamics of three primary proteins from the E. coli and T. thermophilus 30S subunits that bind early in the assembly (S15, S17, and S20) with atomistic molecular dynamics simulations, followed by a study of all r-proteins using elastic network models. Molecular dynamics simulations show that solvent-exposed proteins (S15 and S17) tend to adopt more stable solution conformations than an RNA-embedded protein (S20). We also find that protein residues that contact the 16S rRNA are generally more mobile than the other residues. This is because a larger proportion of the contacting residues are located in flexible loop regions. By the use of elastic network models, which are computationally more efficient, we show that this trend holds for most of the 30S r-proteins.
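
    The Lys+Arg percentages quoted above are simple sequence statistics; a one-function sketch of the computation (the example sequence is a hypothetical fragment, not an actual r-protein):

```python
def positive_fraction(seq):
    """Fraction of positively charged residues (Lys 'K' + Arg 'R')
    in a one-letter-code protein sequence."""
    seq = seq.upper()
    return (seq.count("K") + seq.count("R")) / len(seq)

# Hypothetical fragment, for illustration only:
print(f"{positive_fraction('MKRLSKAARQVIRK'):.1%}")   # 42.9%
```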

  14. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 6. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    The OECD/NEA Main-Steam-Line-Break (MSLB) Benchmark lets us compare state-of-the-art and best-estimate models used to compute reactivity accidents. A comprehensive study has been carried out by CEA and IPSN with the CATHARE, CRONOS2, and FLICA4 codes to assess the three-dimensional (3-D) effects in the MSLB accident and to explain the return-to-power (RTP) occurrence. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in an RTP after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. Our objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal-hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed. Modeling of the vessel mixing is identified as a major concern for an accurate computation of MSLB. On one hand, the RTP in exercise 1 is driven by the mixing between primary loops, and on the other hand, the hot assembly power in exercise 3 depends on the inlet temperature map at the assembly level. Vessel mixing between primary loops is defined by the ratio of the hot-leg temperature difference over the cold-leg temperature difference. Specifications indicate a ratio of 50%. Sensitivity studies on this ratio were conducted with CATHARE and point kinetics. Full mixing of the primary loops leads to an earlier and higher RTP, while no mixing results in a later and weaker RTP. Indeed, the intact steam generator (SG) is used to cool down the broken SG when both loops are mixed in the vessel, and the primary temperature decreases faster. In the extreme case of no mixing, only one-half of the primary circuit is
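
    The mixing sensitivity study lends itself to a one-line model: each core sector receives a fraction m of the other loop's cold-leg flow and, assuming an identical core heat-up in both sectors, the hot-leg over cold-leg temperature-difference ratio is 1 - 2m. A toy sketch of that bookkeeping (not the CATHARE mixing model):

```python
def hot_leg_ratio(t_cold_a, t_cold_b, m):
    """Hot-leg over cold-leg temperature-difference ratio for a simple
    two-loop vessel mixing model.

    Each core sector receives a fraction m of the *other* loop's flow:
        m = 0.0  -> no mixing   (ratio 1.0)
        m = 0.5  -> full mixing (ratio 0.0)
    The 50% specification corresponds to m = 0.25.
    """
    t_in_a = (1 - m) * t_cold_a + m * t_cold_b   # sector inlet temperatures
    t_in_b = (1 - m) * t_cold_b + m * t_cold_a
    return (t_in_a - t_in_b) / (t_cold_a - t_cold_b)

print(hot_leg_ratio(286.0, 270.0, 0.25))   # 0.5, matching the benchmark spec
```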

  15. The use of computer vision and force sensing for tight tolerance assembly

    Energy Technology Data Exchange (ETDEWEB)

    Bayliss, J.D. [California State Univ., Fresno, CA (United States)

    1993-05-19

    Computer vision and force control provide feedback for robot manipulation during the assembly of objects. Both techniques have weaknesses, but their complementary strengths enable them to work well together, achieving assembly with tight tolerances. For instance, camera resolution limits the accuracy of computer vision, but it can locate approximately where the part should be placed and is an excellent choice for coarse placement of the part. Force control senses the force induced by object contact and, if used extensively, could damage a delicate part, but when used for fine placement of an object, it compensates for the error in coarse placement. It is our goal to utilize the best features of force sensing and computer vision to reduce the error in placement of an object. The results of placing a peg in a hole with 0.15 mm tolerance using different camera resolutions will be presented. We have chosen to use computer vision to move the peg as close to its correct placement point as possible and force control to make minor adjustments, achieving the correct positioning of the peg.
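
    A minimal sketch of the coarse-to-fine strategy the abstract describes, with all sensor and actuator interfaces stubbed out as callables (hypothetical names; the paper does not specify an API):

```python
def assemble_peg(target_xy, see_peg, feel_force, move_to, nudge):
    """Coarse vision placement followed by fine force-guided search.

    see_peg()    -> estimated (x, y) of the peg from the camera
    feel_force() -> lateral reaction force (fx, fy) from the wrist sensor
    move_to/nudge command the robot; all four are stand-ins for real
    hardware interfaces.
    """
    x, y = see_peg()
    move_to(target_xy[0] - x, target_xy[1] - y)     # coarse: vision only
    for _ in range(50):                             # fine: force feedback
        fx, fy = feel_force()
        if abs(fx) < 0.1 and abs(fy) < 0.1:         # peg seated, forces relaxed
            return True
        nudge(-0.01 * fx, -0.01 * fy)               # back off the contact force
    return False

# Toy demo with stubbed hardware: the peg starts 5 mm off-center.
state = {"err": 5.0}
ok = assemble_peg(
    (0.0, 0.0),
    see_peg=lambda: (state["err"], 0.0),
    feel_force=lambda: (state["err"], 0.0),
    move_to=lambda dx, dy: state.update(err=state["err"] + dx),
    nudge=lambda dx, dy: state.update(err=state["err"] + dx),
)
print(ok)   # True: vision removes the 5 mm offset, force check confirms seating
```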

  16. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of complete experiment reconstruction and by large amounts of input data (dependent and independent) of very different reliability. One should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and the isotopic composition of individual materials. The calculations of critical facility experiments are performed for benchmark models generated by specific reconstruction codes, each with its own features for adjusting model parameters, and using a nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can still lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using several calculated models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  17. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  18. Computational design of a self-assembling symmetrical β-propeller protein

    Science.gov (United States)

    Voet, Arnout R. D.; Noguchi, Hiroki; Addy, Christine; Simoncini, David; Terada, Daiki; Unzai, Satoru; Park, Sam-Yong; Zhang, Kam Y. J.; Tame, Jeremy R. H.

    2014-01-01

    The modular structure of many protein families, such as β-propeller proteins, strongly implies that duplication played an important role in their evolution, leading to highly symmetrical intermediate forms. Previous attempts to create perfectly symmetrical propeller proteins have failed, however. We have therefore developed a new and rapid computational approach to design such proteins. As a test case, we have created a sixfold symmetrical β-propeller protein and experimentally validated the structure using X-ray crystallography. Each blade consists of 42 residues. Proteins carrying 2–10 identical blades were also expressed and purified. Two or three tandem blades assemble to recreate the highly stable sixfold symmetrical architecture, consistent with the duplication and fusion theory. The other proteins produce different monodisperse complexes, up to 42 blades (180 kDa) in size, which self-assemble according to simple symmetry rules. Our procedure is suitable for creating nano-building blocks from different protein templates of desired symmetry. PMID:25288768

  19. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
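
    A minimal sketch of the 1-NN assignment rule evaluated in the benchmark, using raw per-position identity in place of a real BLAST search (toy data; the actual benchmark uses curated reference databases):

```python
def identity(a, b):
    """Fraction of matching positions between two aligned sequences."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

def one_nn(query, reference_db):
    """1-NN taxonomic assignment: copy the taxon of the most similar
    reference sequence (the BLAST-top-hit analogue in the benchmark)."""
    taxon, _ = max(reference_db.items(), key=lambda kv: identity(query, kv[1]))
    return taxon

refs = {"Taxon_A": "ACGTACGTAC", "Taxon_B": "ACGTTTGTAC"}   # toy reference DB
print(one_nn("ACGTACGTAA", refs))                           # -> Taxon_A
```

    The benchmark's key finding follows directly from this rule: when the query's true species is absent from refs, the rule still returns some taxon, which is exactly the misidentification mode the QCauto method is designed to avoid.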

  20. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Full Text Available Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  1. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    Science.gov (United States)

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to

  2. Applications of the theory of computation to nanoscale self-assembly

    Science.gov (United States)

    Doty, David Samuel

    This thesis applies the theory of computing to the theory of nanoscale self-assembly, to explore the ability -- and under certain conditions, the inability -- of molecules to automatically arrange themselves in computationally sophisticated ways. In particular, we investigate a model of molecular self-assembly known as the abstract Tile Assembly Model (aTAM), in which different types of square "tiles" represent molecules that, through the interaction of highly specific binding sites on their four sides, can automatically assemble into larger and more elaborate structures. We investigate the possibility of using the inherent randomness of sampling different tiles in a well-mixed solution to drive selection of random numbers from a finite set, and explore the tradeoff between the uniformity of the imposed distribution and the size of structures necessary to process the sampled tiles. We then show that the inherent randomness of the competition of different types of molecules for binding can be exploited in a different way. By adjusting the relative concentrations of tiles, the structure assembled by a tile set is shown to be programmable to a high precision, in the following sense. There is a single tile set that can be made to assemble a square of arbitrary width with high probability, by setting the concentrations of the tiles appropriately, so that all the information about the square's width is "learned" from the concentrations by sampling the tiles. Based on these constructions, and those of other researchers, which have been completely implemented in a simulated environment, we design a high-level domain-specific "visual language" for implementing complex constructions in the aTAM. This language frees the implementer of an aTAM construction from many low-level and tedious details of programming and, together with a visual software tool that directly implements the basic operations of the language, frees the implementer from almost any programming at all
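
    A toy sketch of concentration-programmed sampling, the randomness source discussed above: tile types are drawn i.i.d. with probability proportional to their solution concentrations, and the expected run length before a "stop" tile encodes the programmed quantity (illustrative only; the thesis constructions are far more involved):

```python
import random

def sample_tiles(concentrations, n):
    """Draw tile types i.i.d. with probability proportional to their
    solution concentrations in a well-mixed solution."""
    types, weights = zip(*concentrations.items())
    return random.choices(types, weights=weights, k=n)

# A 'grow' tile vs a 'stop' tile: the position of the first stop tile
# "learns" the concentration ratio, geometrically distributed.
conc = {"grow": 0.9, "stop": 0.1}
run = sample_tiles(conc, 30)
width = next((i for i, t in enumerate(run) if t == "stop"), len(run))
print(run, "-> width", width)
```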

  3. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core has been known to be a challenging problem not only for the two-step homogenization approach but also for the direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis is performed on neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly according to the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factor (ADF) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.

  4. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
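
    Since the benchmark's central kernel is mathematically well defined, a reference implementation is easy to state; here is a dense power-iteration PageRank sketch (a standalone illustration, not the benchmark's official code):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power iteration for PageRank on a dense adjacency matrix.

    adj[i, j] = 1 if page i links to page j.  Dangling pages are
    treated as linking to every page, the usual convention.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)  # row-stochastic
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
print(pagerank(adj))   # ranks sum to 1
```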

  5. Benchmark ab initio thermochemistry of the isomers of diimide, $N_{2}H_2$, using accurate computed structures and anharmonic force fields

    CERN Document Server

    Martin, J M L; Martin, Jan M.L.; Taylor, Peter R.

    1999-01-01

    A benchmark ab initio study on the thermochemistry of the trans-HNNH, cis-HNNH, and H$_2$NN isomers of diazene has been carried out using the CCSD(T) coupled cluster method, basis sets as large as $[7s6p5d4f3g2h/5s4p3d2f1g]$, and extrapolations towards the 1-particle basis set limit. The effects of inner-shell correlation and of anharmonicity in the zero-point energy were taken into account: accurate geometries and anharmonic force fields were thus obtained as by-products. Our best computed $\Delta H^\circ_{f,0}$ for trans-HNNH, 49.2 \pm 0.3 kcal/mol, is in very good agreement with a recent experimental lower limit of 48.8 \pm 0.5 kcal/mol. CCSD(T) basis set limit values for the isomerization energies at 0 K are 5.2 \pm 0.2 kcal/mol (cis-trans) and 24.1 \pm 0.2 kcal/mol (iso-trans). Our best computed geometry for trans-HNNH, $r_e$(NN)=1.2468 Å and $r_e$(NH)=1.0283 Å, reproduces the rotational constants of trans-HNNH to within better than 0.1 %. The rotation-vibration spectra of both cis-HNNH and H$_2$NN are dominated by ...
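
    The abstract does not spell out the extrapolation formula; a common two-point 1/L^3 scheme for correlation energies illustrates what "extrapolations towards the 1-particle basis set limit" involves (an assumed scheme for illustration, not necessarily the one used in the paper):

```python
def cbs_two_point(e_small, e_large, l_small, l_large):
    """Two-point 1/L**3 extrapolation of correlation energies to the
    complete-basis-set (CBS) limit, with L the cardinal number of the
    basis (e.g. 3 for triple-zeta, 4 for quadruple-zeta)."""
    num = e_large * l_large**3 - e_small * l_small**3
    return num / (l_large**3 - l_small**3)

# Hypothetical correlation energies (hartree) in TZ and QZ bases:
print(cbs_two_point(-0.3501, -0.3642, 3, 4))   # estimate of the CBS limit
```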

  6. A benchmark exercise on the use of CFD codes for containment issues using best practice guidelines: A computational challenge

    International Nuclear Information System (INIS)

    In the framework of the 5th EU-FWP project ECORA, the capabilities of CFD software packages for simulating flows in the containment of nuclear reactors were evaluated. Four codes were assessed using two basic tests in the PANDA facility addressing the transport of gases in a multi-compartment geometry. The assessment included a first attempt to use Best Practice Guidelines (BPGs) for the analysis of long, large-scale, transient problems. Due to the large computational overhead of the analysis, the BPGs could not be fully applied. It was thus concluded that the application of the BPGs to full containment analysis is out of reach with the currently available computer power. On the other hand, CFD codes used with a sufficiently detailed mesh seem capable of giving reliable answers on issues relevant to containment simulation using standard two-equation turbulence models. Development of turbulence models is constantly ongoing. If it turns out that advanced (and more computationally intensive) turbulence models are not needed, the use of the BPGs for 'certified' simulations could become feasible within a relatively short time

  7. Benchmark analysis of fission-rate distributions in a series of spherical depleted-uranium assemblies for hybrid-reactor design

    International Nuclear Information System (INIS)

    Highlights: • Simulations are performed using the MCNP code and the ENDF/B-V.0 library. • The fission-rate distribution in depleted-uranium assemblies is analyzed. • The calculations overestimate the measured fission rates. • The observed differences are discussed. - Abstract: The nuclear performance of a fission blanket in a hybrid reactor has been validated by analyzing fission-rate experiments with a series of spherical depleted-uranium assemblies. Calculations were made with the Monte Carlo transport code MCNP5 and the ENDF/B-V.0 continuous-energy cross sections and compared to the measured results. The ratios of calculated to experimental values (C/E) for the fission rate and the fission-rate ratio of 238U to 235U are presented along with a discussion of the validation of the ENDF/B-V.0 library regarding its use in the design of the fission blanket. Overestimations are observed in the calculation of the 238U and 235U fission rates at all positions, except the ones near the outer surfaces of the assemblies, and the C/Es of the fission rate decreased as the thickness of the depleted-uranium (DU) layer increased, while most of the C/Es of the fission-rate ratio of 238U to 235U were close to unity, being within the range of 0.95-1.05

  8. Coupled computational fluid dynamics and MOC neutronic simulations of Westinghouse PWR fuel assemblies with grid spacers

    International Nuclear Information System (INIS)

    Neutronic coupling with Computational Fluid Dynamics (CFD) has been under development within the US DOE sponsored "Nuclear Simulation Hub". The method of characteristics (MOC) neutronics code DeCART ([Joo, 2004], [Kochunas, 2009]), under development at the University of Michigan, was coupled with the CFD code STAR-CCM+ to achieve more accurate predictions of fuel assembly performance. At Westinghouse, lower-order neutronics codes such as the nodal code ANC have been coupled to thermal-hydraulics codes such as the subchannel code VIPRE to predict the heat flux and fuel nuclear behavior. However, a more detailed neutronics and temperature/fluid field simulation of fuel assembly models which includes explicit representation of spacer grids would considerably improve the design and assessment of new fuel assembly designs. Coupled STAR-CCM+/DeCART calculations have been performed for various representative three-dimensional models with explicit representation of spacer grids with mixing vanes. The high-fidelity results have been compared to lower-order simulations. The coupled CFD/MOC solution provides a more faithful model that includes a more accurate representation of all the important physics, such as fission energy, heat convection, heat conduction, and turbulence. Of particular significance is the ability to assess the effects of the mixing grid on the coolant temperature and density distribution using coupled thermal/fluids and neutronic solutions. A more precise cladding temperature can be derived by this approach, which will also enable more accurate prediction of departure from nucleate boiling (DNB), as well as a better understanding of DNB margin and crud build-up on the fuel rod. (author)
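
    The generic structure of such a coupling is a fixed-point (Picard) iteration that alternates the two solvers until the exchanged fields stop changing. A skeleton sketch with stubbed single-physics solvers (hypothetical closures; the real codes exchange power, temperature, and density fields on a shared mesh):

```python
def coupled_solve(solve_neutronics, solve_cfd, t_init, tol=1e-3, max_iter=50):
    """Fixed-point iteration between a neutronics and a CFD solver.

    solve_neutronics(T) -> power field given temperatures
    solve_cfd(q)        -> temperature field given the power/heat source
    Both callables are stand-ins for the single-physics codes.
    """
    T = t_init
    for _ in range(max_iter):
        q = solve_neutronics(T)        # transport sweep with T/density feedback
        T_new = solve_cfd(q)           # conjugate heat transfer / turbulence
        change = max(abs(a - b) for a, b in zip(T_new, T))
        T = T_new
        if change < tol:               # converged coupled solution
            return q, T
    raise RuntimeError("coupling did not converge")

# Toy closure: power rises as the moderator cools, temperature with power.
q, T = coupled_solve(lambda T: [1.0 + 0.001 * (600.0 - t) for t in T],
                     lambda q: [580.0 + 20.0 * p for p in q],
                     t_init=[600.0, 600.0])
print(q, T)
```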

  9. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  10. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowsa, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  11. Polymer GARD: computer simulation of covalent bond formation in reproducing molecular assemblies.

    Science.gov (United States)

    Shenhav, Barak; Bar-Even, Arren; Kafri, Ran; Lancet, Doron

    2005-04-01

    The basic Graded Autocatalysis Replication Domain (GARD) model consists of a repertoire of small molecules, typically amphiphiles, which join and leave a non-covalent micelle-like assembly. Its replication behavior is due to occasional fission, followed by a homeostatic growth process governed by the assembly's composition. Limitations of the basic GARD model are its small finite molecular repertoire and the lack of a clear path from a 'monomer world' towards polymer-based living entities. We have now devised an extension of the model (polymer GARD or P-GARD), where a monomer-based GARD serves as a 'scaffold' for oligomer formation, as a result of internal chemical rules. We tested this concept with computer simulations of a simple case of monovalent monomers, whereby more complex molecules (dimers) are formed internally, in a manner resembling biosynthetic metabolism. We have observed events of dimer 'take-over', the formation of compositionally stable, replication-prone quasi-stationary states (composomes) that have appreciable dimer content. The appearance of novel metabolism-like networks obeys a time-dependent power law, reminiscent of evolution under punctuated equilibrium. A simulation under constant population conditions shows the dynamics of takeover and extinction of different composomes, leading to the generation of different population distributions. The P-GARD model offers a scenario whereby biopolymer formation may be a result of rather than a prerequisite for early life-like processes. PMID:16010993
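
    The compositional growth step at the heart of the model can be sketched in a few lines: each species' join rate is catalysed by the assembly's current composition through a mutual-catalysis matrix β, and the assembly splits when it reaches a fission size. A toy rendition of the basic (monomer) GARD kinetics, omitting leave events and buffered concentrations (the published model draws β from a lognormal distribution, as assumed here):

```python
import random

NG = 5                                        # size of the molecular repertoire
random.seed(1)
# beta[i][j]: how strongly a type-j molecule already in the assembly
# catalyses the joining of a type-i molecule.
beta = [[random.lognormvariate(0, 1) for _ in range(NG)] for _ in range(NG)]

def gard_step(assembly):
    """Add one molecule, with join rates biased by the current composition
    (simplified basic GARD kinetics)."""
    rates = [1.0 + sum(beta[i][j] for j in assembly) for i in range(NG)]
    assembly.append(random.choices(range(NG), weights=rates)[0])

assembly = [0, 1]
while len(assembly) < 20:                     # grow to the fission size
    gard_step(assembly)
random.shuffle(assembly)
daughter = assembly[:10]                      # fission: a random half seeds progeny
print(sorted(assembly), "->", sorted(daughter))
```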

  12. Computer simulation of reaction-induced self-assembly of cellulose via enzymatic polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Kawakatsu, Toshihiro [Department of Physics, Faculty of Science, Tohoku University, Sendai 980-8578 (Japan); Tanaka, Hirokazu [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195, Japan (Japan); Koizumi, Satoshi [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195 (Japan); Hashimoto, Takeji [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195 (Japan)

    2006-09-13

    We present a comparison between results of computer simulations and neutron scattering/electron microscopy observations on reaction-induced self-assembly of cellulose molecules synthesized via in vitro polymerization at specific sites of enzymes in an aqueous reaction medium. The experimental results, obtained by using a combined small-angle scattering (SAS) analysis of USANS (ultra-SANS), USAXS (ultra-SAXS), SANS (small-angle neutron scattering), and SAXS (small-angle x-ray scattering) methods over an extremely wide range of wavenumber q (as wide as four orders of magnitude) and of a real-space analysis with field-emission scanning electron microscopy, elucidated that: (i) the surface structure of the self-assembly in the medium is characterized by a surface fractal dimension of D_s = 2.3 over a wide length scale (~30 nm to ~30 μm); (ii) its internal structure is characterized by crystallized cellulose fibrils spatially arranged with a mass fractal dimension of D_m = 2.1. These results were analysed by Monte Carlo simulation based on the diffusion-limited aggregation of rod-like molecules that model the cellulose molecules. The simulations show similar surface fractal dimensions to those observed in the experiments.
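
    The simulation approach named above is easy to illustrate in miniature. The sketch below grows a small two-dimensional diffusion-limited aggregate from point particles (the paper uses rod-like molecules in three dimensions, so the numerical value differs) and estimates the mass fractal dimension D_m from the mass-radius scaling M(r) ~ r^(D_m); all sizes are kept small for illustration.

      # Minimal sketch of diffusion-limited aggregation on a 2D lattice with a
      # mass-radius estimate of the mass fractal dimension D_m (expect ~1.7 in
      # 2D, versus the 2.1 measured in 3D in the work above).
      import numpy as np

      rng = np.random.default_rng(1)
      cluster = {(0, 0)}
      steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
      r_max = 1.0

      for _ in range(600):                       # grow 600 particles
          ang = rng.uniform(0, 2 * np.pi)        # launch from a circle outside
          x, y = int((r_max + 5) * np.cos(ang)), int((r_max + 5) * np.sin(ang))
          while True:
              dx, dy = steps[rng.integers(4)]
              x, y = x + dx, y + dy
              if x * x + y * y > (r_max + 20) ** 2:   # wandered too far: relaunch
                  ang = rng.uniform(0, 2 * np.pi)
                  x, y = int((r_max + 5) * np.cos(ang)), int((r_max + 5) * np.sin(ang))
              if any((x + dx, y + dy) in cluster for dx, dy in steps):
                  cluster.add((x, y))            # stick on contact with cluster
                  r_max = max(r_max, np.hypot(x, y))
                  break

      # mass-radius scaling M(r) ~ r^{D_m}: fit the log-log slope
      pts = np.array(list(cluster))
      r = np.hypot(pts[:, 0], pts[:, 1])
      radii = np.linspace(2, r_max, 12)
      mass = [(r < R).sum() for R in radii]
      D_m = np.polyfit(np.log(radii), np.log(mass), 1)[0]
      print(f"estimated mass fractal dimension D_m = {D_m:.2f}")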

  13. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  14. Computational analysis of hole placement errors for directed self-assembly

    Science.gov (United States)

    Yamamoto, K.; Nakano, T.; Muramatsu, M.; Tomita, T.; Matsuzaki, K.; Kitano, T.

    2015-03-01

    We report a computational study of directed self-assembly (DSA) morphology dislocations caused by the thermal fluctuation of block copolymers (BCPs) in grapho-epitaxial cylindrical guides. The dislocation factor, expressed as DSA-oriented placement errors (DSA-PEs), was evaluated numerically by historical data acquisition using dissipative particle dynamics simulation. The calculated DSA-PEs were compared with experimental results for two kinds of guide pattern: a resist guide with no surface modification (REF guide) and a resist guide coated with polystyrene (PS-brush guide). The vertical distribution of DSA-PEs within the cylindrical guides was calculated, and relatively high DSA-PEs near the top region were deduced, particularly in the REF guide. The tendency of the experimental DSA-PEs was well explained by the calculation once a fluctuation parameter on the wall particles was included. In the PS-brush guide, the calculated DSA-PEs increased drastically as the guide was made more fluctuating, indicating that a hard and steady guide condition should be fabricated in the PS-brush case to achieve better placement. Computations over a range of guide critical dimensions (CDs) suggest that a smaller guide CD gives better placement. The smallest DSA-PE value in this study was observed in the PS-brush guide with the smaller guide CD, because of the strong restriction of BCP arrangement flexibility.

  15. Computational modelling of genome-wide [corrected] transcription assembly networks using a fluidics analogy.

    Directory of Open Access Journals (Sweden)

    Yousry Y Azmy

    Full Text Available Understanding how a myriad of transcription regulators work to modulate mRNA output at thousands of genes remains a fundamental challenge in molecular biology. Here we develop a computational tool to aid in assessing the plausibility of gene regulatory models derived from genome-wide expression profiling of cells mutant for transcription regulators. mRNA output is modelled as fluid flow in a pipe lattice, with assembly of the transcription machinery represented by the effect of valves. Transcriptional regulators are represented as external pressure heads that determine flow rate. Modelling mutations in regulatory proteins is achieved by adjusting valves' on/off settings. The topology of the lattice is designed by the experimentalist to resemble the expected interconnection between the modelled agents and their influence on mRNA expression. Users can compare multiple lattice configurations so as to find the one that minimizes the error with experimental data. This computational model provides a means to test the plausibility of transcription regulation models derived from large genomic data sets.
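
    The fluidics analogy is mathematically a linear flow network: pipes are conductances, regulators are pressure heads, and a mutated regulator corresponds to a closed valve. A minimal Python sketch of this idea on an invented four-node lattice is shown below; the topology and conductance values are illustrative only.

      # Minimal sketch of the fluidics analogy: mRNA output as volumetric flow
      # in a small pipe network. Pipes are linear conductances, regulators are
      # fixed pressure heads, and a mutated regulator is modelled by closing a
      # valve (zeroing a conductance). The 4-node network is a made-up example.
      import numpy as np

      # conductance g[i][j] between nodes; node 0 = source head, node 3 = outlet
      g = np.zeros((4, 4))
      g[0, 1] = g[1, 0] = 1.0     # source -> junction (a "valve" to toggle)
      g[0, 2] = g[2, 0] = 0.5     # parallel branch
      g[1, 3] = g[3, 1] = 2.0
      g[2, 3] = g[3, 2] = 1.0

      def outlet_flow(g, p_source=1.0, p_outlet=0.0):
          # Kirchhoff's law at interior nodes 1, 2: sum_j g[i,j]*(p[i]-p[j]) = 0
          L = np.diag(g.sum(axis=1)) - g          # graph Laplacian
          interior = [1, 2]
          b = -L[np.ix_(interior, [0, 3])] @ np.array([p_source, p_outlet])
          p_int = np.linalg.solve(L[np.ix_(interior, interior)], b)
          p = np.array([p_source, p_int[0], p_int[1], p_outlet])
          return sum(g[3, j] * (p[j] - p[3]) for j in range(4))  # flow at outlet

      print("wild-type flow:", outlet_flow(g))   # -> 1.0 for these conductances
      g[0, 1] = g[1, 0] = 0.0                    # "mutant": close one valve
      print("mutant flow:   ", outlet_flow(g))   # -> 1/3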

  16. COXPRO-II: a computer program for calculating radiation and conduction heat transfer in irradiated fuel assemblies

    International Nuclear Information System (INIS)

    This report describes the computer program COXPRO-II, which was written for performing thermal analyses of irradiated fuel assemblies in a gaseous environment with no forced cooling. The heat transfer modes within the fuel pin bundle are radiation exchange among fuel pin surfaces and conduction by the stagnant gas. The array of parallel cylindrical fuel pins may be enclosed by a metal wrapper or shroud. Heat is dissipated from the outer surface of the fuel pin assembly by radiation and convection. Both equilateral triangle and square fuel pin arrays can be analyzed. Steady-state and unsteady-state conditions are included. Temperatures predicted by the COXPRO-II code have been validated by comparing them with experimental measurements. Temperature predictions compare favorably to temperature measurements in pressurized water reactor (PWR) and liquid-metal fast breeder reactor (LMFBR) simulated, electrically heated fuel assemblies. Also, temperature comparisons are made on an actual irradiated Fast-Flux Test Facility (FFTF) LMFBR fuel assembly

  17. Benchmarking of photon and coupled neutron and photon process of SuperMC 2.0

    International Nuclear Information System (INIS)

    Super Monte Carlo Calculation Program for Nuclear and Radiation Process (SuperMC), developed by the FDS Team in China, is a multi-functional simulation program based mainly on the Monte Carlo (MC) method and advanced computer technology. This paper focuses on benchmarking the photon and coupled neutron-photon physics processes of SuperMC 2.0. The integral photon leakage rates in spherical and hemispherical shell experiments were tested to verify the physics of photon and coupled neutron-photon transport. A vanadium assembly experiment and the ADS benchmark were used as comprehensive benchmarks. Correctness was preliminarily verified by comparing SuperMC calculation results with experimental results and MCNP calculation results. (author)

  18. Polymer Gard: Computer Simulation of Covalent Bond Formation in Reproducing Molecular Assemblies

    Science.gov (United States)

    Shenhav, Barak; Bar-Even, Arren; Kafri, Ran; Lancet, Doron

    2005-04-01

    The basic Graded Autocatalysis Replication Domain (GARD) model consists of a repertoire of small molecules, typically amphiphiles, which join and leave a non-covalent micelle-like assembly. Its replication behavior is due to occasional fission, followed by a homeostatic growth process governed by the assembly's composition. Limitations of the basic GARD model are its small finite molecular repertoire and the lack of a clear path from a 'monomer world' towards polymer-based living entities. We have now devised an extension of the model (polymer GARD or P-GARD), where a monomer-based GARD serves as a 'scaffold' for oligomer formation, as a result of internal chemical rules. We tested this concept with computer simulations of a simple case of monovalent monomers, whereby more complex molecules (dimers) are formed internally, in a manner resembling biosynthetic metabolism. We have observed events of dimer 'take-over' - the formation of compositionally stable, replication-prone quasi stationary states (composomes) that have appreciable dimer content. The appearance of novel metabolism-like networks obeys a time-dependent power law, reminiscent of evolution under punctuated equilibrium. A simulation under constant population conditions shows the dynamics of takeover and extinction of different composomes, leading to the generation of different population distributions. The P-GARD model offers a scenario whereby biopolymer formation may be a result of rather than a prerequisite for early life-like processes.

  19. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source model including time and location dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.

  20. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  1. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    Directory of Open Access Journals (Sweden)

    Kumar Parijat Tripathi

    Full Text Available RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting these extremely large data sets into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction-related information. It clusters the transcripts based on functional annotations and generates a tabular report for functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize the de-novo assembled reads obtained from NGS experiments for non-referenced organisms, while it also performs the functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. Web application is
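
    The enrichment statistics such a pipeline reports typically reduce to a hypergeometric test per annotation term. The following sketch shows that core computation with invented counts; it is not Transcriptator's actual implementation.

      # Minimal sketch of the statistic behind a GO-term enrichment report:
      # a hypergeometric test per term. The counts below are invented.
      from scipy.stats import hypergeom

      N = 12000   # annotated transcripts in the background
      K = 300     # background transcripts carrying the GO term
      n = 150     # transcripts in the submitted (e.g. diff. expressed) set
      k = 12      # submitted transcripts carrying the GO term

      # P(X >= k): probability of seeing at least k hits by chance
      p_value = hypergeom.sf(k - 1, N, K, n)
      fold = (k / n) / (K / N)
      print(f"fold enrichment = {fold:.1f}, p = {p_value:.2e}")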

  2. Single PWR spent fuel assembly heat transfer data for computer code evaluations

    International Nuclear Information System (INIS)

    The descriptions and results of two separate heat transfer tests designed to investigate the dry storage of commercial PWR spent fuel assemblies are presented. Presented first are descriptions and selected results from the Fuel Temperature Test performed at the Engine Maintenance and Disassembly facility on the Nevada Test Site. An actual spent fuel assembly from the Turkey Point Unit Number 3 Reactor, with a decay heat level of 1.17 kW, was installed vertically in a test-stand-mounted canister/liner assembly. The boundary temperatures were controlled and the canister backfill was alternated between air, helium and vacuum to investigate the primary heat transfer mechanisms of convection, conduction and radiation. The assembly temperature profiles were measured experimentally using installed thermocouple instrumentation. Also presented are the results from the Single Assembly Heat Transfer Test designed and fabricated by Allied General Nuclear Services, under contract to the Department of Energy, and ultimately conducted by the Pacific Northwest Laboratory. For this test, an electrically heated 15 x 15 rod assembly was used to model a single PWR spent fuel assembly. The electrically heated model fuel assembly permitted various ''decay heat'' levels to be tested; 1.0 kW and 0.5 kW were used for these tests. The model fuel assembly was positioned within a prototypic fuel tube and in turn placed within a double-walled sealed cask. The complete test assembly could be positioned at any desired orientation (horizontal, vertical, and 25° from horizontal for the present work) and backfilled as desired (air, helium, or vacuum). Tests were run for all combinations of ''decay heat'' level, backfill, and orientation. Boundary conditions were imposed by temperature-controlled guard heaters installed on the cask exterior surface

  3. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  4. Performance Evaluation and Benchmarking of Intelligent Systems

    Energy Technology Data Exchange (ETDEWEB)

    del Pobil, Angel [Jaume-I University]; Madhavan, Raj [ORNL]; Bonsignorio, Fabio [Heron Robots, Italy]

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  5. OECD NEA Benchmark Database of Spent Nuclear Fuel Isotopic Compositions for World Reactor Designs

    Energy Technology Data Exchange (ETDEWEB)

    Gauld, Ian C [ORNL; Sly, Nicholas C [ORNL; Michel-Sendis, Franco [OECD Nuclear Energy Agency

    2014-01-01

    Experimental data on the isotopic concentrations in irradiated nuclear fuel represent one of the primary methods for validating computational methods and nuclear data used for reactor and spent fuel depletion simulations that support nuclear fuel cycle safety and safeguards programs. Measurement data have previously not been available to users in a centralized or searchable format, and the majority of accessible information has been, for the most part, limited to light-water-reactor designs. This paper describes a recent initiative to compile spent fuel benchmark data for additional reactor designs used throughout the world that can be used to validate computer model simulations that support nuclear energy and nuclear safeguards missions. Experimental benchmark data have been expanded to include VVER-440, VVER-1000, RBMK, graphite moderated MAGNOX, gas cooled AGR, and several heavy-water moderated CANDU reactor designs. Additional experimental data for pressurized light water and boiling water reactor fuels has also been compiled for modern assembly designs and more extensive isotopic measurements. These data are being compiled and uploaded to a recently revised structured and searchable database, SFCOMPO, to provide the nuclear analysis community with a centrally-accessible resource of spent fuel compositions that can be used to benchmark computer codes, models, and nuclear data. The current version of SFCOMPO contains data for eight reactor designs, 20 fuel assembly designs, more than 550 spent fuel samples, and measured isotopic data for about 80 nuclides.
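
    In practice such measurement data are consumed as calculated-to-experimental (C/E) ratios per nuclide. The sketch below illustrates the comparison for a single hypothetical sample; the concentrations are invented placeholders, not SFCOMPO entries.

      # Minimal sketch of how a database such as SFCOMPO is used in validation:
      # C/E ratios per nuclide for one spent fuel sample (numbers invented).
      measured = {"U235": 8.1e-3, "Pu239": 5.6e-3, "Cs137": 1.4e-3}   # g/gU, assumed
      computed = {"U235": 8.3e-3, "Pu239": 5.3e-3, "Cs137": 1.45e-3}  # depletion code

      for nuclide, e in measured.items():
          c = computed[nuclide]
          print(f"{nuclide}: C/E = {c / e:.3f} ({100 * (c - e) / e:+.1f}%)")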

  6. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  7. Cluster computing as an assembly process: coordination with S-Net

    NARCIS (Netherlands)

    C. Grelck; J. Julku; F. Penczek; A. Shafarenko

    2010-01-01

    This poster will present a coordination language for distributed computing and will discuss its application to cluster computing. It will introduce a programming technique of cluster computing whereby application components are completely dissociated from the communication/coordination infrastructure

  8. Molecular design driving tetraporphyrin self-assembly on graphite: a joint STM, electrochemical and computational study

    Science.gov (United States)

    El Garah, M.; Santana Bonilla, A.; Ciesielski, A.; Gualandi, A.; Mengozzi, L.; Fiorani, A.; Iurlo, M.; Marcaccio, M.; Gutierrez, R.; Rapino, S.; Calvaresi, M.; Zerbetto, F.; Cuniberti, G.; Cozzi, P. G.; Paolucci, F.; Samorì, P.

    2016-07-01

    Tuning the intermolecular interactions among suitably designed molecules forming highly ordered self-assembled monolayers is a viable approach to control their organization at the supramolecular level. Such a tuning is particularly important when applied to sophisticated molecules combining functional units which possess specific electronic properties, such as electron/energy transfer, in order to develop multifunctional systems. Here we have synthesized two tetraferrocene-porphyrin derivatives that by design can selectively self-assemble at the graphite/liquid interface into either face-on or edge-on monolayer-thick architectures. The former supramolecular arrangement consists of two-dimensional planar networks based on hydrogen bonding among adjacent molecules whereas the latter relies on columnar assembly generated through intermolecular van der Waals interactions. Scanning Tunneling Microscopy (STM) at the solid-liquid interface has been corroborated by cyclic voltammetry measurements and assessed by theoretical calculations to gain multiscale insight into the arrangement of the molecule with respect to the basal plane of the surface. The STM analysis allowed the visualization of these assemblies with a sub-nanometer resolution, and cyclic voltammetry measurements provided direct evidence of the interactions of porphyrin and ferrocene with the graphite surface and offered also insight into the dynamics within the face-on and edge-on assemblies. The experimental findings were supported by theoretical calculations to shed light on the electronic and other physical properties of both assemblies. The capability to engineer the functional nanopatterns through self-assembly of porphyrins containing ferrocene units is a key step toward the bottom-up construction of multifunctional molecular nanostructures and nanodevices.

  9. Molecular design driving tetraporphyrin self-assembly on graphite: a joint STM, electrochemical and computational study.

    Science.gov (United States)

    El Garah, M; Santana Bonilla, A; Ciesielski, A; Gualandi, A; Mengozzi, L; Fiorani, A; Iurlo, M; Marcaccio, M; Gutierrez, R; Rapino, S; Calvaresi, M; Zerbetto, F; Cuniberti, G; Cozzi, P G; Paolucci, F; Samorì, P

    2016-07-14

    Tuning the intermolecular interactions among suitably designed molecules forming highly ordered self-assembled monolayers is a viable approach to control their organization at the supramolecular level. Such a tuning is particularly important when applied to sophisticated molecules combining functional units which possess specific electronic properties, such as electron/energy transfer, in order to develop multifunctional systems. Here we have synthesized two tetraferrocene-porphyrin derivatives that by design can selectively self-assemble at the graphite/liquid interface into either face-on or edge-on monolayer-thick architectures. The former supramolecular arrangement consists of two-dimensional planar networks based on hydrogen bonding among adjacent molecules whereas the latter relies on columnar assembly generated through intermolecular van der Waals interactions. Scanning Tunneling Microscopy (STM) at the solid-liquid interface has been corroborated by cyclic voltammetry measurements and assessed by theoretical calculations to gain multiscale insight into the arrangement of the molecule with respect to the basal plane of the surface. The STM analysis allowed the visualization of these assemblies with a sub-nanometer resolution, and cyclic voltammetry measurements provided direct evidence of the interactions of porphyrin and ferrocene with the graphite surface and offered also insight into the dynamics within the face-on and edge-on assemblies. The experimental findings were supported by theoretical calculations to shed light on the electronic and other physical properties of both assemblies. The capability to engineer the functional nanopatterns through self-assembly of porphyrins containing ferrocene units is a key step toward the bottom-up construction of multifunctional molecular nanostructures and nanodevices. PMID:27376633

  10. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
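
    Two of the metrics named above are straightforward to compute. The Python sketch below evaluates the centered root mean square error and the linear trend error for a synthetic pair of series standing in for a homogenized contribution and the true homogeneous data.

      # Minimal sketch of two of the performance metrics named above, applied
      # to a homogenized series vs. the true homogeneous series (synthetic).
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(600)                              # 50 years of monthly data
      truth = 0.001 * t + rng.normal(0, 0.5, t.size)  # true homogeneous series
      homog = truth + rng.normal(0, 0.2, t.size)      # a contribution's output

      # (i) centered root mean square error (mean removed from both series)
      crmse = np.sqrt(np.mean(((homog - homog.mean()) - (truth - truth.mean()))**2))

      # (ii) error in the linear trend estimate (slope per time step)
      trend_err = np.polyfit(t, homog, 1)[0] - np.polyfit(t, truth, 1)[0]
      print(f"CRMSE = {crmse:.3f}, trend error = {trend_err:.2e} per month")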

  11. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  12. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  13. Process-directed self-assembly of block copolymers: a computer simulation study

    International Nuclear Information System (INIS)

    The free-energy landscape of self-assembling block copolymer systems is characterized by a multitude of metastable minima and concomitant protracted relaxation times of the morphology. Tailoring rapid changes (quench) of thermodynamic conditions, one can reproducibly trap the ensuing kinetics of self-assembly in a specific metastable state. To this end, it is necessary to (1) control the generation of well-defined, highly unstable states and (2) design the unstable state such that the ensuing spontaneous kinetics of structure formation reaches the desired metastable morphology. This process-directed self-assembly provides an alternative to fine-tuning molecular architecture by synthesis or blending, for instance, in order to fabricate complex network structures. Comparing our simulation results to recently developed free-energy techniques, we highlight the importance of non-equilibrium molecular conformations in the starting state and motivate the significance of the local conservation of density. (paper)

  14. Exploring Programmable Self-Assembly in Non-DNA based Molecular Computing

    CERN Document Server

    Terrazas, German; Krasnogor, Natalio

    2013-01-01

    Self-assembly is a phenomenon observed in nature at all scales, where autonomous entities build complex structures without external influence or a centralised master plan. Modelling such entities and programming correct interactions among them is crucial for controlling the manufacture of desired complex structures at the molecular and supramolecular scale. This work focuses on a programmability model for non-DNA-based molecules and complex behaviour analysis of their self-assembled conformations. In particular, we look into the modelling, programming and simulation of porphyrin molecule self-assembly, and apply Kolmogorov-complexity-based techniques to classify and assess the simulation results in terms of information content. The analysis focuses on phase transitions, clustering, variability and parameter discovery, which as a whole pave the way to the notion of complex systems programmability.
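
    Kolmogorov complexity itself is uncomputable, so such analyses conventionally substitute a real compressor. One standard instrument is the normalized compression distance (NCD); the sketch below applies it to two invented configuration strings, and illustrates the general technique rather than the authors' exact procedure.

      # Minimal sketch of a Kolmogorov-complexity-inspired comparison of two
      # self-assembled configurations: the normalized compression distance
      # (NCD), with zlib standing in for the uncomputable K(x).
      import random, zlib

      def C(b: bytes) -> int:
          return len(zlib.compress(b, 9))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy, cxy = C(x), C(y), C(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      random.seed(0)
      ordered = b"AB" * 500                      # highly regular configuration
      disordered = bytes(random.choice(b"AB") for _ in range(1000))
      print("NCD(ordered, ordered):    ", round(ncd(ordered, ordered), 3))
      print("NCD(ordered, disordered): ", round(ncd(ordered, disordered), 3))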

  15. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  16. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  17. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
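
    The verification logic is simple to illustrate: compute a quantity for which a closed form exists and compare. The sketch below uses a one-group diffusion-theory expression for a bare slab - a simpler analogue of the transport benchmarks in the suite - with invented cross sections; only the comparison pattern, not the data, reflects the suite itself.

      # Minimal sketch of analytic-benchmark verification: for a bare slab in
      # one-group diffusion theory, k_eff has the closed form
      #   k_eff = nu*Sigma_f / (Sigma_a + D*B^2),  B = pi / (a + 2d),
      # so a code's Monte Carlo estimate can be compared against it.
      # The cross sections below are illustrative, not from the suite.
      import math

      nu_sig_f = 0.157     # nu * Sigma_f [1/cm]          (assumed)
      sig_a    = 0.1532    # Sigma_a [1/cm]               (assumed)
      D        = 0.9       # diffusion coefficient [cm]   (assumed)
      a        = 60.0      # slab thickness [cm]
      d        = 2.13 * D  # extrapolation distance [cm]

      B = math.pi / (a + 2 * d)                 # geometric buckling (slab)
      k_analytic = nu_sig_f / (sig_a + D * B * B)

      k_code = 1.0102                           # e.g. a Monte Carlo estimate
      print(f"k_analytic = {k_analytic:.5f}")
      print(f"difference = {(k_code - k_analytic) * 1e5:.0f} pcm")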

  18. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
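
    On the nonparametric side, such a tool typically solves one small linear program per unit. The sketch below computes input-oriented DEA (CCR) efficiencies for five invented units with two inputs and one output; the data and model choice are illustrative assumptions.

      # Minimal sketch of the nonparametric (DEA) side of such a benchmarking
      # tool: input-oriented CCR efficiency of each unit via linear programming.
      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[2.0, 3.0, 6.0, 4.0, 5.0],    # input 1 per unit
                    [3.0, 1.0, 4.0, 2.0, 4.0]])   # input 2 per unit
      Y = np.array([[1.0, 1.0, 2.0, 1.5, 1.8]])   # single output per unit

      m, n = X.shape
      s = Y.shape[0]

      for o in range(n):
          # variables: [theta, lambda_1 .. lambda_n]; minimize theta
          c = np.r_[1.0, np.zeros(n)]
          # inputs:  X @ lam <= theta * X[:, o]  ->  -X[:,o]*theta + X@lam <= 0
          A_in = np.hstack([-X[:, [o]], X])
          # outputs: Y @ lam >= Y[:, o]          ->  -Y@lam <= -Y[:,o]
          A_out = np.hstack([np.zeros((s, 1)), -Y])
          res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.r_[np.zeros(m), -Y[:, o]],
                        bounds=[(None, None)] + [(0, None)] * n)
          print(f"unit {o}: efficiency = {res.x[0]:.3f}")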

  19. Manual for the VO (secondary education) benchmark

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    25 November 2008, by IPSE Studies. By J.L.T. Blank. Guide for reading the i

  20. Benchmarking of the vocational education programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. It is conceptually complicated to benchmark the vocational schools. The schools offer a wide range of different programmes. This makes it difficult...

  1. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria;

    2009-01-01

    and plays an important role in processing the information generated by these methods. Here, we provide a comprehensive overview of the current publicly available sequence assembly programs. We describe the basic principles of computational assembly along with the main concerns, such as repetitive sequences...

  2. Structural, nanomechanical, and computational characterization of D,L-cyclic peptide assemblies.

    Science.gov (United States)

    Rubin, Daniel J; Amini, Shahrouz; Zhou, Feng; Su, Haibin; Miserez, Ali; Joshi, Neel S

    2015-03-24

    The rigid geometry and tunable chemistry of D,L-cyclic peptides make them an intriguing building block for the rational design of nano- and microscale hierarchically structured materials. Herein, we utilize a combination of electron microscopy, nanomechanical characterization including depth-sensing-based bending experiments, and molecular modeling methods to obtain the structural and mechanical characteristics of cyclo-[(Gln-D-Leu)4] (QL4) assemblies. QL4 monomers assemble to form large, rod-like structures with diameters up to 2 μm and lengths of tens to hundreds of micrometers. Image analysis suggests that large assemblies are hierarchically organized from individual tubes that undergo bundling to form larger structures. With an elastic modulus of 11.3 ± 3.3 GPa, hardness of 387 ± 136 MPa and strength (bending) of 98 ± 19 MPa, the peptide crystals are among the most robust known proteinaceous micro- and nanofibers. The measured bending modulus of micron-scale fibrils (10.5 ± 0.9 GPa) is in the same range as the Young's modulus measured by nanoindentation, indicating that the robust nanoscale network from which the assembly derives its properties is preserved at larger length scales. Materials selection charts are used to demonstrate the particularly robust properties of QL4, including its specific flexural modulus, in which it outperforms a number of biological proteinaceous and nonproteinaceous materials including collagen and enamel. The facile synthesis, high modulus, and low density of QL4 fibers indicate that they may find utility as a filler material in a variety of high-efficiency, biocompatible composite materials. PMID:25757883
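
    The bending modulus quoted above is the kind of quantity recovered from a beam-mechanics reduction of a depth-sensing bending test. The sketch below assumes a doubly-clamped solid rod loaded at midspan (the actual test geometry may differ) with invented load and dimensions chosen to land near the reported value.

      # Minimal sketch of extracting a bending modulus from a bending test on
      # a suspended fibre, treated as a fixed-fixed beam loaded at midspan:
      #   E_b = F * L^3 / (192 * I * delta),  I = pi * d^4 / 64 for a solid rod.
      # All input values are invented for illustration.
      import math

      F     = 5.0e-6      # applied load [N]
      delta = 50.0e-9     # midspan deflection [m]
      L     = 10.0e-6     # suspended length [m]
      d     = 1.0e-6      # fibre diameter [m]

      I = math.pi * d**4 / 64.0            # second moment of area, solid circle
      E_b = F * L**3 / (192.0 * I * delta)
      print(f"E_b = {E_b / 1e9:.1f} GPa")  # ~10.6 GPa for these inputs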

  3. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    International Nuclear Information System (INIS)

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 1300°F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam by the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal and Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion (''bake-out'') release mechanisms. Release of the volatile species iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled
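
    The oxidation kinetics described above have a compact closed form: a parabolic rate law with an Arrhenius rate constant. The sketch below evaluates oxide growth for a few temperatures using generic placeholder constants, not TRUMP-BD's actual correlations.

      # Minimal sketch of parabolic, Arrhenius-type metal-water oxidation
      # kinetics: delta^2 = K(T) * t with K = A * exp(-Q / (R*T)).
      # The rate constants are generic placeholders.
      import math

      A = 1.0e-5       # pre-exponential factor [m^2/s]   (assumed)
      Q = 1.9e5        # activation energy [J/mol]        (assumed)
      R = 8.314        # gas constant [J/(mol K)]

      def oxide_thickness(T_kelvin: float, t_seconds: float) -> float:
          """Oxide layer thickness after t seconds at constant temperature T."""
          K = A * math.exp(-Q / (R * T_kelvin))
          return math.sqrt(K * t_seconds)

      for T in (1100.0, 1300.0, 1500.0):       # temperatures in K
          print(f"T = {T:.0f} K: delta = {1e6 * oxide_thickness(T, 600.0):.1f} um")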

  4. Computed isotopic inventory and dose assessment for SRS fuel and target assemblies

    International Nuclear Information System (INIS)

    Past studies have identified and evaluated important radionuclide contributors to dose from reprocessed spent fuel sent to waste for Mark 16B and 22 fuel assemblies and for Mark 31A and 31B target assemblies. Fission-product distributions after 5- and 15-year decay times were calculated for a ''representative'' set of irradiation conditions (i.e., reactor power, irradiation time, and exposure) for each type of assembly. The numerical calculations were performed using the SHIELD/GLASS system of codes. The sludge and supernate source terms for dose were studied separately, with the significant radionuclide contributors for each identified and evaluated. The dose analysis considered both inhalation and ingestion pathways: the inhalation pathway was analyzed for both evaporative and volatile releases. Analysis of evaporative releases utilized release fractions for the individual radionuclides as defined in ICRP-30, per DOE guidance. A release fraction of unity was assumed for each radionuclide under volatile-type releases, which would encompass internally initiated events (e.g., fires, explosions), process-initiated events, and externally initiated events. Radionuclides which contributed at least 1% to the overall dose were designated as significant contributors. The present analysis extends and complements the past analyses by considering a broader spectrum of fuel types and a wider range of irradiation conditions. The results provide for a more thorough understanding of the influences of fuel composition and irradiation parameters on fission product distributions (at decay times of 2 years or more). Additionally, the present work allows for a more comprehensive evaluation of radionuclide contributions to dose and an estimation of the variability in the radionuclide composition of the dose source term resulting from spent fuel sent to waste, encompassing a broad spectrum of fuel compositions and irradiation conditions

  5. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
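
    The first-tier screening reduces to a simple comparison against the benchmark table, as the sketch below illustrates with invented concentrations and benchmark values.

      # Minimal sketch of the first-tier screening described above: compare
      # measured media concentrations to NOAEL-based benchmarks and retain
      # exceedances as contaminants of potential concern (COPCs). Values are
      # invented placeholders, not the report's benchmarks.
      benchmark_mg_per_L = {"cadmium": 0.01, "zinc": 0.5, "mercury": 0.002}
      measured_mg_per_L  = {"cadmium": 0.004, "zinc": 0.9, "mercury": 0.003}

      copcs = [chem for chem, conc in measured_mg_per_L.items()
               if conc > benchmark_mg_per_L[chem]]
      print("retained as COPCs:", copcs)   # -> ['zinc', 'mercury']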

  6. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology, and the atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. The verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  7. Computer simulation of thermal-hydraulics of MNSR fuel-channel assembly using LabView

    International Nuclear Information System (INIS)

    A LabVIEW-based thermal-hydraulics simulator has been developed to demonstrate the temperature profile of the coolant flow in the reactor core during normal operation. The simulator could equally be used for any transient behaviour of the reactor. Heat generation, heat transfer and the associated temperature profiles in the fuel-channel elements, viz. the coolant, cladding and fuel, were studied, and the corresponding analytical temperature equations in the axial and radial directions for the coolant, the outer surface of the cladding, the fuel surface and the fuel centre were obtained for the simulation in LabVIEW. Tables of values for the equations were constructed with MATLAB and Excel. Plots of the equations in LabVIEW were verified and validated against the graphs drawn by MATLAB. In this thesis, an analysis of the effects of a coolant inlet temperature of 24.5°C and an exit temperature of 70.0°C on the temperature distribution in the fuel-channel elements of the cylindrical reactor core was carried out. Other parameters, including the total fuel channel power, mass flow rate and convective heat transfer coefficient, were varied to study their effects on the temperature profile. The analytical temperature equations in the fuel channel elements of the reactor core were obtained. MATLAB and Excel were used to construct data for the equations, and the MATLAB plots were used to benchmark the LabVIEW simulation. Excellent agreement was obtained between the MATLAB plots and the LabVIEW simulation results, with an error margin of 0.001. The analysis of the results, comparing gradients of inlet temperature, total reactor channel power and mass flow, indicated that the inlet temperature gradient is one of the key parameters determining the temperature profile in the MNSR core. (au)
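
    The axial coolant equation underlying such a profile follows from a steady single-channel energy balance, mdot*cp*dT/dz = q'(z). The sketch below evaluates it in closed form for a cosine axial power shape; the channel data are assumptions chosen only so the profile spans the 24.5°C to 70.0°C range quoted above.

      # Minimal sketch of the analytical axial coolant temperature profile for
      # a single channel with a cosine axial power shape. Channel data assumed.
      import numpy as np

      L       = 0.23        # active fuel length [m]         (assumed)
      T_in    = 24.5        # coolant inlet temperature [C]
      mdot_cp = 10.0        # mdot * cp [W/K]                (assumed)
      q0      = 455.0 * np.pi / (2.0 * L)   # peak linear power [W/m] for 455 W

      z = np.linspace(0.0, L, 11)
      # closed-form integral of q0*cos(pi*(z'-L/2)/L) from 0 to z
      T = T_in + (q0 * L / (np.pi * mdot_cp)) * (np.sin(np.pi * (z - L / 2) / L) + 1.0)

      for zi, Ti in zip(z, T):
          print(f"z = {zi:.3f} m  T = {Ti:.1f} C")   # rises from 24.5 to 70.0 C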

  8. Solution of the international benchmark with trip of one of four reactor coolant pumps for VVER-1000 reactor plants using the computer code package KORSAR/GP and complex reactor nodalization

    International Nuclear Information System (INIS)

    The international OECD/NEA test benchmark for the trip of one of four operating reactor coolant pumps (RCPs) was solved using the thermohydraulic code package KORSAR/GP, which applies 1D calculational units. The benchmark was based on experimental results obtained during the commissioning of Kalinin NPP, Unit 3. During the experiments a large amount of experimental data was obtained, which enabled us to supplement the validation of the computer codes and nodalizations of 1D thermohydraulic codes. In the given transient there was a difference between the coolant temperatures in different loops, which made it necessary to simulate numerically the coolant mixing in the reactor plenums. To solve this problem, a complex branched nodalization (i.e. a set of code calculational units) was used. The analysis results matched the experimental data closely. It was thus shown that the nodalization developed with KORSAR/GP, and the code itself, can be applied to the simulation of VVER-1000 transients with one or more RCPs in operation and a sharp difference between the coolant temperatures in the loops. (author)

  9. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    International Nuclear Information System (INIS)

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424–7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20–30%) extent of Hartree–Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO–LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. (paper)
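
    The figure of merit in such a benchmark is usually a per-method error statistic against the experimental absorption maxima. The sketch below ranks a few methods by mean absolute error using invented numbers; it shows the bookkeeping, not the paper's data.

      # Minimal sketch of the statistic behind such a functional benchmark:
      # mean absolute error of computed excitation energies vs. experimental
      # absorption maxima, per method. All numbers are invented placeholders.
      import numpy as np

      exp_eV = np.array([2.95, 3.10, 2.78, 3.25])          # experimental maxima
      calc_eV = {
          "MPW1K":     np.array([2.99, 3.14, 2.83, 3.20]),
          "CAM-B3LYP": np.array([3.05, 3.21, 2.90, 3.33]),
          "B3LYP":     np.array([2.70, 2.86, 2.55, 2.98]),
      }

      for method, e in sorted(calc_eV.items(),
                              key=lambda kv: np.mean(np.abs(kv[1] - exp_eV))):
          mae = np.mean(np.abs(e - exp_eV))
          print(f"{method:10s} MAE = {mae:.3f} eV")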

  10. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    Science.gov (United States)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. PMID:26808717

  11. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  12. Computer-assisted assembly and correction simulation for complex axis deviations using the Ilizarov fixator.

    Science.gov (United States)

    Kochs, A

    1995-01-01

    In axis correction with the Ilizarov ring fixator, the correction results are often insufficient, or there are unexpected translation effects, which can be causally attributed to wrong preoperative planning or inaccurate assembly. To avoid such results, a computerised simulation was developed. By digitising the bone outlines traced from X-radiographs, together with an additional scale, preoperative correction planning can be performed and simulated with standard software. This can be used while constructing the apparatus and positioning the joints. In addition, the translation effect on the bone fragments can be simulated by arbitrarily choosing the pivot of the correction. By transferring the X-radiograph true to scale, one can compare the ring planes before and after correction. It is possible to estimate the necessary distraction as well as compression, and thus the postoperative distraction mode. Using computerised planning, the apparatus construction can be optimised and complications caused by misplanning avoided. Not only the inexperienced user can benefit from this aid. PMID:7577222
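
    The translation effect mentioned above is pure plane geometry: rotating a fragment by the same angle about two different pivots differs by a net translation. The sketch below quantifies this for invented coordinates.

      # Minimal sketch of the translation effect described above: rotating a
      # bone fragment by the same angle about two different pivots. A pivot
      # off the deformity apex adds a net translation of the fragment.
      # Coordinates are arbitrary planar values in millimetres.
      import numpy as np

      def rotate_about(points, pivot, deg):
          a = np.radians(deg)
          R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
          return (points - pivot) @ R.T + pivot

      fragment = np.array([[0.0, 0.0], [0.0, -80.0]])   # fragment axis end points
      apex = np.array([0.0, 0.0])                       # deformity apex
      off  = np.array([30.0, 0.0])                      # mis-chosen pivot

      corr_apex = rotate_about(fragment, apex, 15.0)
      corr_off  = rotate_about(fragment, off, 15.0)
      # translation of the proximal end point induced by the pivot choice:
      print("extra translation [mm]:", corr_off[0] - corr_apex[0])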

  13. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained.

  14. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  15. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
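
    For orientation, "level-set expansion" is essentially breadth-first search advanced one frontier (level) at a time. The toy in-memory version below only illustrates the access pattern; the actual benchmark performs it out of core on scale-free graphs far larger than RAM.

    ```python
    # Toy in-memory level-set (BFS frontier) expansion; the real benchmark
    # does this out of core on scale-free graphs far larger than memory.
    from collections import defaultdict

    def level_sets(adj, source):
        visited = {source}
        frontier = {source}
        levels = [frontier]
        while frontier:
            nxt = {v for u in frontier for v in adj[u]} - visited
            if not nxt:
                break
            visited |= nxt
            levels.append(nxt)
            frontier = nxt
        return levels

    adj = defaultdict(set)
    for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
        adj[u].add(v); adj[v].add(u)
    print([sorted(s) for s in level_sets(adj, 0)])   # [[0], [1, 2], [3], [4]]
    ```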

  16. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art

  17. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  18. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  19. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
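
    Pynamic emulates this behaviour with generated shared libraries; as a rough, pure-Python illustration of the same stress pattern, one can generate many small modules and time their import (the module count and names below are arbitrary):

    ```python
    # Rough illustration of dynamic-loading stress: generate many small
    # modules and time their import. Pynamic itself uses actual compiled
    # shared libraries; module count and names here are arbitrary.
    import importlib, os, sys, tempfile, time

    N = 500
    tmp = tempfile.mkdtemp()
    sys.path.insert(0, tmp)
    for i in range(N):
        with open(os.path.join(tmp, f"pyn_mod_{i}.py"), "w") as f:
            f.write(f"def entry():\n    return {i}\n")

    t0 = time.perf_counter()
    mods = [importlib.import_module(f"pyn_mod_{i}") for i in range(N)]
    t1 = time.perf_counter()
    print(f"imported {N} modules in {t1 - t0:.3f} s")
    ```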

  20. ABACUS, a direct method for protein NMR structure computation via assembly of fragments.

    Science.gov (United States)

    Grishaev, A; Steren, C A; Wu, B; Pineda-Lucena, A; Arrowsmith, C; Llinás, M

    2005-10-01

    The ABACUS algorithm obtains the protein NMR structure from unassigned NOESY distance restraints. ABACUS works as an integrated approach that uses the complete set of available NMR experimental information in parallel and yields spin system typing, NOE spin pair identities, sequence specific resonance assignments, and protein structure, all at once. The protocol starts from unassigned molecular fragments (including single amino acid spin systems) derived from triple-resonance 1H/13C/15N NMR experiments. Identifications of connected spin systems and NOEs precede the full sequence specific resonance assignments. The latter are obtained iteratively via Monte Carlo-Metropolis and/or probabilistic sequence selections, molecular dynamics structure computation and BACUS filtering (A. Grishaev and M. Llinás, J Biomol NMR 2004;28:1-10). ABACUS starts from scratch, without the requirement of an initial approximate structure, and improves iteratively the NOE identities in a self-consistent fashion. The procedure was run as a blind test on data recorded on mth1743, a 70-amino acid genomic protein from M. thermoautotrophicum. It converges to a structure in ca. 15 cycles of computation on a 3-GHz processor PC. The calculated structures are very similar to the ones obtained via conventional methods (1.22 Å backbone RMSD). The success of ABACUS on mth1743 further validates BACUS as a NOESY identification protocol.

  1. Parallel molecular computation of modular-multiplication with two same inputs over finite field GF(2^n) using self-assembly of DNA tiles.

    Science.gov (United States)

    Li, Yongnan; Xiao, Limin; Ruan, Li

    2014-06-01

    Two major advantages of DNA computing - huge memory capacity and high parallelism - are being explored for large-scale parallel computing, mass data storage and cryptography. The tile assembly model is a highly distributed parallel model of DNA computing. The finite field GF(2^n) is one of the most commonly used mathematical structures for constructing public-key cryptosystems. It is still an open question how to implement the basic operations over the finite field GF(2^n) using DNA tiles. This paper proposes how the parallel tile assembly process can be used for computing the modular square, i.e. modular multiplication with two equal inputs, over the finite field GF(2^n). This system obtains the final result in fewer steps than another molecular computing system designed in our previous study, because squaring and reduction are executed simultaneously, whereas the previous system computes the reduction after calculating the square. Rigorous theoretical proofs are described, and a specific computing instance is given after defining the basic tiles and the assembly rules. The time complexity of this system is 3n-1 and the space complexity is 2n^2.
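
    In conventional software terms, the operation realized by the tile system is carry-less polynomial multiplication over GF(2) followed by reduction modulo an irreducible degree-n polynomial, with the modular square as the equal-input special case. A bit-level sketch (the AES field GF(2^8) and its polynomial are used purely as an example; this says nothing about the DNA tile encoding itself):

    ```python
    # Modular multiplication over GF(2^n): carry-less multiply, then reduce
    # modulo an irreducible polynomial of degree n (given with its x^n term).
    def gf2n_mul(a, b, poly, n):
        r = 0
        while b:                         # carry-less (XOR) multiplication
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        for i in range(r.bit_length() - 1, n - 1, -1):
            if (r >> i) & 1:             # clear bit i using poly << (i - n)
                r ^= poly << (i - n)
        return r

    def gf2n_square(a, poly, n):         # modular square = equal inputs
        return gf2n_mul(a, a, poly, n)

    # Example in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1:
    print(hex(gf2n_mul(0x57, 0x83, 0x11B, 8)))   # classic example -> 0xc1
    ```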

  2. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  3. Benchmarking for plant maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))

    2010-05-15

    The product of the project, e-Famemain, is a new kind of benchmarking tool, based on many years of research within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically: when data are entered into the system, it automatically carries out multivariate statistical analysis and many other statistical operations. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. These causalities should also be taken into account when utilising benchmarking or when forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: a data input section, a positioning section, a locating-differences section, a best practices and planning section, and finally statistical tables. (orig.)

  4. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  5. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material--reports, instructions, flow charts, listings of data, and other useful material--and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package

  6. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 4. Methods and Results for the MSLB NEA Benchmark Using SIMTRAN and RELAP-5

    International Nuclear Information System (INIS)

    The neutronic constants are then nearly implicitly calculated in the next time step as a function of the extrapolated T-H variables (water density and water and fuel temperatures), where the limited half-step extrapolation prevents significant oscillations, allowing for larger time steps. For the MSLB Benchmark, the SIMTRAN code was extended to deal with axial subdivision of cross-section sets, including varying and moving boundaries, to allow for control rod continuous movement in axially subdivided zones/compositions. The synthetic two-group nodal discontinuity factors were generated by 2-D fine-mesh diffusion calculations of the different (15) core planes, with un-rodded and rodded configurations and for the initial, mid-transient, and final quasi-steady-state conditions, with axial buckling and local T-H conditions per node (quarter of assembly), obtained by iterating the 3-D and 2-D solutions that converge in two or three iterations. For the NEA/OECD MSLB benchmark, we have contributed results for exercise 2, the guided core transient analysis, using our full SIMTRAN code (with COBRA for the 3-D core T-H transient solution with given core inlet boundary conditions along the transient), and for exercise 3, the full system transient, using our reduced SIMTRAN code (without COBRA) coupled with RELAP-5, using the same code version and input deck for RELAP-5 as supplied by the Purdue-NRC group, which we fully acknowledge. This system model was validated by them for exercise 1 and for exercises 2 and 3 using their PARCS 3-D neutronic code. Our results for the steady states and the transients proposed for exercise 2 of the MSLB Benchmark, including a best-estimate scenario, with the physical control rod absorption XS sets, and a return-to-power scenario, with reduced control rod absorption XS sets, show small deviations from the mean results of other participants, especially for core average parameters, as will be fully documented in the final reports of the benchmark

  7. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  8. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 5. Analyses of the OECD MSLB Benchmark with the Codes DYN3D and DYN3D/ATHLET

    International Nuclear Information System (INIS)

    The code DYN3D coupled with ATHLET was used for the analysis of the OECD Main-Steam-Line-Break (MSLB) Benchmark, which is based on real plant design and operational data of the TMI-1 pressurized water reactor (PWR). Like the codes RELAP or TRAC, ATHLET is a thermal-hydraulic system code with point or one-dimensional neutron kinetic models. ATHLET, developed by the Gesellschaft für Anlagen- und Reaktorsicherheit, is widely used in Germany for safety analyses of nuclear power plants. DYN3D consists of three-dimensional nodal kinetic models and a thermal-hydraulic part with parallel coolant channels of the reactor core. DYN3D was coupled with ATHLET for analyzing more complex transients with interactions between coolant flow conditions and core behavior. It can be applied to the whole spectrum of operational transients and accidents, from small and intermediate leaks to large breaks of coolant loops or steam lines at PWRs and boiling water reactors. The so-called external coupling is used for the benchmark, where the thermal hydraulics is split into two parts: DYN3D describes the thermal hydraulics of the core, while ATHLET models the coolant system. Three exercises of the benchmark were simulated: Exercise 1, point kinetics plant simulation (ATHLET); Exercise 2, coupled three-dimensional neutronics/core thermal-hydraulics evaluation of the core response for given core thermal-hydraulic boundary conditions (DYN3D); and Exercise 3, best-estimate coupled core-plant transient analysis (DYN3D/ATHLET). Considering the best-estimate cases (scenarios 1 of exercises 2 and 3), the reactor does not reach criticality after the reactor trip. Defining more serious tests for the codes, the efficiency of the control rods was decreased (scenarios 2 of exercises 2 and 3) to obtain recriticality during the transient. Besides the standard simulation given by the specification, modifications are introduced for sensitivity studies. The results presented here show (a) the influence of a reduced

  9. Xenopus laevis: an ideal experimental model for studying the developmental dynamics of neural network assembly and sensory-motor computations.

    Science.gov (United States)

    Straka, Hans; Simmers, John

    2012-04-01

    The amphibian Xenopus laevis represents a highly amenable model system for exploring the ontogeny of central neural networks, the functional establishment of sensory-motor transformations, and the generation of effective motor commands for complex behaviors. Specifically, the ability to employ a range of semi-intact and isolated preparations for in vitro morphophysiological experimentation has provided new insights into the developmental and integrative processes associated with the generation of locomotory behavior during changing life styles. In vitro electrophysiological studies have begun to explore the functional assembly, disassembly and dynamic plasticity of spinal pattern generating circuits as Xenopus undergoes the developmental switch from larval tail-based swimming to adult limb-based locomotion. Major advances have also been made in understanding the developmental onset of multisensory signal processing for reactive gaze and posture stabilizing reflexes during self-motion. Additionally, recent evidence from semi-intact animal and isolated CNS experiments has provided compelling evidence that in Xenopus tadpoles, predictive feed-forward signals from the spinal locomotor pattern generator are engaged in minimizing visual disturbances during tail-based swimming. This new concept questions the traditional view of retinal image stabilization that in vertebrates has been exclusively attributed to sensory-motor transformations of body/head motion-detecting signals. Moreover, changes in visuomotor demands associated with the developmental transition in propulsive strategy from tail- to limb-based locomotion during metamorphosis presumably necessitates corresponding adaptive alterations in the intrinsic spinoextraocular coupling mechanism. Consequently, Xenopus provides a unique opportunity to address basic questions on the developmental dynamics of neural network assembly and sensory-motor computations for vertebrate motor behavior in general. PMID:21834082

  11. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.

    2012-10-29

    The preference of experimentally realistic, 4-nm faceted nanocrystals (NCs), emulating Pb chalcogenide quantum dots, to spontaneously choose a crystal habit for NC superlattices (Face Centered Cubic (FCC) vs. Body Centered Cubic (BCC)) is investigated using molecular simulation approaches. Molecular dynamics simulations, using united atom force fields, are conducted to simulate systems comprising cube-octahedral-shaped NCs covered by alkyl ligands, in the absence and presence of experimentally used solvents, toluene and hexane. System sizes on the 400,000-500,000-atom scale, followed for nanoseconds, are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice and its preferred symmetry, as we vary the ligand length of the chains, from 9 to 24 CH2 groups, and the choice of solvent. We find that hexane and toluene are "good" solvents for the NCs, which penetrate the ligand corona all the way to the NC surfaces. We determine the free energy difference between FCC and BCC NC superlattice symmetries to establish the system's preference for either geometry, as the ratio of the length of the ligand to the diameter of the NC is varied. We explain these preferences in terms of different mechanisms in play, whose relative strength determines the overall choice of geometry. © 2012 Wiley Periodicals, Inc.

  12. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  13. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  14. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  15. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  16. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  17. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance, i.e. how well the firm performs in its actual market environment, given the basic characteristics of the firm and its markets that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine in turn what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method for forecasting and benchmarking it.

  18. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  19. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and / or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  20. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms.

    Science.gov (United States)

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2015-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and "neuromorphic algorithms" are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for

  1. Comparing neuromorphic solutions in action: implementing a bio-inspired solution to a benchmark classification task on three parallel-computing platforms

    Directory of Open Access Journals (Sweden)

    Alan Diamond

    2016-01-01

    Full Text Available Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and neuromorphic algorithms are being developed. As they are maturing towards deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analogue Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication
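
    One concrete piece of the host-side work highlighted above, encoding suitable input spiking data, can be illustrated with generic Poisson rate coding, in which each pixel intensity sets the per-step firing probability of an input neuron (a textbook scheme, not necessarily the exact encoding used on these platforms):

    ```python
    # Sketch of Poisson rate coding: turn pixel intensities into spike trains.
    # Generic illustration; not the specific encoding of Spikey/SpiNNaker/GeNN.
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_encode(pixels, duration_ms=100, dt_ms=1.0, max_rate_hz=100.0):
        """pixels in [0, 1] -> boolean spike raster (n_pixels, n_steps)."""
        n_steps = int(duration_ms / dt_ms)
        rates = np.asarray(pixels) * max_rate_hz         # Hz per input
        p_spike = rates * dt_ms / 1000.0                 # per-step probability
        return rng.random((len(rates), n_steps)) < p_spike[:, None]

    digit = rng.random(784)          # stand-in for a 28x28 handwritten digit
    raster = poisson_encode(digit)
    print(raster.shape, raster.mean())   # fraction of bins holding a spike
    ```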

  2. Implementation of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  3. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies is then discussed, after which...

  4. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  5. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....

  6. Analysis of VRML-Based Computer Assembly Experiments

    Institute of Scientific and Technical Information of China (English)

    叶龙妹

    2011-01-01

    VRML, short for Virtual Reality Modeling Language, is a three-dimensional scene modeling language for building simulated real-world scenes as well as fictional ones. In computer assembly laboratory courses, shortages of equipment and the rapid turnover of computer components often make hands-on operation difficult and weaken the experimental results. This paper therefore applies virtual reality technology to simulate the various hardware devices used in computer assembly experiments and to build a virtual computer assembly laboratory. The paper gives an overview of the VRML-based virtual computer assembly experiment, analyses the process of the virtual assembly experiment, and discusses its effects and role.

  7. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    Energy Technology Data Exchange (ETDEWEB)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for 238U(n,f), 238U(n,γ), 239Pu(n,f), and 239Pu(ν). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined.
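
    The propagation step at the heart of such sensitivity/uncertainty analyses is the "sandwich rule": if s holds the relative sensitivities of a performance parameter to group-wise nuclear data and C is the relative covariance matrix of those data, the induced relative variance is s^T C s. A toy sketch with invented numbers:

    ```python
    # Sandwich rule for nuclear-data-induced uncertainty: var_rel = s^T C s.
    # Sensitivities and covariances below are invented toy numbers.
    import numpy as np

    s = np.array([0.30, 0.15, -0.05])        # relative sensitivities, 3 groups

    std = np.array([0.04, 0.06, 0.10])       # 4%, 6%, 10% data uncertainties
    corr = np.array([[1.0, 0.5, 0.0],
                     [0.5, 1.0, 0.3],
                     [0.0, 0.3, 1.0]])       # inter-group correlations
    C = np.outer(std, std) * corr            # relative covariance matrix

    var_rel = s @ C @ s
    print(f"induced uncertainty: {np.sqrt(var_rel):.2%}")
    ```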

  8. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of this progress has been made possible by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...
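
    Face detection benchmarks of this kind typically score a detection as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold such as 0.5. A minimal sketch of that overlap computation (the boxes are hypothetical):

    ```python
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    gt = (10, 10, 50, 60)          # ground-truth face box
    det = (15, 12, 55, 58)         # detector output
    print(f"IoU = {iou(gt, det):.2f}, match: {iou(gt, det) >= 0.5}")
    ```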

  9. Discussion of Computer Hardware Assembly and Maintenance Techniques

    Institute of Scientific and Technical Information of China (English)

    马钊

    2015-01-01

    With the continued development of society, the economy, science, and technology, the computer has become an indispensable tool for work and study. Users therefore need to master some basic computer hardware assembly and maintenance techniques.

  10. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  11. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  12. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors be much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge of how the numerical errors depend on discretization parameters (e.g. the size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to ICD/CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors, and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received and have been considered in writing this final comparison report
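
    The mesh-dependence of numerical error that motivates such benchmarks is usually quantified with a grid-convergence (Richardson) study: solutions on three systematically refined meshes yield an observed order of accuracy and an extrapolated, mesh-independent estimate. A generic sketch with invented values:

    ```python
    # Observed order of accuracy and Richardson extrapolation from solutions
    # on three meshes refined by a constant ratio r (values are invented).
    import math

    f1, f2, f3 = 0.9713, 0.9618, 0.9405   # fine, medium, coarse solutions
    r = 2.0                               # mesh refinement ratio

    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)             # extrapolated value

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated solution = {f_exact:.4f}")
    ```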

  13. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  14. CAPRG: sequence assembling pipeline for next generation sequencing of non-model organisms.

    Directory of Open Access Journals (Sweden)

    Arun Rawat

    Full Text Available Our goal is to introduce and describe the utility of a new pipeline, "Contigs Assembly Pipeline using Reference Genome" (CAPRG), which has been developed to assemble "long sequence reads" for non-model organisms by leveraging a reference genome of a closely related phylogenetic relative. To facilitate this effort, we utilized two avian transcriptomic datasets generated using ROCHE/454 technology as test cases for CAPRG assembly. We compared the results of CAPRG assembly using a reference genome with the results of existing methods that utilize de novo strategies such as VELVET, PAVE, and MIRA by employing parameter space comparisons (intra-assembly comparison). CAPRG performed as well as or better than the existing assembly methods based on various benchmarks for "gene-hunting." Further, CAPRG completed the assemblies in a fraction of the time required by the existing assembly algorithms. Additional advantages of CAPRG included reduced contig inflation, which lowers the computational resources needed for annotation, and functional identification for contigs that may be categorized as "unknowns" by de novo methods. In addition to providing an evaluation of CAPRG performance, we observed that the different assembly (inter-assembly) results could be integrated to enhance the putative gene coverage for any transcriptomics study.
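
    The core idea of reference-guided assembly, placing reads by alignment to a related genome and merging overlapping placements into contigs, can be caricatured in a few lines. The sketch below uses exact-match placement only; real pipelines such as CAPRG rely on tolerant alignment and must handle mismatches, repeats, and divergence from the reference.

    ```python
    # Caricature of reference-guided assembly: place reads on a reference by
    # exact match, then merge overlapping placements into contigs. Real tools
    # (CAPRG included) use tolerant alignment, not exact matching.
    def assemble(reference, reads):
        placed = sorted((reference.find(r), r) for r in reads if r in reference)
        contigs = []
        for pos, read in placed:
            if contigs and pos < contigs[-1][0] + len(contigs[-1][1]):
                start, seq = contigs[-1]
                overlap = start + len(seq) - pos
                contigs[-1] = (start, seq + read[overlap:])
            else:
                contigs.append((pos, read))
        return [seq for _, seq in contigs]

    ref = "ACGTACGTTAGCCGATTACA"
    reads = ["ACGTACGT", "CGTTAGCC", "GATTACA"]
    print(assemble(ref, reads))   # ['ACGTACGTTAGCC', 'GATTACA']
    ```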

  15. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  16. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state-estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine...... of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The results demonstrate the effectiveness of the method for fault detection of the benchmark wind turbine....
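
    The set-membership test reduces to a consistency check: propagate a set guaranteed to contain the true state under bounded noise, and flag a fault when the measurement set no longer intersects it. A scalar toy sketch (the system, bounds, and data are invented, not the benchmark turbine model):

    ```python
    # Scalar set-membership fault detection: propagate a guaranteed state
    # interval under bounded noise; a fault is flagged when the measurement
    # is inconsistent with it. System and bounds are invented toy values.
    a, w_bar, v_bar = 0.9, 0.05, 0.10     # dynamics x+ = a*x + w, |w| <= w_bar
    lo, hi = -1.0, 1.0                    # initial state interval

    measurements = [0.30, 0.25, 0.21, 0.95]   # last value emulates a fault
    for k, y in enumerate(measurements):
        # predict: image of the interval under the dynamics plus disturbance
        lo, hi = a * lo - w_bar, a * hi + w_bar
        # measurement set: y = x + v, |v| <= v_bar -> x in [y-v_bar, y+v_bar]
        m_lo, m_hi = y - v_bar, y + v_bar
        if m_hi < lo or m_lo > hi:
            print(f"k={k}: fault detected (y={y})")
            break
        lo, hi = max(lo, m_lo), min(hi, m_hi)   # update: intersect the sets
        print(f"k={k}: consistent, state in [{lo:.2f}, {hi:.2f}]")
    ```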

  17. Exploration and Research on Computer Assembly Practice Curriculum Reform

    Institute of Scientific and Technical Information of China (English)

    聂幸

    2012-01-01

    The computer assembly practice course is a compulsory course for computer science students. Through it, students gain a clear understanding of the internal structure of a computer and master the complete sequence of steps in assembling one. Starting from the actual teaching of computer assembly practice, this paper weighs the respective advantages and disadvantages of hands-on practical teaching and virtual-environment teaching and proposes introducing a virtual environment into practical teaching where appropriate. This preserves the character of practical teaching while exploiting the advantages of the virtual environment, bringing the two into a relatively balanced state.

  18. Diffusion benchmark calculations of a VVER-440 core with 180 deg symmetry

    International Nuclear Information System (INIS)

    A diffusion benchmark of the VVER-440 core with 180 deg symmetry and fixed cross sections is proposed. The new benchmark is the modification of Seidel's 3-dimensional 30 degree benchmark, which plays an important role in the verification and validation of nodal neutronic codes. In the new benchmark the 180 deg symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from diverse solutions of the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (author)

  19. Performance and Scalability of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  20. Benchmarking Open-Source Tree Learners in R/RWeka

    OpenAIRE

    Schauerhuber, Michael; Zeileis, Achim; Meyer, David; Hornik, Kurt

    2007-01-01

    The two most popular classification tree algorithms in machine learning and statistics - C4.5 and CART - are compared in a benchmark experiment together with two other more recent constant-fit tree learners from the statistics literature (QUEST, conditional inference trees). The study assesses both misclassification error and model complexity on bootstrap replications of 18 different benchmark datasets. It is carried out in the R system for statistical computing, made possible by means of the...

  1. Monte Carlo and deterministic simulations of activation ratio experiments for 238U(n,f), 238U(n,g) and 238U(n,2n) in the Big Ten benchmark critical assembly

    Energy Technology Data Exchange (ETDEWEB)

    Descalle, M; Clouse, C; Pruet, J

    2009-07-28

    The authors have compared calculations of critical assembly activation ratios using three different Monte Carlo codes and one deterministic code. There is excellent agreement: discrepancies between the different Monte Carlo codes are at the 1-2% level. Notably, the deterministic calculations with 87 groups are also in good agreement with the continuous-energy Monte Carlo results. The three codes underestimate the 238U(n,f) reaction, suggesting that there is room for improvement in the evaluation, or in the evaluations of other reactions influencing the spectrum in BigTen. Until statistical uncertainties are implemented in Mercury, the authors strongly advise long runs to guarantee sufficient convergence of the flux at high energies, and they strongly encourage comparing Mercury results to a well-developed and documented code such as MCNP5 and/or COG. It may be that ENDL2008 will be available for use in COG within a year. Finally, it may be worthwhile to add a 'standard' reaction rate tally similar to those implemented in COG and MCNP5, if the goal is to expand the central fission and activation ratio simulations to include isotopes that are not part of the specifications for the assembly material composition.
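    The closing advice about statistics can be made concrete with standard first-order error propagation for a tally ratio: for R = t1/t2 with independent relative errors r1 and r2, the relative error of R is sqrt(r1^2 + r2^2). The tally values below are invented for illustration, not BigTen results.

        # Error propagation for an activation ratio R = t1/t2
        # (illustrative numbers, not BigTen results).
        import math

        def ratio_with_error(t1, r1, t2, r2):
            """t1, t2: tally means; r1, r2: relative statistical errors."""
            ratio = t1 / t2
            rel_err = math.sqrt(r1**2 + r2**2)  # independent errors add in quadrature
            return ratio, rel_err

        ratio, rel = ratio_with_error(3.2e-4, 0.010, 1.6e-4, 0.015)
        print(f"ratio = {ratio:.3f} +/- {100 * rel:.2f}%")  # ratio = 2.000 +/- 1.80%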

  2. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  3. VENUS-2 Benchmark Problem Analysis with HELIOS-1.9

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2014-10-15

    Since reliable benchmark results are available in the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can assess the credibility of a code by comparing against them. In this paper, the solution of the VENUS-2 benchmark problem from HELIOS 1.9 using the ENDF/B-VI library (NJOY91.13) is compared with the result from HELIOS 1.7, with the MCNP-4B result taken as reference data. The comparison covers pin cell, assembly, and core calculations, and the resulting eigenvalues are assessed against those from other codes. In the case of the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and core calculations. Although HELIOS 1.9 utilizes an inflow transport correction, it appears to have only a limited effect on the error in the baffle-reflector condition.

  4. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of data sets being a function of the largest HPL matrix for the tested system.
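    As a flavor of what one such kernel measures, a STREAM-style triad estimates sustainable memory bandwidth from the time taken to compute a = b + s*c over arrays too large for cache. The snippet below is a rough numpy analogue for illustration only, not the official STREAM or HPC Challenge code.

        # Rough numpy analogue of the STREAM triad kernel (a = b + s*c),
        # illustrating the bandwidth-style measurement HPC Challenge formalizes.
        import time
        import numpy as np

        n = 20_000_000                 # large enough to defeat caches (~160 MB/array)
        b = np.random.rand(n)
        c = np.random.rand(n)
        s = 3.0

        t0 = time.perf_counter()
        a = b + s * c                  # triad: two reads and one write per element
        elapsed = time.perf_counter() - t0

        bytes_moved = 3 * n * 8        # three 8-byte doubles per element
        print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.2f} GB/s")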

  5. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  6. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  7. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data; one indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  8. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  9. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  10. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  11. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  12. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed using object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results of a Monte Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
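    Since the report states that CUDM is object-oriented and written in Python, a minimal sketch of such a common data model might look like the class below. All field and method names here are invented for illustration; they are not CUDM's actual API.

        # Hypothetical sketch of a common user data model for code-to-code
        # benchmarking (names invented; not the actual CUDM API).
        from dataclasses import dataclass, field

        @dataclass
        class CoreModel:
            """Shared 3-D XYZ core description exchanged between solvers."""
            mesh_xyz: tuple                                      # (nx, ny, nz)
            cross_sections: dict = field(default_factory=dict)   # region -> macro XS
            keff: float = 0.0                                    # multiplication factor
            flux: list = field(default_factory=list)             # per-cell neutron flux

            def to_solver_input(self, solver):
                """Convert the common model to one solver's native input."""
                if solver in ("IDT", "TRITAC", "SNT"):
                    return {"solver": solver, "mesh": self.mesh_xyz,
                            "xs": self.cross_sections}
                raise ValueError(f"unsupported solver: {solver}")

        core = CoreModel(mesh_xyz=(17, 17, 10))
        print(core.to_solver_input("IDT")["solver"])  # IDT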

  13. Parallel processing of neutron transport in fuel assembly calculation

    International Nuclear Information System (INIS)

    Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly, or a quarter of one, corresponds to a unit mesh in current nodal calculations. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm, to be run on a parallel processor, for the fuel assembly calculation and the depletion calculations of group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherently parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple because of the many coupling parameters in the calculation and the data communications among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communications. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and depletion calculation, respectively, leading to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for group constant generation can be easily reduced by parallel processing on a parallel computer with a small number of CPUs.
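    The reported efficiencies translate into speedups through the standard relation S = E x N for N processors; the quick check below reproduces the figures quoted above for the 4-CPU Quadputer.

        # Parallel speedup from reported efficiency: S = E * N.
        # Numbers taken from the abstract (4 CPUs on the Quadputer).
        n_cpu = 4
        for task, eff in [("fuel assembly calc", 0.543), ("depletion calc", 0.780)]:
            print(f"{task}: speedup = {eff * n_cpu:.2f}")
        # Prints 2.17 and 3.12, consistent with the overall speedup of
        # about 2.5 quoted for the combined group constant generation.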

  14. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  15. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    International Nuclear Information System (INIS)

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications

  16. Benchmark analysis of MCNP ENDF/B-VI iron

    Energy Technology Data Exchange (ETDEWEB)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  17. Benchmarking biofuels

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  18. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the...

  19. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  20. Comparison of Computational Estimations of Reactivity Margin From Fission Products and Minor Actinides in PWR Burnup Credit

    International Nuclear Information System (INIS)

    This paper has presented the results of a computational benchmark, and of independent calculations to verify the benchmark calculations, for the estimation of the additional reactivity margin available from fission products and minor actinides in a PWR burnup credit storage/transport environment. The calculations were based on a generic 32 PWR-assembly cask. The differences between the independent calculations and the benchmark lie within 1% for the uniform axial burnup distribution, which is acceptable. The Δk values for KENO - MCNP are generally lower than the other Δk values, because HELIOS performed the depletion part of the calculation for both the KENO and MCNP results. The differences between the independent calculations and the benchmark for the non-uniform axial burnup distribution were within 1.1%.

  1. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  2. Computation of concentration changes of heavy metals in the fuel assemblies with 1.6% enrichment by ORIGEN code for VVER-1000

    International Nuclear Information System (INIS)

    ORIGEN is a widely used computer code for calculating the buildup, decay, and processing of radioactive materials. During the past few years, a sustained effort was undertaken by ORNL to update the original ORIGEN code [4] and its associated data bases; this effort updated the reactor models, cross sections, fission product yields, decay data, decay photon data, and the ORIGEN computer code itself. In this paper we obtain the concentration changes of uranium and plutonium isotopes with the ORIGEN code at different burnups, and compare the results with VVER-1000 results for the first fuel cycle for fuel assemblies with 1.6% enrichment in the BUSHEHR Nuclear Power Plant. (author)
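    For constant flux, the buildup-and-decay problem that ORIGEN solves reduces to a linear system dN/dt = A*N, which can be advanced over a burn step with a matrix exponential. The two-nuclide chain and one-group data below are invented toy values for illustration; this is not ORIGEN's solver or library.

        # Toy depletion step dN/dt = A N solved with a matrix exponential,
        # illustrating the kind of buildup/decay system ORIGEN handles
        # (invented two-nuclide chain and data; not ORIGEN itself).
        import numpy as np
        from scipy.linalg import expm

        phi = 3e13            # neutron flux [n/cm^2/s] (hypothetical)
        sigma_c = 2.7e-24     # capture cross section of nuclide 1 [cm^2]
        lam = 4.9e-7          # decay constant of nuclide 2 [1/s]

        # Nuclide 1 is destroyed by capture; nuclide 2 is produced by that
        # capture and removed by its own decay.
        A = np.array([[-sigma_c * phi, 0.0],
                      [ sigma_c * phi, -lam]])

        N0 = np.array([1.0e24, 0.0])   # initial atom densities [1/cm^3]
        dt = 30 * 24 * 3600.0          # one-month burn step [s]
        N = expm(A * dt) @ N0
        print(f"N1 = {N[0]:.3e}, N2 = {N[1]:.3e}")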

  3. Comparative analysis of CTF and trace thermal-hydraulic codes using OECD/NRC PSBT benchmark void distribution database

    International Nuclear Information System (INIS)

    The international OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark has been established to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes, and to encourage advancement in the analysis of fluid flow in rod bundles. The aim is to improve the reliability of nuclear reactor safety margin evaluations. The benchmark is based on one of the most valuable databases identified for thermal-hydraulics modeling, which was developed by the Nuclear Power Engineering Corporation (NUPEC) in Japan. The database includes subchannel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurized Water Reactor (PWR) fuel assembly. Part of this database is made available for the international PSBT benchmark activity. The PSBT benchmark team is organized as a collaboration between the Pennsylvania State University (PSU) and the Japan Nuclear Energy Safety Organization (JNES), with the participation and support of the U.S. Nuclear Regulatory Commission (NRC) and the Nuclear Energy Agency (NEA), OECD. On behalf of the PSBT benchmark team, PSU in collaboration with the US NRC is performing supporting calculations of the benchmark exercises using its in-house advanced thermal-hydraulic subchannel code CTF and the US NRC system code TRACE. CTF is a version of the well-known and widely used code COBRA-TF whose models have been continuously improved and validated over recent years at the Reactor Dynamics and Fuel Management Group (RDFMG) at PSU. TRACE is a reactor systems code developed by the U.S. Nuclear Regulatory Commission to analyze transient and steady-state thermal-hydraulic behavior in Light Water Reactors (LWRs); it has been designed to perform best-estimate analyses of loss-of-coolant accidents (LOCAs), operational transients, and other accident scenarios in PWRs and boiling water reactors (BWRs). The paper presents...

  4. A CFD simulation process for fast reactor fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Hamman, Kurt D., E-mail: Kurt.Hamman@inl.go [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Berry, Ray A. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States)

    2010-09-15

    A CFD modeling and simulation process for large-scale problems using an arbitrary fast reactor fuel assembly design was evaluated. Three-dimensional flow distributions of sodium for several fast reactor fuel assembly pin spacing configurations were simulated on high performance computers using commercial CFD software. This research focused on a 19-pin fuel assembly 'benchmark' geometry, similar in design to the Advanced Burner Test Reactor, where each pin is separated by helical wire-wrap spacers. Several two-equation turbulence models, including the k-ε and SST (Menter) k-ω, were evaluated. Considerable effort was taken to resolve the momentum boundary layer, so as to eliminate the need for wall functions and reduce computational uncertainty. High performance computers were required to generate the hybrid meshes needed to predict secondary flows created by the wire-wrap spacers; computational meshes ranging from 65 to 85 million elements were common. A general validation methodology was followed, including mesh refinement and comparison of numerical results with empirical correlations. Predictions for velocity, temperature, and pressure distribution are shown. The uncertainty of numerical models, the importance of high fidelity experimental data, and the challenges associated with simulating and validating large production-type problems are presented.

  5. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model was carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed a complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal...

  6. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
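    The coarse-grain pattern these benchmarks capture, independent zone updates followed by a boundary exchange once per time step, can be sketched as below. The per-zone "solver" and the zone layout are trivial stand-ins invented for illustration, not the actual LU/BT/SP kernels.

        # Sketch of the multi-zone pattern: each zone advances independently,
        # then neighboring zones exchange boundary values once per time step.
        # The per-zone update is a trivial stand-in for the real LU/BT/SP solvers.
        import numpy as np

        def advance_zone(zone):
            """Independent per-zone smoothing step (placeholder solver)."""
            return 0.25 * (np.roll(zone, 1) + np.roll(zone, -1) + 2 * zone)

        zones = [np.random.rand(64) for _ in range(4)]   # loosely coupled meshes
        for step in range(100):
            zones = [advance_zone(z) for z in zones]     # exploitable in parallel
            for i in range(len(zones) - 1):              # boundary value exchange
                zones[i][-1], zones[i + 1][0] = zones[i + 1][0], zones[i][-1]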

  7. An Interactive Assembly Process Planner

    Institute of Scientific and Technical Information of China (English)

    廖华飞; 张林鍹; 肖田元; 曾理; 古月

    2004-01-01

    This paper describes the implementation and performance of the virtual assembly support system (VASS), a new system that provides designers and assembly process engineers with a simulation and visualization environment in which they can evaluate the assemblability/disassemblability of products, and thereby use a computer to intuitively create assembly plans and interactively generate assembly process charts. Subassembly planning and assembly priority reasoning techniques were utilized to find heuristic information to improve the efficiency of assembly process planning. Tool planning was implemented to consider tool requirements in the product design stage. New methods were developed to reduce the amount of computation involved in interference checking. As an important feature of the VASS, human interaction was integrated into the whole process of assembly process planning, extending the power of computer reasoning with human expertise and resulting in better assembly plans and better designs.

  8. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  9. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly 'sustainable transport' evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark 'sustainable transport policies' against one another would be a highly complex task, which...

  10. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine making it possible to calculate neutron sources, and the exchange of the fission yields and branching ratios in the data library for those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared to those obtained with the more sophisticated code CASMO. (author)

  11. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  12. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD, based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  13. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  14. Self-organization of Dynamic Distributed Computational Systems Applying Principles of Integrative Activity of Brain Neuronal Assemblies

    OpenAIRE

    Eugene Burmakin; Fingelkurts, Alexander A.; Fingelkurts, Andrew A

    2009-01-01

    This paper presents a method for the self-organization of distributed systems operating in a dynamic context. We propose a simple, biologically (cognitive neuroscience) inspired method for system configuration that allows most of the computational load to be moved off-line, in order to improve the scalability of the system. The proposed method has a lower computational burden at runtime than traditional system adaptation approaches.

  15. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    Directory of Open Access Journals (Sweden)

    Mariusz Butkiewicz

    2013-01-01

    Full Text Available With the rapidly increasing availability of High-Throughput Screening (HTS data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR models are built using Artificial Neural Networks (ANNs, Support Vector Machines (SVMs, Decision Trees (DTs, and Kohonen networks (KNs. Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
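    The enrichment figures quoted here can be computed from a ranked prediction list: at the score cutoff that recovers 25% of the true actives (TPR = 0.25), enrichment is the fraction of actives among the selected compounds divided by the fraction of actives in the whole library. The sketch below uses invented scores and labels, not the PubChem data sets or BCL::ChemInfo.

        # Enrichment at a fixed true-positive-rate cutoff, as used in
        # ligand-based virtual screening benchmarks (invented data).
        import numpy as np

        def enrichment_at_tpr(scores, labels, tpr=0.25):
            order = np.argsort(scores)[::-1]              # rank by descending score
            ranked = np.asarray(labels)[order]
            n_active = int(ranked.sum())
            target = int(np.ceil(tpr * n_active))         # actives to recover
            hits = np.cumsum(ranked)
            n_sel = int(np.searchsorted(hits, target)) + 1  # compounds screened
            return (target / n_sel) / (n_active / len(ranked))

        rng = np.random.default_rng(0)
        labels = rng.random(10_000) < 0.01                # ~1% actives
        scores = rng.random(10_000) + 0.5 * labels        # actives tend to score higher
        print(f"enrichment at TPR 25%: {enrichment_at_tpr(scores, labels):.1f}")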

  16. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III), which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the work done in preparation for the test. (author)

  17. A Meta-Theory of Boundary Detection Benchmarks

    OpenAIRE

    Hou, Xiaodi; Yuille, Alan; Koch, Christof

    2012-01-01

    Human labeled datasets, along with their corresponding evaluation algorithms, play an important role in boundary detection. We here present a psychophysical experiment that addresses the reliability of such benchmarks. To find better remedies for evaluating the performance of any boundary detection algorithm, we propose a computational framework to remove inappropriate human labels and estimate the intrinsic properties of boundaries.

  18. Smart Meter Data Analytics: Systems, Algorithms and Benchmarking

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Golab, Lukasz; Golab, Wojciech;

    2016-01-01

    The proposed benchmark is evaluated using five representative platforms: a traditional numeric computing platform (Matlab), a relational DBMS with a built-in machine learning toolkit (PostgreSQL/MADlib), a main-memory column store ("System C"), and two distributed data processing platforms (Hive and Spark/Spark Streaming...

  19. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  20. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  1. Diffusion benchmark calculations of a WWER-440 core with 180 deg symmetry

    International Nuclear Information System (INIS)

    A diffusion benchmark of the VVER-440 core with 180 degree symmetry and fixed cross sections is proposed. The new benchmark is a modification of Seidel's 3-dimensional 30 degree benchmark, which plays an important role in the verification and validation of nodal neutronic codes. In the new benchmark the 180 degree symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from diverse solutions of the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (Authors)

  2. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  3. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the 'inside' costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that easily can dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention...

  4. Structural optimization using human-computer interaction for an aerospace assembly

    Institute of Scientific and Technical Information of China (English)

    刘磊; 刘洪英; 马爱军; 胡清华; 冯雪梅; 石蒙; 董睿; 赵亚雄

    2016-01-01

    To solve the problem of structural optimization of a complicated structure under dynamic response constraints, a human-computer interaction method is proposed that combines the respective strengths of human judgment and the computer in structural optimization; it is applied to the structural optimization of a complex assembly for manned spaceflight. After optimization, the assembly shows remarkable performance improvement: the first integral vibration frequency increases by 41.1% and the maximum frequency-response acceleration at the points of interest drops by 24.3% under the sinusoidal vibration test load conditions, while the mass remains essentially unchanged. The result satisfies the requirements of the optimal design and demonstrates the effectiveness and feasibility of the method.

  5. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
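    As an example of the kind of algorithm such a platform evaluates, the classic average hash reduces an image to a short bit string and compares images by the Hamming distance between strings. The numpy sketch below works on a grayscale array; it is just one simple perceptual hash design, unrelated to PHABS's internals.

        # Minimal average-hash (aHash) sketch: one simple perceptual hashing
        # design that a platform like PHABS could benchmark (illustrative only).
        import numpy as np

        def average_hash(gray, size=8):
            """Downscale a grayscale image by block averaging, then threshold
            each cell against the global mean to get a size*size-bit signature."""
            h, w = gray.shape
            blocks = gray[: h - h % size, : w - w % size]
            blocks = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
            return (blocks > blocks.mean()).flatten()

        def hamming(h1, h2):
            return int(np.count_nonzero(h1 != h2))

        img = np.random.rand(128, 128)
        noisy = img + 0.02 * np.random.rand(128, 128)  # mild distortion
        print(hamming(average_hash(img), average_hash(noisy)))  # small distance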

  6. PapaBench: a Free Real-Time Benchmark

    OpenAIRE

    Nemer, Fadia; Cassé, Hugues; Sainrat, Pascal; Bahsoun, Jean-Paul; De Michiel, Marianne; Potpourri

    2006-01-01

    This paper presents PapaBench, a free real-time benchmark, and compares it with existing benchmark suites. It is designed to be valuable for experimental work in WCET computation and may also be useful for scheduling analysis. The benchmark is based on the Paparazzi project, a real-time application developed to be embedded on different Unmanned Aerial Vehicles (UAVs). In this paper, we explain the transformation process applied to Paparazzi to obtain PapaBench. We provide ...

  7. Initiation of assembly of tau(273-284) and its ΔK280 mutant: an experimental and computational study.

    Science.gov (United States)

    Larini, Luca; Gessel, Megan Murray; LaPointe, Nichole E; Do, Thanh D; Bowers, Michael T; Feinstein, Stuart C; Shea, Joan-Emma

    2013-06-21

    The microtubule associated protein tau is essential for the development and maintenance of the nervous system. Tau dysfunction is associated with a class of diseases called tauopathies, in which tau is found in an aggregated form. This paper focuses on a small aggregating fragment of tau, (273)GKVQIINKKLDL(284), encompassing the PHF6* region that plays a central role in tau aggregation. Using a combination of simulations and experiments, we probe the self-assembly of this peptide, with an emphasis on characterizing the early steps of aggregation. Ion-mobility mass spectrometry experiments provide a size distribution of early oligomers, TEM studies provide a time course of aggregation, and enhanced sampling molecular dynamics simulations provide atomistically detailed structural information about this intrinsically disordered peptide. Our studies indicate that a point mutation, as well as the addition of heparin, leads to a shift in the conformations populated by the earliest oligomers, affecting the kinetics of subsequent fibril formation as well as the morphology of the resulting aggregates. In particular, a mutant associated with a K280 deletion (a mutation that causes a heritable form of neurodegeneration/dementia in the context of full length tau) is seen to aggregate more readily than its wild-type counterpart. Simulations and experiment reveal that the ΔK280 mutant peptide adopts extended conformations to a greater extent than the wild-type peptide, facilitating aggregation through the pre-structuring of the peptide into a fibril-competent structure.

  9. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained using the codes of the participating countries. The benchmark is based on assembly heating experiments at VHTRC, a pin-in-block type critical assembly. The requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k{sub eff}, from all institutes agreed with each other and with the experimental values within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others; the calculation conditions will need to be checked again to obtain better agreement. (J.P.N.).
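
    For reference, the temperature coefficient requested in such benchmarks follows directly from the multiplication factors computed at two core temperatures. A minimal sketch of the arithmetic (the numbers are illustrative, not VHTRC results):

        def reactivity(k_eff: float) -> float:
            # rho = (k - 1) / k, in dk/k units
            return (k_eff - 1.0) / k_eff

        def temperature_coefficient(k1: float, t1: float, k2: float, t2: float) -> float:
            """Isothermal temperature coefficient of reactivity, per deg C."""
            return (reactivity(k2) - reactivity(k1)) / (t2 - t1)

        # Illustrative values only:
        print(temperature_coefficient(1.0105, 25.0, 1.0023, 200.0))  # approx -4.6e-5 per deg C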

  10. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  11. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross-section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and quality of the EGS5 code system. This study reports three benchmark examples that demonstrate the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam incident on the front face of a water tank containing air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  12. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  13. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study compares these websites against a list of criteria and presents the services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  14. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theoretical part include the analysis of other

  15. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    Title 42 Public Health, Vol. 4 (2010-10-01). GENERAL PROVISIONS: Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  16. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process count. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for the computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
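
    The byte/flop argument can be made concrete with a one-line roofline-style test: a kernel stays compute bound as long as it demands less memory traffic per flop than the machine can supply. A sketch using the ratios quoted in the abstract (the stencil entry is an illustrative assumption):

        def bound(machine_byte_per_flop: float, kernel_byte_per_flop: float) -> str:
            # Compute bound when the kernel needs fewer bytes per flop
            # than the hardware can deliver; memory bound otherwise.
            return ("compute-bound" if kernel_byte_per_flop <= machine_byte_per_flop
                    else "memory-bound")

        MACHINE = 0.2  # Xeon Phi / Kepler / BlueGene/Q class, per the abstract
        for kernel, bf in [("FMM", 0.01), ("7-point stencil", 4.0)]:
            print(kernel, "->", bound(MACHINE, bf))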

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  18. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  20. Protein-DNA binding specificity: a grid-enabled computational approach applied to single and multiple protein assemblies.

    Science.gov (United States)

    Zakrzewska, Krystyna; Bouvier, Benjamin; Michon, Alexis; Blanchet, Christophe; Lavery, Richard

    2009-12-01

    We use a physics-based approach termed ADAPT to analyse the sequence-specific interactions of three proteins which bind to DNA on the side of the minor groove. The analysis is able to estimate the binding energy for all potential sequences, overcoming the combinatorial problem via a divide-and-conquer approach which breaks the protein-DNA interface down into a series of overlapping oligomeric fragments. All possible base sequences are studied for each fragment. Energy minimisation with an all-atom representation and a conventional force field allows for conformational adaptation of the DNA and of the protein side chains for each new sequence. As a result, the analysis scales linearly with the length of the binding site, and complexes as large as the nucleosome can be treated, although this requires access to grid computing facilities. The results on the three complexes studied are in good agreement with experiment. Although they all involve significant DNA deformation, it is found that this does not necessarily imply that the recognition will be dominated by the sequence-dependent mechanical properties of DNA.
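
    The combinatorial point can be illustrated with a toy dynamic program: if fragment scores are additive, chaining overlapping k-mers replaces the 4^L enumeration of full sites with roughly L·4^k fragment evaluations. This is only a combinatorial sketch under an additivity assumption; ADAPT's actual fragment energies come from all-atom minimisation, and the `frag_score` function here is a stand-in.

        import itertools

        BASES = "ACGT"

        def best_sequence(L, k, frag_score):
            """Highest-scoring length-L sequence when the total score is the sum of
            frag_score(pos, kmer) over overlapping k-mers; states are (k-1)-suffixes."""
            dp = {}
            for kmer in ("".join(p) for p in itertools.product(BASES, repeat=k)):
                cand = (frag_score(0, kmer), kmer)
                if cand > dp.get(kmer[1:], (float("-inf"), "")):
                    dp[kmer[1:]] = cand
            for pos in range(1, L - k + 1):
                nxt = {}
                for suffix, (score, seq) in dp.items():
                    for b in BASES:
                        kmer = suffix + b
                        cand = (score + frag_score(pos, kmer), seq + b)
                        if cand > nxt.get(kmer[1:], (float("-inf"), "")):
                            nxt[kmer[1:]] = cand
                dp = nxt
            return max(dp.values())

        # Toy additive score: reward GC-rich fragments (illustrative only).
        score, seq = best_sequence(12, 4, lambda pos, kmer: kmer.count("G") + kmer.count("C"))
        print(score, seq)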

  1. COMPUTER SIMULATION OF MISCIBILITY AND SELF-ASSEMBLY STRUCTURE FOR POLYMER-CONTAINING SYSTEMS WITH SPECIAL INTERACTIONS

    Institute of Scientific and Technical Information of China (English)

    Tong-fei Shi; Ying Zhang; Wei Jiang; Li-jia An; Bin-yao Li

    2003-01-01

    The miscibility and structure of A-B copolymer/C homopolymer blends with special interactions were studied by Monte Carlo simulation in two dimensions. The interaction between segment A and segment C was repulsive, whereas that between segment B and segment C was attractive. In order to study the effect of copolymer chain structure on the morphology and structure of the blends, alternating, random and block A-B copolymers were introduced into the blends, respectively. The simulation results indicated that the miscibility of A-B block copolymer/C homopolymer blends depends on the chain structure of the A-B copolymer. Compared with alternating or random copolymers, the block copolymer, especially the diblock copolymer, leads to poorer miscibility of the blends. Moreover, for diblock A-B copolymer/C homopolymer blends, an obvious self-organized core-shell structure was observed in the segment-B composition region from 20% to 60%; if the diblock copolymer composition in the blends is less than 40%, the core-shell structure could be formed in the B-segment composition region from 10% to 90%. Furthermore, statistical analysis of the simulation results showed that the core sizes tended to increase continuously, and their distribution became wider, with decreasing B-segment content.
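
    As a minimal illustration of the simulation technique (not the authors' actual model, which moves connected chains rather than single segments), a Metropolis exchange step on a 2D lattice with a repulsive A-C and attractive B-C contact energy might look like the following; all energy values are assumptions.

        import math, random

        N = 32  # lattice size, periodic boundaries
        # Contact energies in kT units: A-C repulsive, B-C attractive (illustrative).
        EPS = {frozenset("AC"): +1.0, frozenset("BC"): -1.0}

        lattice = [[random.choice("ABC") for _ in range(N)] for _ in range(N)]

        def site_energy(x, y):
            """Sum contact energies between site (x, y) and its four neighbours."""
            e = 0.0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                pair = frozenset((lattice[x][y], lattice[(x + dx) % N][(y + dy) % N]))
                e += EPS.get(pair, 0.0)
            return e

        def metropolis_swap():
            """Kawasaki exchange of two neighbouring segments with Metropolis acceptance."""
            x, y = random.randrange(N), random.randrange(N)
            dx, dy = random.choice(((1, 0), (0, 1)))
            x2, y2 = (x + dx) % N, (y + dy) % N
            e_old = site_energy(x, y) + site_energy(x2, y2)
            lattice[x][y], lattice[x2][y2] = lattice[x2][y2], lattice[x][y]
            d_e = site_energy(x, y) + site_energy(x2, y2) - e_old
            if d_e > 0 and random.random() >= math.exp(-d_e):
                # Reject: swap back.
                lattice[x][y], lattice[x2][y2] = lattice[x2][y2], lattice[x][y]

        for _ in range(200_000):
            metropolis_swap()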

  2. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and are consequently drawn into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be applied successfully. The paper focuses on a key phase of the benchmarking process, the search for suitable benchmarking partners. The partners are selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria within an international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  3. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices, for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar in ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity discussing the impact and for addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  5. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is enabling the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to the nuclear data of a few isotopes, which can highlight limitations in the nuclear data for selected nuclides or in the standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data-testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data-testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases, or correction factors, can then be applied when using the less refined methods and models. Data-testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying integral parameters other than keff when trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data-testing results to infer the quality of the nuclear data files
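
    The use of such correction factors is simple in form: a bias determined by running both the detailed and the simplified model with the same code and data is divided out of subsequent simplified-model results. A sketch with made-up numbers:

        def bias_factor(k_simplified_model: float, k_detailed_model: float) -> float:
            """Methods/modeling bias: same code and data, two levels of model fidelity."""
            return k_simplified_model / k_detailed_model

        def corrected(k_simplified_result: float, bias: float) -> float:
            # Remove the modeling bias from a result of the less refined model.
            return k_simplified_result / bias

        b = bias_factor(1.0012, 1.0001)   # illustrative values
        print(corrected(0.9984, b))       # simplified-model keff with the bias removed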

  8. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented major challenges to the utility industry; however, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that three factors are critical to the success of utility GHP marketing programs: (1) top management commitment to marketing; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. A lack of experience, coupled with an intrinsically non-competitive culture, therefore yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  9. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload, and explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
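
    A hybrid step/binary search of the kind described can be sketched as follows. Here the trial criterion is zero packet loss; the paper's equilibrium-throughput criterion would replace the zero-loss test, and `measure_loss` is a stand-in for one timed trial against the gateway.

        def find_throughput(measure_loss, start_rate, step, resolution):
            """Coarse upward sweep, then binary refinement, in packets per second."""
            rate = start_rate
            while measure_loss(rate) == 0.0:      # step phase: walk up until loss appears
                rate += step
            lo, hi = rate - step, rate            # zero-loss and lossy brackets
            while hi - lo > resolution:           # binary phase: shrink the bracket
                mid = (lo + hi) / 2.0
                if measure_loss(mid) == 0.0:
                    lo = mid
                else:
                    hi = mid
            return lo

        # Toy device that starts dropping packets above 84,000 pps (illustrative):
        print(find_throughput(lambda r: 0.0 if r <= 84_000 else 0.05,
                              start_rate=10_000, step=10_000, resolution=100))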

  10. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
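
    In practice such a metric is often an energy use intensity computed from each store's own utility bills, with outliers flagged for extra attention. A sketch of that idea (the store names, numbers, and one-standard-deviation flagging rule are invented, not the report's method):

        import statistics

        def energy_use_intensity(stores):
            """Energy-use benchmark (kWh per square foot per year) from each store's
            own utility data, flagging stores more than one s.d. above the mean."""
            eui = {name: kwh / sqft for name, (kwh, sqft) in stores.items()}
            mean = statistics.mean(eui.values())
            sd = statistics.stdev(eui.values())
            flagged = {n: v for n, v in eui.items() if v > mean + sd}
            return eui, flagged

        stores = {  # illustrative annual utility data: (kWh, square feet)
            "store_01": (410_000, 2_800),
            "store_02": (520_000, 3_000),
            "store_03": (395_000, 2_750),
        }
        eui, flagged = energy_use_intensity(stores)
        print(eui, flagged)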

  11. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameter and control rod worth measurements at the TRIGA Mark II reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to obtain reliable and accurate results under well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  12. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal of this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; and help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results, combined with the component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL), may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking reports on electric and hybrid electric vehicle technology provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  13. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989, Robert C. Camp's "Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance", the improvement technique has been established as an important tool in process-focused manufacturing and production environments. The use of benchmarking has since expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  14. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  16. COMPUTING

    CERN Document Server

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. (Figure 3: Number of events per month, data.) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  17. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    Science.gov (United States)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features such as self-shielding calculation with the capability to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effects, availability of the method of characteristics (MOC), and burnup calculation with reaction-detailed energy production. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design requires a thorough analysis of all the safety parameters and of the burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained; each level of the lattice calculation has its own significance, and any compromise at any step will degrade the final result. The levels include: choice of the nuclear data library and of the energy group boundaries into which the multigroup library is cast; self-shielding of the nuclear data depending on the heterogeneous geometry and composition; tracking of the geometry, keeping errors in volumes and surfaces to an acceptable minimum; generation of region-wise and group-wise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated group-wise information to obtain the fluxes and reaction rates in the various regions of the lattice; and depletion of the fuel and of other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research focuses mainly on two aspects, namely self-shielding and depletion. The behaviour of the system is determined by composition of resonant

  18. Thermal Analysis of a TREAT Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Papadias, Dionissios [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, Arthur E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-07-09

    The objective of this study was to explore options for reducing peak cladding temperatures despite an increase in peak fuel temperatures. A 3D thermal-hydraulic model of a single TREAT fuel assembly was benchmarked to reproduce the results obtained with previous thermal models developed for a TREAT HEU fuel assembly. Exercising this model, and variants thereof depending on the scope of analysis, various options were explored to reduce the peak cladding temperatures.

  19. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer, the ubiquitous portal of work and personal lives. At this point, the computer is so common that we hardly notice it in our view. It is difficult to envision that, not that long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  20. A computational module assembled from different protease family motifs identifies PI PLC from Bacillus cereus as a putative prolyl peptidase with a serine protease scaffold.

    Science.gov (United States)

    Rendón-Ramírez, Adela; Shukla, Manish; Oda, Masataka; Chakraborty, Sandeep; Minda, Renu; Dandekar, Abhaya M; Ásgeirsson, Bjarni; Goñi, Félix M; Rao, Basuthkar J

    2013-01-01

    Proteolytic enzymes have evolved several mechanisms to cleave peptide bonds. These distinct types have been systematically categorized in the MEROPS database. While a BLAST search on these proteases identifies homologous proteins, sequence alignment methods often fail to identify relationships arising from convergent evolution, exon shuffling, and modular reuse of catalytic units. We have previously established a computational method to detect functions in proteins based on the spatial and electrostatic properties of the catalytic residues (CLASP). CLASP identified a promiscuous serine protease scaffold in alkaline phosphatases (AP) and a scaffold recognizing a β-lactam (imipenem) in a cold-active Vibrio AP. Subsequently, we defined a methodology to quantify promiscuous activities in a wide range of proteins. Here, we assemble a module which encapsulates the multifarious motifs used by the protease families listed in the MEROPS database. Since APs and proteases are an integral component of outer membrane vesicles (OMV), we sought to query other OMV proteins, like phospholipase C (PLC), using this search module. Our analysis indicated that phosphoinositide-specific PLC from Bacillus cereus is a serine protease. This was validated by protease assays, mass spectrometry and by inhibition of the native phospholipase activity of PI-PLC by the well-known serine protease inhibitor AEBSF (IC50 = 0.018 mM). Edman degradation analysis linked the specificity of the protease activity to a proline at the amino terminus, suggesting that the PI-PLC is a prolyl peptidase. Thus, we propose a computational method of extending protein families based on the spatial and electrostatic congruence of active-site residues.

  1. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle-type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.
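
    The partial-information idea can be made concrete: from nothing more than the measured mode occupations one can form two-point correlators and compare their statistical moments against the theoretical expectation for bosons, fermions or distinguishable particles. A minimal sketch of such an estimator (this is not the authors' code; the sample array is an assumed input):

        import numpy as np

        def two_point_correlators(samples):
            """C[i, j] = <n_i n_j> - <n_i><n_j>, estimated from an
            (n_runs, n_modes) array of measured mode occupations."""
            s = np.asarray(samples, dtype=float)
            mean = s.mean(axis=0)
            second = np.einsum("ri,rj->ij", s, s) / s.shape[0]
            return second - np.outer(mean, mean)

        # The benchmark would then compare moments (mean, variance, skewness) of
        # the off-diagonal C[i, j] values against the particle-type prediction.
        counts = np.random.default_rng(1).poisson(0.5, size=(1000, 8))
        C = two_point_correlators(counts)
        print(C.shape, C[np.triu_indices(8, k=1)].mean())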

  2. The Teaching Research of Computer Assembly and Maintenance Based on CDIO

    Institute of Scientific and Technical Information of China (English)

    刘洪江

    2012-01-01

    Computer assembly and maintenance is a highly practical course that places great emphasis on cultivating the hands-on abilities of students. The article begins by describing the didactic characteristics of the course; then, by combining it with CDIO, a new type of instruction model, and designing multiple practical exercises, the teaching can achieve better results and further improve the practical abilities of students.

  3. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High-quality precipitation data are vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time-series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, the science of measuring and recording rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  4. Benchmark analysis of the DeCART MOC code with the VENUS-2 critical experiment

    International Nuclear Information System (INIS)

    Computational benchmarks based on well-defined problems with a complete set of input and a unique solution are often used as a means of verifying the reliability of numerical solutions. VENUS is a widely used MOX benchmark problem for the validation of numerical methods and nuclear data sets. In this paper, the results of benchmarking the DeCART (Deterministic Core Analysis based on Ray Tracing) integral transport code are reported, using the OECD/NEA VENUS-2 MOX benchmark problem. Both 2-D and 3-D DeCART calculations were performed, and comparisons are reported with measured data as well as with the results of other benchmark participants. In general the DeCART results agree well both with the experimental data and with those of the other participants. (authors)

  5. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride, edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  8. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.
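
    Schematically, all of these codes share a pathway-sum structure: the dose is a sum over exposure pathways of a medium concentration times a pathway-specific dose conversion factor, modified by factors such as occupancy and shielding. The factors below are invented placeholders, not values from RESRAD or the other codes.

        # (mrem/yr) per (pCi/g) of soil -- hypothetical placeholder factors
        PATHWAY_FACTORS = {
            "external_gamma":  1.2e-2,
            "dust_inhalation": 3.5e-4,
            "soil_ingestion":  8.0e-4,
            "food_ingestion":  2.4e-3,
            "water_ingestion": 1.1e-3,
        }

        def annual_dose(soil_pci_per_g, occupancy=1.0, shielding=1.0):
            """Pathway-sum dose estimate; occupancy and shielding scale the total."""
            base = sum(PATHWAY_FACTORS.values()) * soil_pci_per_g
            return base * occupancy * shielding

        print(annual_dose(30.0, occupancy=0.6, shielding=0.8))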

  9. BENCHMARKING THE ACCURACY OF INERTIAL SENSORS IN CELL PHONES

    OpenAIRE

    An, Bin

    2012-01-01

    Many ubiquitous computing applications rely on data from a cell phone's inertial sensors. Unfortunately, the accuracy of this data is often unknown, which impedes predictive analysis of applications that require high sensor accuracy (e.g., dead reckoning). This work focuses on benchmarking the accuracy of the accelerometers and gyroscopes on a cell phone. The cell phones are attached to a robotic arm, which provides ground truth measurements. The misalignment between the cell phone's and the ...

  10. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to use, either to transport information more easily throughout the corporation or as a conduit for

  11. Energy-efficient Benchmarking for Energy-efficient Software

    OpenAIRE

    Pukhkaiev, Dmytro

    2016-01-01

    With the continuous growth of computing systems, the energy efficiency of their processes becomes ever more important. Different configurations, implying different energy efficiency of the system, can be used to perform a process. A configuration denotes the choice among different hardware and software settings (e.g., CPU frequency, number of threads, the concrete algorithm, etc.). The identification of the most energy-efficient configuration demands benchmarking all ...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  15. COMPUTING

    CERN Document Server

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  18. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  19. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze a library of observed spectra with high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
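
    For readers unfamiliar with the notation, the metallicity values tabulated in such studies are iron abundances on the standard logarithmic solar-relative scale (a general convention, not specific to this paper):

      $\mathrm{[Fe/H]} \equiv \log_{10}\left(N_{\rm Fe}/N_{\rm H}\right)_\star - \log_{10}\left(N_{\rm Fe}/N_{\rm H}\right)_\odot,$

    so that $\mathrm{[Fe/H]} = 0$ corresponds to the solar iron-to-hydrogen ratio and $-1$ to one tenth of it.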

  20. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes are benchmarked against IFA measurements of the IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculational effort and presents new findings on the thermal hydraulics of research reactors that have not been reported before. • LOFA discrepancies of 7% to 20% for coolant and peak cladding temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, validation of the applied codes, and application of the validated codes to perform comprehensive safety analysis of RRs. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and a loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation, and presents therefore relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  1. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline, at which the details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
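
    To make the first two of these metrics concrete, a minimal sketch follows (our own NumPy illustration; function and variable names are ours, not from the HOME benchmark):

      import numpy as np

      def centered_rmse(homogenized, truth):
          """Centered RMSE: each series' own mean is removed first,
          so a constant offset does not count as an error."""
          h = homogenized - homogenized.mean()
          t = truth - truth.mean()
          return np.sqrt(np.mean((h - t) ** 2))

      def trend_error(homogenized, truth):
          """Difference in fitted linear trend (per time step) between
          the homogenized series and the true homogeneous series."""
          x = np.arange(len(truth))
          return np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0]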

  2. Benchmarking and testing the "Sea Level Equation"

    Science.gov (United States)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can be often attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and
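
    For orientation, one widely used form of the SLE (following pseudo-spectral treatments in the GIA literature; the notation is ours and details vary between the benchmarked codes) reads

      $S = \frac{\rho_i}{\gamma}\, G_s \otimes_i I + \frac{\rho_w}{\gamma}\, G_s \otimes_o S + S^E - \overline{\frac{\rho_i}{\gamma}\, G_s \otimes_i I} - \overline{\frac{\rho_w}{\gamma}\, G_s \otimes_o S},$

    where $S$ is the sea level change, $I$ the ice thickness variation, $G_s$ the sea level Green's function, $\otimes_i$ and $\otimes_o$ spatio-temporal convolutions over the ice- and ocean-covered regions, $\gamma$ the reference surface gravity, $S^E$ the eustatic (mass-conserving) term, and overbars denote ocean averages. Since $S$ appears on both sides, the equation is solved iteratively.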

  3. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  4. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  5. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...
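
    The paper's changes are to the FreeBSD NFS server in C; purely to illustrate what a stride-aware read-ahead heuristic does, here is a toy sketch (our own simplification, not the authors' code):

      class StrideReadahead:
          """Toy predictor: detect a constant stride in recent request
          offsets and prefetch the next blocks along that stride."""

          def __init__(self, depth=4):
              self.depth = depth          # number of blocks to prefetch
              self.prev_offset = None
              self.prev_delta = None

          def on_read(self, offset):
              prefetch = []
              if self.prev_offset is not None:
                  delta = offset - self.prev_offset
                  # Two equal consecutive deltas establish a trustworthy
                  # stride (sequential access is the special case where
                  # delta equals the block size); a one-off reordered
                  # request simply fails the equality test once.
                  if delta == self.prev_delta and delta != 0:
                      prefetch = [offset + delta * k
                                  for k in range(1, self.depth + 1)]
                  self.prev_delta = delta
              self.prev_offset = offset
              return prefetch

      ra = StrideReadahead()
      for off in (0, 16384, 32768, 49152):    # stride of two 8 KiB blocks
          print(off, ra.on_read(off))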

  6. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient
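
    The reactivity quantities measured in such startup tests are conventionally derived from the effective multiplication factor; a minimal sketch of the standard conversions follows (generic formulas, not code from the paper; the beta_eff figure is an assumed typical value):

      def reactivity(k_eff):
          """Reactivity from the effective multiplication factor."""
          return (k_eff - 1.0) / k_eff

      def in_dollars(rho, beta_eff=0.007):
          """Reactivity in dollars; beta_eff ~ 0.007 is a typical
          delayed-neutron fraction assumed for a TRIGA-type core."""
          return rho / beta_eff

      print(in_dollars(reactivity(1.015)))   # about 2.1 dollars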

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  8. Verification of the code DYN3D/R with the help of international benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, a good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)
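
    Eigenvalue deviations in such code-to-reference comparisons are conventionally quoted in pcm (per cent mille); a one-line illustration with made-up numbers (ours, not from the report):

      def dk_pcm(k_calc, k_ref):
          """Eigenvalue deviation in pcm (1 pcm = 1e-5 in k_eff)."""
          return (k_calc - k_ref) * 1e5

      print(dk_pcm(1.02912, 1.02903))   # about 9 pcm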

  9. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  10. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  11. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  12. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  13. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks at close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. The tape utilisation was a focus for the operation teams, with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  15. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  16. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  18. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  19. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article takes a close look at the concept of benchmarking by presenting and discussing different facets of it. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  20. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  1. Benchmarking Implementations of Functional Languages with "Pseudoknot", a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  2. SARNET benchmark on QUENCH-11. Final report

    International Nuclear Information System (INIS)

    computational results can be defined. Larger discrepancies are seen in the hydrogen production and the related oxide scale thickness. Analysis shows that the agreement between calculated and experimental data is determined both by limitations of the severe accident codes and of the experiment. Severe accident codes are intended and developed to analyze typical accident situations in nuclear reactors. Special features of the experimental set-up of integral tests like QUENCH-11, such as the presence of a shroud and electrode materials for the electric heating, are irrelevant for reactors and cannot be simulated in the desirable detail. User effects add to the problems. However, a limited bandwidth of some calculated mainstream results, including hydrogen production, is a good outcome of the code benchmark. In view of other experiments, a further demand is seen for improvement concerning the oxidation of severely damaged structures during a reflood scenario. Additionally, the benchmark proved to be valuable for a number of participants in becoming acquainted with the physical problems and with the application of large severe accident codes. For the transfer of knowledge and experience to younger scientists and engineers, this is an important issue in maintaining the standard of nuclear safety. (orig.)

  3. SARNET benchmark on QUENCH-11. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, A. [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. for Nuclear Research and Nuclear Energy; Drath, T. [Ruhr-Univ. Bochum (Germany). Energy Systems and Energy Economics; Duspiva, J. [Nuclear Research Inst., Rez (CZ). Dept. of Reactor Technology] (and others)

    2008-03-15

    computational results can be defined. Larger discrepancies are seen in the hydrogen production and the related oxide scale thickness. Analysis shows that the agreement between calculated and experimental data is determined both by limitations of the severe accident codes and of the experiment. Severe accident codes are intended and developed to analyze typical accident situations in nuclear reactors. Special features of the experimental set-up of integral tests like QUENCH-11, such as the presence of a shroud and electrode materials for the electric heating, are irrelevant for reactors and cannot be simulated in the desirable detail. User effects add to the problems. However, a limited bandwidth of some calculated mainstream results, including hydrogen production, is a good outcome of the code benchmark. In view of other experiments, a further demand is seen for improvement concerning the oxidation of severely damaged structures during a reflood scenario. Additionally, the benchmark proved to be valuable for a number of participants in becoming acquainted with the physical problems and with the application of large severe accident codes. For the transfer of knowledge and experience to younger scientists and engineers, this is an important issue in maintaining the standard of nuclear safety. (orig.)

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS and its components are now also installed at CERN, adding to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  5. Development of solutions to benchmark piping problems. [EPIPE code

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M.; Chang, T.Y.; Prachuktam, S.

    1976-01-01

    Piping analysis is one of the most extensive engineering efforts required for the design of nuclear reactors. Such analysis is normally carried out by use of computer programs which can handle complex piping geometries and various loading conditions (static or dynamic). A brief outline is presented of the theoretical background for the EPIPE program, together with four benchmark problems: two for the static case and two for the dynamic case. The results obtained from EPIPE runs compare well with those available from known analytical solutions or from other independent computer programs.

  6. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator 'ITAS'. ITAS is a nonlinear elasto-plastic analysis program using the finite element method for the simulation of sheet metal forming. ITAS adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step, and utilizes the implicit method with a direct linear equation solver. The simulator is therefore very robust, but it requires a lot of computational time and memory capacity. In developing the parallel benchmark code, we parallelized the code with MPI programming to reduce the computational time. In numerical experiments on five parallel supercomputers at CCSE JAERI, i.e., SP2, SR2201, SX-4, T94 and VPP300, good performance is observed. The results will be made public on the WWW so that the benchmark results may serve as a guideline for research and development of parallel programs. (author)
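
    As a generic sketch of how the wall-clock time of such an MPI benchmark kernel is measured (using the mpi4py bindings; the toy workload below stands in for the FEM solver and is not the ITAS code):

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.random.rand(1_000_000 // size)   # this rank's share of the work

      comm.Barrier()                    # synchronise all ranks before timing
      t0 = MPI.Wtime()
      partial = local.sum()             # stand-in for the solver kernel
      total = comm.reduce(partial, op=MPI.SUM, root=0)
      comm.Barrier()
      elapsed = MPI.Wtime() - t0

      if rank == 0:
          print(f"{size} ranks: {elapsed:.4f} s, checksum {total:.3f}")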

  7. Coded nanoscale self-assembly

    Indian Academy of Sciences (India)

    Prathyush Samineni; Debabrata Goswami

    2008-12-01

    We demonstrate coded self-assembly in nanostructures using the code seeded at the component level through computer simulations. Defects or cavities occur in all natural assembly processes, including crystallization, and our simulations capture this essential aspect under surface-minimization constraints for self-assembly. Our bottom-up approach to nanostructures would provide a new dimension for nanofabrication and a better understanding of defects and the crystallization process.

  8. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  9. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.
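
    The "less than ideal speedup" discussed by the authors is usually quantified with the standard definitions sketched below (a generic illustration, not the paper's Quantitative Model; the numbers are invented):

      def speedup(t1, tp):
          """Ratio of single-processor time to p-processor time."""
          return t1 / tp

      def efficiency(t1, tp, p):
          """Fraction of ideal linear speedup actually achieved."""
          return speedup(t1, tp) / p

      def amdahl(f, p):
          """Amdahl's law: ideal speedup on p processors when a
          fraction f of the work is parallelizable."""
          return 1.0 / ((1.0 - f) + f / p)

      print(speedup(100.0, 16.0))        # 6.25x on 8 CPUs
      print(efficiency(100.0, 16.0, 8))  # ~0.78 of ideal
      print(amdahl(0.95, 8))             # ~5.9x ceiling if 5% is serial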

  10. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
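
    For concreteness, the closed-form benchmarks in this line of work include (as we recall them from the published version; readers should check the paper for the exact statements) the two-way capacity of the quantum-limited amplifier of gain $g$, the capacity of the dephasing channel with dephasing probability $p$, and the secret key capacity of the qubit erasure channel with erasure probability $p$:

      $C_{\rm amp}(g) = -\log_2\left(1 - g^{-1}\right), \qquad C_{\rm deph}(p) = 1 - H_2(p), \qquad K_{\rm erasure}(p) = 1 - p,$

    where $H_2$ denotes the binary entropy function.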

  11. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  12. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  13. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  14. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  16. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This paper describes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  17. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
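
    As an illustration of presenting an uncertainty-quantification requirement to a SPARQL-capable tool, here is a toy schema of our own devising (using the rdflib library; the ex: vocabulary is hypothetical and not from the report):

      from rdflib import Graph

      data = """
      @prefix ex: <http://example.org/bench#> .
      ex:m1 ex:value 12.7 ; ex:stdDev 0.3 .
      ex:m2 ex:value 9.1  ; ex:stdDev 1.8 .
      """

      g = Graph()
      g.parse(data=data, format="turtle")

      # Keep only measurements whose relative uncertainty is below 10%
      q = """
      PREFIX ex: <http://example.org/bench#>
      SELECT ?m ?v ?s WHERE {
        ?m ex:value ?v ; ex:stdDev ?s .
        FILTER (?s / ?v < 0.10)
      }
      """
      for row in g.query(q):
          print(row.m, row.v, row.s)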

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    Title 42, Public Health, Vol. 4 (2010-10-01). SERVICES (CONTINUED): MEDICAL ASSISTANCE PROGRAMS; SERVICES: GENERAL PROVISIONS; Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  19. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of the types of synergy is developed. The sources of synergy in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  20. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  1. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practition

  2. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, a conceptual framework and a computerized mini-program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages of implementation, and on benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners, who assessed the usability and practicability of the guideline, conceptual framework and computerized mini-program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provides a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini-program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking and guided them toward achieving the desired goals set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  3. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  4. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing, which would allow one to quantify and compare the performance of

  5. Benchmarking implementations of lazy functional languages

    NARCIS (Netherlands)

    Hartel, P.H.; Langendoen, K.G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of t

  6. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  7. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  8. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    Science.gov (United States)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome-shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions of this benchmark, due to the nonlinearity of the strain paths involved in this benchmark. The benchmark tool is designed to enable a two-stage draw/reverse draw continuous forming process. Three typical sheet materials, AA5182-O Aluminum, and DP600 and TRIP780 Steels, are selected for this benchmark study.

  9. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    Benchmarking has become a fundamental part of modern health care systems, but unfortunately, no benchmarking framework is unanimously accepted for assessing both quality and performance. The aim of this paper is to present a benchmarking model that is able to take different stakeholder perspectives...... into account. By presenting performance as a function of a patient perspective, an operations management perspective, and an employee perspective a more holistic approach to benchmarking is proposed. By collecting statistical information from several national and regional agencies and internal databases......, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the model...

  10. Influence of the solvent on the self-assembly of a modified amyloid beta peptide fragment. II. NMR and computer simulation investigation.

    Science.gov (United States)

    Hamley, I W; Nutt, D R; Brown, G D; Miravet, J F; Escuder, B; Rodríguez-Llansola, F

    2010-01-21

    The conformation of a model peptide AAKLVFF based on a fragment of the amyloid beta peptide Abeta16-20, KLVFF, is investigated in methanol and water via solution NMR experiments and molecular dynamics computer simulations. In previous work, we have shown that AAKLVFF forms peptide nanotubes in methanol and twisted fibrils in water. Chemical shift measurements were used to investigate the solubility of the peptide as a function of concentration in methanol and water. This enabled the determination of critical aggregation concentrations. The solubility was lower in water. In methanol, diffusion coefficients revealed the presence of intermediate aggregates in dilute solution, coexisting in more concentrated solution with NMR-silent larger aggregates, presumed to be beta-sheets. In water, diffusion coefficients did not change appreciably with concentration, indicating the presence mainly of monomers, coexisting with larger aggregates in more concentrated solution. Concentration-dependent chemical shift measurements indicated a folded conformation for the monomers/intermediate aggregates in dilute methanol, with unfolding at higher concentration. In water, an antiparallel arrangement of strands was indicated by certain ROESY peak correlations. The temperature-dependent solubility of AAKLVFF in methanol was well described by a van't Hoff analysis, providing a solubilization enthalpy and entropy. This pointed to the importance of solvophobic interactions in the self-assembly process. Molecular dynamics simulations constrained by NOE values from NMR suggested disordered reverse turn structures for the monomer, with an antiparallel twisted conformation for dimers. To model the beta-sheet structures formed at higher concentration, possible model arrangements of strands into beta-sheets with parallel and antiparallel configurations and different stacking sequences were used as the basis for MD simulations; two particular arrangements of antiparallel beta-sheets were found to be stable, one
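
    For reference, the van't Hoff analysis mentioned above conventionally fits the temperature-dependent solubility s(T) to a form linear in 1/T; the relation below is the textbook expression, not one quoted from the paper itself:

    ```latex
    \ln s(T) = -\frac{\Delta H_{\mathrm{sol}}}{R}\,\frac{1}{T} + \frac{\Delta S_{\mathrm{sol}}}{R}
    ```

    A linear fit of ln s against 1/T then gives the solubilization enthalpy from the slope and the solubilization entropy from the intercept.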

  11. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 2. Sensitivity Studies for MSLB Exercises 2 and 3 with RELAP5/PANBOX

    International Nuclear Information System (INIS)

    As a contribution to the verification and validation of the RELAP5/PANBOX coupled code system (R/P/C), we took part in the Main-Steam-Line-Break (MSLB) Benchmark issued by OECD/NEA. Sensitivity studies with respect to external/internal integration and coarse/fine channel representation have already been presented for Exercise 2. The purpose of this paper is to extend the sensitivity studies to Exercise 3 and to present local results for safety-related parameters. R/P/C is a nuclear plant safety analysis code system that consists of the PANBOX core simulator coupled to the RELAP5 best-estimate plant simulator. The coupling is performed via the EUMOD RELAP5 interface package. R/P/C has the capabilities of RELAP5 with the added ability to calculate three-dimensional (3-D) neutronics and thermal margins with COBRA, the core thermal-hydraulic module of PANBOX. The neutronics nodalization is radially based on one node per fuel assembly (FA). Axially, 28 layers are modeled, where the specified mesh sizes are used with the exception of the 2 layers of 29.76 cm, which are subdivided into 4 layers. All calculations use the semi-analytical Nodal Expansion Method. The time discretization is based on the implicit Euler method combined with the exponential transformation technique. In the external integration of R/P/C, the core thermal-hydraulics solution is calculated by COBRA using core inlet boundary conditions from RELAP5. The channel geometry is based on one channel per FA, with 24 axial layers. In the internal integration, the core thermal-hydraulics solution is calculated by RELAP5. The channel geometry is based on 19 coarse channels with 11 axial core layers. R/P/C allows hot subchannel analysis by application of an on-line refinement of channels (HOSCAM). Fuel assembly powers, hot pin powers, and powers of a surrounding subchannel region are passed to COBRA for selected FAs in the external integration option. COBRA performs subchannel analysis by using a

  12. Shielding experimental benchmark storage, retrieval, and display system

    International Nuclear Information System (INIS)

    The complete description of an integral shielding benchmark experiment includes the radiation source, materials, physical geometry, and measurement data. This information is not usually contained in a single document, but must be gathered from several sources, including personal contact with the experimentalists. A comprehensive database of the experimental details is extremely useful and cost-effective in present day computations. Further, experimental data are vulnerable to being lost or destroyed as a result of facility closures, retirement of experimental personnel, and neglect. A standard set of experiments, used globally, establishes a framework to validate and verify models in computer codes and guarantees comparative analyses between different computational systems. SINBAD is a database that was conceived in 1992 to store, retrieve, and display the measurements from international experiments of the past 50 years in nuclear shielding. Based at Oak Ridge National Laboratory's Radiation Safety Information and Computational Center (RSICC), SINBAD has a collection of integral benchmark experiments from around the world. SINBAD is shared with the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency Data Bank, which provides contributions from Europe, Russia, and Japan. (author)

  13. mPUMA: a computational approach to microbiota analysis by de novo assembly of operational taxonomic units based on protein-coding barcode sequences

    Science.gov (United States)

    2013-01-01

    Background Formation of operational taxonomic units (OTU) is a common approach to data aggregation in microbial ecology studies based on amplification and sequencing of individual gene targets. The de novo assembly of OTU sequences has been recently demonstrated as an alternative to widely used clustering methods, providing robust information from experimental data alone, without any reliance on an external reference database. Results Here we introduce mPUMA (microbial Profiling Using Metagenomic Assembly, http://mpuma.sourceforge.net), a software package for identification and analysis of protein-coding barcode sequence data. It was developed originally for Cpn60 universal target sequences (also known as GroEL or Hsp60). Using an unattended process that is independent of external reference sequences, mPUMA forms OTUs by DNA sequence assembly and is capable of tracking OTU abundance. mPUMA processes microbial profiles both in terms of the direct DNA sequence as well as in the translated amino acid sequence for protein coding barcodes. By forming OTUs and calculating abundance through an assembly approach, mPUMA is capable of generating inputs for several popular microbiota analysis tools. Using SFF data from sequencing of a synthetic community of Cpn60 sequences derived from the human vaginal microbiome, we demonstrate that mPUMA can faithfully reconstruct all expected OTU sequences and produce compositional profiles consistent with actual community structure. Conclusions mPUMA enables analysis of microbial communities while empowering the discovery of novel organisms through OTU assembly. PMID:24451012

  14. A BENCHMARK FOR DESIGNING USABLE AND SECURE TEXT-BASED CAPTCHAS

    Directory of Open Access Journals (Sweden)

    Suliman A. Alsuhibany

    2016-07-01

    A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a widely used technique on many websites to protect their online services from malicious users. Two fundamental aspects of CAPTCHAs considered in various studies in the literature are robustness and usability. A widely accepted standard benchmark to guide text-based CAPTCHA developers is not yet available, so this paper proposes a benchmark for designing usable and secure text-based CAPTCHAs based on a community-driven evaluation of the usability and security aspects. Based on this benchmark, we develop four new text-based CAPTCHA schemes and conduct two separate experiments to evaluate both the security and usability perspectives of the developed schemes. The result of this evaluation indicates that the proposed benchmark provides a basis for designing usable and secure text-based CAPTCHAs.

  15. Benchmark solutions for transport in d-dimensional Markov binary mixtures

    CERN Document Server

    Larmier, Coline; Malvagi, Fausto; Mazzolo, Alain; Zoia, Andrea

    2016-01-01

    Linear particle transport in stochastic media is key to such relevant applications as neutron diffusion in randomly mixed immiscible materials, light propagation through engineered optical materials, and inertial confinement fusion, to name only a few. We extend the pioneering work by Adams, Larsen and Pomraning (recently revisited by Brantley) by considering a series of benchmark configurations for mono-energetic and isotropic transport through Markov binary mixtures in dimension d. The stochastic media are generated by resorting to Poisson random tessellations in 1d slab, 2d extruded, and full 3d geometry. For each realization, particle transport is performed by Monte Carlo simulation. The distributions of the transmission and reflection coefficients on the free surfaces of the geometry are subsequently estimated, and the average values over the ensemble of realizations are computed. Reference solutions for the benchmark have never be...
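
    For orientation, in a homogeneous Markov binary mixture the chord lengths of each material are exponentially distributed, and the benchmark observables are ensemble averages over realizations. The relations below are the standard definitions, with the mean chord lengths, volume fractions, and per-realization transmissions written out here only for illustration:

    ```latex
    p_i(\ell) = \frac{1}{\Lambda_i}\,e^{-\ell/\Lambda_i}, \qquad
    p_i = \frac{\Lambda_i}{\Lambda_1 + \Lambda_2}, \qquad
    \langle T \rangle = \frac{1}{M}\sum_{m=1}^{M} T_m
    ```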

  16. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T&M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  17. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
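
    The temporal knockout (TKO) idea described above can be sketched in a few lines: run a spreading process over a sequence of interaction snapshots, then re-run it with one node removed and score the drop in outbreak size. This is a hypothetical illustration, not the authors' code; all names and parameters are assumptions.

    ```python
    # Sketch of a TKO-style score: average reduction in SIR outbreak size
    # when a single node is knocked out of a temporal network.
    import random

    def sir_spread(snapshots, seed, beta=0.3, removed=None):
        """Final outbreak size of an SIR process over temporal edge lists."""
        if removed is not None and seed == removed:
            return 0
        infected, recovered = {seed}, set()
        for edges in snapshots:               # one edge list per time step
            new = set()
            for u, v in edges:
                if removed in (u, v):
                    continue                  # knocked-out node cannot interact
                for a, b in ((u, v), (v, u)):
                    if a in infected and b not in infected | recovered:
                        if random.random() < beta:
                            new.add(b)
            recovered |= infected             # unit infectious period
            infected = new
        return len(recovered | infected)

    def tko_score(snapshots, node, seeds, trials=100):
        """Average spread reduction caused by removing `node`."""
        base = sum(sir_spread(snapshots, s) for s in seeds for _ in range(trials))
        cut = sum(sir_spread(snapshots, s, removed=node)
                  for s in seeds for _ in range(trials))
        return (base - cut) / (len(seeds) * trials)
    ```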

  18. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  19. Earnings Benchmarks in International hotel firms

    Directory of Open Access Journals (Sweden)

    Laura Parte Esteban

    2011-11-01

    This paper focuses on earnings management around earnings benchmarks (the avoiding-losses and avoiding-earnings-decreases hypotheses) in international and non-international firms belonging to the Spanish hotel industry. First, frequency histograms are used to determine the existence of a discontinuity in earnings in both segments. Second, the use of discretionary accruals as a tool to meet earnings benchmarks is analysed in international and non-international firms. Empirical evidence shows that both international and non-international firms meet earnings benchmarks; different behaviour between the two groups is also noted.

  20. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillations observed at Boiling Water Reactor plants. Stability is part of Northern States Power's Reload Safety Evaluation of the Monticello Plant.

  1. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function. In slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is 'mined' to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.

  2. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    International Nuclear Information System (INIS)

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  3. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    Energy Technology Data Exchange (ETDEWEB)

    Parchman, Zachary W [Tennessee Technological University (TTU); Vallee, Geoffroy R [ORNL; Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL; Bernholdt, David E [ORNL; Scott, Stephen L [Tennessee Technological University (TTU)

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS parallel benchmark (NPB) suite to include support of the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault tolerant strategies on the application execution.
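
    The checkpoint/restore strategy described above can be illustrated with a minimal sketch. Note that this plain-Python fragment only shows the application-level checkpointing pattern; it deliberately omits the MPI/ULFM recovery machinery (communicator revocation and shrinking) that the paper's library is built around, and all names below are illustrative assumptions.

    ```python
    # Minimal application-level checkpoint/restore pattern (illustrative only).
    import os, pickle, tempfile

    CKPT = "npb_state.ckpt"

    def checkpoint(state, path=CKPT):
        """Atomically write solver state so a failed write cannot corrupt it."""
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, path)                 # atomic rename

    def restore(path=CKPT):
        """Return the last saved state, or None on a fresh start."""
        if not os.path.exists(path):
            return None
        with open(path, "rb") as f:
            return pickle.load(f)

    state = restore() or {"iteration": 0, "grid": [0.0] * 1024}
    for it in range(state["iteration"], 1000):
        # ... one solver iteration updating state["grid"] would go here ...
        state["iteration"] = it + 1
        if it % 50 == 0:                      # checkpoint interval
            checkpoint(state)
    ```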

  4. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-11-01

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  5. Criticality benchmarking of ANET Monte Carlo code

    International Nuclear Information System (INIS)

    In this work the new Monte Carlo code ANET is tested on criticality calculations. ANET is developed based on the high energy physics code GEANT of CERN and aims at progressively satisfying several requirements regarding both simulations of GEN II/III reactors, as well as of innovative nuclear reactor designs such as Accelerator Driven Systems (ADSs). Here ANET is applied to three different nuclear configurations, including a subcritical assembly, a Material Testing Reactor and the conceptual configuration of an ADS. In the first case, calculations of the effective multiplication factor (keff) are performed for the Training Nuclear Reactor of the Aristotle University of Thessaloniki, while in the second case keff is computed for the fresh fueled core of the Portuguese research reactor (RPI) just after its conversion to Low Enriched Uranium, considering the control rods at the position that renders the reactor critical. In both cases ANET computations are compared with corresponding results obtained by three different well established codes, including both deterministic (XSDRNPM/CITATION) and Monte Carlo (TRIPOLI, MCNP). In the RPI case, keff computations are also compared with observations during the reactor core commissioning, since the control rods are considered at the criticality position. The above verification studies show ANET to produce reasonable results that compare satisfactorily with other models as well as with observations. For the third case (ADS), preliminary ANET computations of keff for various intensities of the proton beam are presented, also showing reasonable code performance concerning both the order of magnitude and the relative variation of the computed parameter. (author)

  6. Simultaneous Assembly of Multiple Test Forms

    NARCIS (Netherlands)

    Linden, van der Wim J.; Adema, Jos J.

    1998-01-01

    An algorithm for the assembly of multiple test forms is proposed in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. At each step, one form is assembled to its true specifications; the other form is a dummy assembled only to maintain a balan
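
    The reduction described above (solving a series of two-form problems instead of one large multiple-form problem) can be sketched as follows. The greedy selection rule here is only a stand-in for the 0-1 programming typically used in automated test assembly; all names and parameters are illustrative assumptions.

    ```python
    # Sketch: assemble K forms one at a time; at each step one "true" form is
    # built to specification while the untouched remainder of the item pool
    # plays the role of the dummy form that preserves balance for later steps.
    def assemble_forms(pool, spec_len, n_forms, target_info):
        forms = []
        remaining = dict(pool)                # item id -> information value
        for _ in range(n_forms):
            # greedily pick items whose information is closest to the target
            ranked = sorted(remaining,
                            key=lambda i: abs(remaining[i] - target_info))
            form = ranked[:spec_len]          # the "true" form for this step
            for item in form:
                del remaining[item]           # dummy form = untouched remainder
            forms.append(form)
        return forms

    pool = {f"item{i}": 0.5 + 0.01 * (i % 40) for i in range(400)}
    print(assemble_forms(pool, spec_len=30, n_forms=4, target_info=0.7))
    ```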

  7. Benchmark physics experiment of metallic-fueled LMFBR at FCA

    International Nuclear Information System (INIS)

    A benchmark physics experiment on a metallic-fueled LMFBR was performed at the Japan Atomic Energy Research Institute's Fast Critical Assembly (FCA) in order to examine the availability of data and methods for the design of metallic-fueled cores. The nuclear data and calculation methods used for LMFBR core design have been improved based on oxide-fuel core experiments. A metallic-fueled core has a harder neutron spectrum than an oxide-fueled core and has typical nuclear characteristics affected by the neutron spectrum. In this study, the applicability of the conventional calculation method to the design of the metallic-fueled core was examined by comparing calculated values of the nuclear characteristics with measured values. The experimental core (FCA assembly XVI-1) was selected by referring to the conceptual design of the Central Research Institute of Electric Power Industry. The calculated-to-experiment (C/E) value for keff of assembly XVI-1 was 1.001. From this, as far as criticality is concerned, the prediction accuracy of the conventional calculation for the metallic-fueled core was concluded to be similar to that for an oxide-fueled core. (author)

  8. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  9. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal;

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi... assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data...

  10. Planeación asistida por computadora del proceso tecnológico de ensamble // Computer-aided planning of the assembly technological process

    Directory of Open Access Journals (Sweden)

    L. L. Tomás García

    2008-01-01

    This work addresses the multi-criteria optimization of mechanical assembly process planning starting from the three-dimensional geometric model of the assembly. It rests on an approach that integrates both geometric information and technological constraints of the assembly process. The work demonstrates that, once the three-dimensional geometric model of an assembly is known, applying technological and geometric criteria to the inverse disassembly process and subsequently treating the result with evolutionary methods generates mechanical assembly plans close to the optimum according to the decision-maker's preferences. Integrating this information reduces the number of sequences to evaluate and of elements to process, thereby avoiding the generation and evaluation of all possible sequences, with a consequent reduction in processing time. As a result of applying the proposed integrated model, a mechanical assembly process plan is obtained with a reduced assembly time, since the resulting sequences reduce the number of changes of assembly direction, of tools and of workstations, and minimize the distance travelled due to workstation changes. This is achieved through a multiobjective optimization model based on genetic algorithms. Keywords: mechanical assembly, genetic algorithms, multiobjective optimization.

  11. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
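
    The performance-model step described above can be approximated, at its crudest, by a roofline-style bound in which a kernel's time is limited either by compute throughput or by memory traffic. The sketch below is a generic illustration under assumed peak numbers; it is not the paper's actual Cell model, and the figures used are placeholders, not measured hardware data.

    ```python
    # Roofline-style estimate: a kernel is bounded by compute or by memory
    # traffic, whichever takes longer.
    def roofline_time(flops, bytes_moved, peak_gflops, peak_gbs):
        """Lower-bound kernel time under peak compute and bandwidth limits."""
        return max(flops / (peak_gflops * 1e9), bytes_moved / (peak_gbs * 1e9))

    # e.g. sparse matrix-vector multiply: ~2 flops and ~12 bytes per nonzero
    nnz = 5_000_000
    t = roofline_time(2 * nnz, 12 * nnz, peak_gflops=200.0, peak_gbs=25.0)
    print(f"estimated SpMV time: {t * 1e3:.2f} ms (memory bound)")
    ```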

  12. Reference Solutions for Benchmark Turbulent Flows in Three Dimensions

    Science.gov (United States)

    Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.

    2016-01-01

    A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration, and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model, using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely-used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.

  13. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  14. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  15. International piping benchmarks: Use of simplified code PACE 2

    International Nuclear Information System (INIS)

    This report compares the results obtained using the code PACE 2 with the International Working Group on Fast Reactors (IWGFR) International Piping Benchmark solutions. PACE 2 is designed to analyse systems of pipework using a simplified method which is economical of computer time and hence inexpensive. This low cost is not achieved without some loss of accuracy in the solution, but for most parts of a system this inaccuracy is acceptable and those sections of particular importance may be reanalysed using more precise methods in order to produce a satisfactory analysis of the complete system at reasonable cost. (author)

  16. National Energy Software Center: benchmark problem book. Revision

    Energy Technology Data Exchange (ETDEWEB)

    None

    1985-12-01

    Computational benchmarks are given for the following problems: (1) a finite-difference, diffusion theory calculation of a highly nonseparable reactor; (2) iterative solutions for a multigroup two-dimensional neutron diffusion HTGR problem; (3) a reference solution to the two-group diffusion equation; (4) one-dimensional neutron transport transient solutions; (5) a test of the capabilities of multigroup multidimensional kinetics codes in a heavy water reactor; (6) a test of the capabilities of multigroup neutron diffusion codes in an LMFBR; and (7) two-dimensional PWR models.

  17. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models [1]. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental
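
    One way to read the period scaling quoted above is as an order-one Strouhal number; this is a standard nondimensionalization rather than a result specific to this benchmark:

    ```latex
    T \sim \frac{D}{U}
    \quad\Longleftrightarrow\quad
    \mathrm{St} = \frac{f\,D}{U} = \frac{D}{T\,U} = O(1)
    ```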

  18. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    Energy Technology Data Exchange (ETDEWEB)

    Tisseur, D., E-mail: david.tisseur@cea.fr; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G. [CEA LIST, CEA Saclay 91191 Gif sur Yvette Cedex (France)]; Sollier, T. [Institut de Radioprotection et de Sûreté Nucléaire, B.P.17 92262 Fontenay-Aux-Roses (France)]

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  19. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  20. NERSC-6 Workload Analysis and Benchmark Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Antypas, Katie; Shalf, John; Wasserman, Harvey

    2008-08-29

    This report describes efforts carried out during early 2008 to determine some of the science drivers for the "NERSC-6" next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and descriptions of key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.

  1. Results of the isotopic concentrations of VVER calculational burnup credit benchmark no. 2(cb2

    International Nuclear Information System (INIS)

    The characterization of irradiated fuel materials is becoming more important with the increasing use of nuclear energy in the world. The purpose of this document is to present the results of the nuclide concentrations calculated for the VVER calculational burnup credit benchmark No. 2 (CB2). The calculations were performed at the Nuclear Technology Center of Cuba. The CB2 benchmark specification, as the second phase of the VVER burnup credit benchmark, is summarized in [1]. The CB2 benchmark focuses on the VVER burnup credit study proposed at the 1997 AER Symposium [2]. It should provide a comparison of the ability of various code systems and data libraries to predict VVER-440 spent fuel isotopic concentrations using depletion analysis. This phase of the benchmark calculations is still in progress; CB2 should be finished by summer 1999, and the evaluated results could be presented at the next AER Symposium. The results obtained are isotopic concentrations of spent fuel as a function of burnup and cooling time. The point-depletion code ORIGEN2 [3] was used for the calculation of the spent fuel concentrations. The depletion analysis was performed for VVER-440 irradiated fuel assemblies with an in-core irradiation time of 3 years, a burnup of 30000 MWd/tU, and an after-discharge cooling time of 0 and 1 year. This work also comprises the results obtained by other codes [4].
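
    To make the cooling-time dependence concrete, a toy post-discharge decay step (no transmutation chains) can be written as follows. ORIGEN2 itself solves the full coupled Bateman equations, so this is only a sketch, and the half-lives below are approximate textbook values:

    ```python
    # Toy cooling step: pure radioactive decay of each nuclide inventory.
    import math

    HALF_LIFE_YR = {"Cs-137": 30.1, "Sr-90": 28.8, "Pu-241": 14.3}

    def decay(inventory, years):
        """Inventory after `years` of cooling, nuclide by nuclide (no chains)."""
        out = {}
        for nuc, n0 in inventory.items():
            lam = math.log(2) / HALF_LIFE_YR[nuc]   # decay constant, 1/yr
            out[nuc] = n0 * math.exp(-lam * years)
        return out

    print(decay({"Cs-137": 1.0, "Sr-90": 1.0, "Pu-241": 1.0}, years=1.0))
    ```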

  2. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  3. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  4. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...

  5. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  6. Benchmarking implementations of lazy functional languages

    OpenAIRE

    Hartel, P.H.; Langendoen, K. G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of the quality of compilers for different lazy functional languages. Aspects studied include compile time, execution time, and ease of programming, as determined by the availability of certain key features.

  7. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  8. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
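
    As one hedged illustration of the scoring-system idea mentioned above (a way to combine data-model mismatches into a single number), each variable's mismatch could be normalized against the observed spread and the per-variable scores combined with weights. The skill formula and weights here are illustrative choices, not the paper's metric:

    ```python
    # Illustrative benchmark scoring: normalized mismatch per variable,
    # combined as a weighted mean across variables.
    def skill(model, obs):
        """1 is a perfect match; 0 means errors as large as the observed spread."""
        n = len(obs)
        mean_obs = sum(obs) / n
        rmse = (sum((m - o) ** 2 for m, o in zip(model, obs)) / n) ** 0.5
        spread = (sum((o - mean_obs) ** 2 for o in obs) / n) ** 0.5
        if spread == 0:
            return 0.0
        return max(0.0, 1.0 - rmse / spread)

    def benchmark_score(results, weights):
        """results: variable -> (model series, obs series); weighted mean skill."""
        total = sum(weights.values())
        return sum(w * skill(*results[v]) for v, w in weights.items()) / total
    ```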

  9. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family.

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: A cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: A T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; a T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmark ranges of performance (<6, 6 to 7.99, and ≥8 seconds). PMID:24174581
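
    The reported benchmark bands translate directly into a trivial classifier; the labels below paraphrase the clinical anchors quoted in the abstract and are not the authors' terminology:

    ```python
    # T25FW benchmark bands (times in seconds), encoded for illustration.
    def t25fw_band(seconds):
        if seconds < 6:
            return "preserved ambulation"
        if seconds < 8:
            return "cane / some help with instrumental ADLs"
        return "walker / unable to do instrumental ADLs"

    for t in (4.2, 6.5, 9.1):
        print(t, "->", t25fw_band(t))
    ```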

  12. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  13. United assembly algorithm for optical burst switching

    Institute of Scientific and Technical Information of China (English)

    Jinhui Yu(于金辉); Yijun Yang(杨教军); Yuehua Chen(陈月华); Ge Fan(范戈)

    2003-01-01

    Optical burst switching (OBS) is a promising optical switching technology. The burst assembly algorithm controls burst assembly, which significantly impacts the performance of an OBS network. This paper proposes a new assembly algorithm, the united assembly algorithm, which is more practical than conventional algorithms. In addition, some factors affecting the selection of the algorithm's parameters are discussed, and the performance of the algorithm is studied by computer simulation.
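
    The abstract does not spell out the united algorithm's exact rules, but burst assemblers of this kind commonly combine a length threshold with a timer threshold. A minimal sketch of such a generic hybrid scheme, with all parameter values illustrative:

    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class HybridBurstAssembler:
        """Per-destination burst assembler combining a length threshold and a
        timer threshold (generic hybrid sketch; thresholds are illustrative)."""
        max_burst_bytes: int = 64_000
        max_delay_s: float = 0.005
        queue: List[int] = field(default_factory=list)   # queued packet sizes
        first_arrival: Optional[float] = None

        def on_packet(self, size_bytes: int, now_s: float):
            """Queue a packet; emit a burst once the length threshold is reached."""
            if not self.queue:
                self.first_arrival = now_s
            self.queue.append(size_bytes)
            if sum(self.queue) >= self.max_burst_bytes:
                return self._emit()
            return None

        def on_tick(self, now_s: float):
            """Emit a burst if the oldest queued packet has waited too long."""
            if self.queue and now_s - self.first_arrival >= self.max_delay_s:
                return self._emit()
            return None

        def _emit(self):
            burst, self.queue, self.first_arrival = self.queue, [], None
            return burst
    ```

    The timer bounds assembly delay for light traffic while the size threshold bounds burst length under heavy traffic, which is the usual motivation for hybrid schemes.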

  14. Verification of the MVP-II and SRAC2006 codes against the VERA core physics benchmark problem

    International Nuclear Information System (INIS)

    In this research, a verification calculation was performed for the VERA core physics benchmark covering the zero power physics tests (ZPPT) of the Watts Bar Unit 1 nuclear reactor. The reactor is a 1000 MWe class PWR designed by Westinghouse, arranged from 193 units of 17 x 17 fuel assemblies comprising three UO2 enrichment types: 2.1 wt%, 2.619 wt%, and 3.1 wt%. Core power distribution and k-eff calculations were carried out for the first cycle of the core at beginning of cycle (BOC) and hot zero power (HZP) conditions. The MVP-II code and the CITATION module of the SRAC2006 code system were used with the ENDF/B-VII.0 cross-section data library. The results show that the differences in k-eff between the reference and MVP-II (-0.07% and -0.014%) and between the reference and SRAC2006 (0.92% and 0.99%) for the controlled and uncontrolled core, respectively, are very small, i.e., below 1%. The differences in the radial power peaking factor between the reference values and MVP-II are 0.38% and 1.53% for the controlled and uncontrolled core, while for SRAC2006 they are 1.13% and -2.45%. The results of both computer codes thus agree with the reference values. For determining the criticality of the core, the MVP-II results are more conservative than those of SRAC2006. (author)
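
    The percent differences quoted above follow the usual convention for comparing a calculated multiplication factor against a reference value (a standard definition; the record does not state the formula explicitly):

    \[
    \varepsilon \;=\; \frac{k_{\text{code}} - k_{\text{ref}}}{k_{\text{ref}}} \times 100\%
    \]

    so, for example, an entry of -0.07% means the MVP-II k-eff lies 0.07% below the reference.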

  15. Planificación y optimización asistida por computadora de secuencias de ensamble mecánico // Computer-aided planning and optimization for mechanical assembly.

    Directory of Open Access Journals (Sweden)

    L. L. Tomás-García

    2009-01-01

    Full Text Available This work addresses the generation, planning, and optimization of mechanical assembly sequences starting from the three-dimensional geometric model of the assembly. It is supported by an approach that integrates both geometric information and technological constraints of the assembly process. The work demonstrates that, once the three-dimensional geometric model of an assembly is known, applying technological and geometric criteria to the inverse process of disassembly, and then treating the result with evolutionary algorithms, generates an optimized plan for the mechanical assembly process. Integrating this information reduces the number of sequences to evaluate and of elements to process, which avoids generating and evaluating all possible sequences, with a consequent reduction in processing time. As a result of applying the proposed integrated model, an assembly process plan is obtained with reduced assembly time, because the resulting assembly sequences contain fewer changes of assembly direction, fewer tool and workstation changes, and minimize the distance traveled due to workstation changes. This is achieved through a multi-objective optimization model based on evolutionary algorithms. Keywords: mechanical assembly, genetic algorithms, multi-objective optimization. Abstract: This work deals with the combinatorial problem of generating and optimizing feasible assembly sequences and doing the process planning involving tools and work places. The assembly sequences are obtained from a 3D model of the assembled parts based on mating conditions along with a set of technological criteria, which allows automatically analyzing and generating the sequences. The generated
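
    The evolutionary search described above can be sketched compactly. The snippet below is a scalarized (weighted-sum) stand-in for the paper's multi-objective model: it scores a candidate sequence by its direction, tool, and workstation changes and evolves permutations by mutation. All weights and names are assumptions, and the feasibility (precedence) constraints derived from disassembly analysis are omitted for brevity:

    ```python
    import random

    W_DIRECTION, W_TOOL, W_STATION = 1.0, 2.0, 3.0  # illustrative cost weights

    def sequence_cost(seq, direction, tool, station):
        """Weighted count of direction, tool, and workstation changes."""
        cost = 0.0
        for a, b in zip(seq, seq[1:]):
            cost += W_DIRECTION * (direction[a] != direction[b])
            cost += W_TOOL * (tool[a] != tool[b])
            cost += W_STATION * (station[a] != station[b])
        return cost

    def mutate(seq):
        """Swap two parts -- a common permutation mutation operator."""
        i, j = random.sample(range(len(seq)), 2)
        seq = list(seq)
        seq[i], seq[j] = seq[j], seq[i]
        return seq

    def evolve(parts, direction, tool, station, generations=200, pop_size=30):
        population = [random.sample(parts, len(parts)) for _ in range(pop_size)]
        score = lambda s: sequence_cost(s, direction, tool, station)
        for _ in range(generations):
            population.sort(key=score)
            survivors = population[: pop_size // 2]
            population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        return min(population, key=score)
    ```

    A real planner would mutate only within the feasible sequences derived from the geometric model and keep the objectives separate (e.g., Pareto ranking) rather than summing them.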

  16. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high-intensity, low-energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  17. Generation of accurate benchmarks for transport in stochastic media by means of dynamic error control

    International Nuclear Information System (INIS)

    The assessment of proposed numerical methods for radiation transport in stochastic media requires accurate benchmarks for comparison. Although published benchmarks exist, the errors in these benchmarks were minimized through conservative approaches rather than by explicit measurement and estimation. We report on our efforts to directly control the statistical, discretization, and iterative errors of benchmark calculations for transport in such media. We examine a variety of error control mechanisms and measure their effect on the actual error in the subsequent results. We find that simultaneous control of all of the desired quantities of interest yields better results than the use of a single control mechanism. We also find that the ensemble-averaged results are less sensitive to error control issues than individual realizations are. In addition, we show that published benchmarks are generally over-resolved for the desired level of statistical error, but that in a few cases an insufficient number of realizations was used. These observations are useful for more computationally efficient generation of stochastic media benchmarks with better characterized errors. (author)

  18. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  19. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    A method for classifying benchmark results of criticality calculations according to similarity was proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated boiling water reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated pressurized water reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When two benchmark calculation results belonged to the same group, one result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor behind the classification, further investigation is required to identify other factors. (author)
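
    A minimal sketch of the classification idea, assuming each benchmark submission is represented as a vector of calculated results (e.g., k-eff values across the benchmark cases). The 0.15 and 0.10 thresholds come from the abstract, but the greedy grouping below is an illustrative stand-in for the paper's exact procedure:

    ```python
    import numpy as np

    def classify_by_correlation(results, threshold):
        """Group result vectors whose pairwise Pearson correlation coefficients
        all stay at or above the threshold (greedy, order-dependent sketch)."""
        corr = np.corrcoef(np.asarray(results))  # rows = benchmark submissions
        groups, assigned = [], set()
        for i in range(len(results)):
            if i in assigned:
                continue
            group = [i]
            assigned.add(i)
            for j in range(i + 1, len(results)):
                if j not in assigned and all(corr[j, k] >= threshold for k in group):
                    group.append(j)
                    assigned.add(j)
            groups.append(group)
        return groups

    # e.g., with a hypothetical submission_matrix, Phase III-A used 0.15:
    # groups = classify_by_correlation(submission_matrix, threshold=0.15)
    ```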

  20. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

   1. Benchmarking of hospital information systems – a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information systems' and information management's costs, performance, and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters that have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  2. Analysis of selected fast critical assemblies

    International Nuclear Information System (INIS)

    Integral parameters for a series of fast reactor benchmark assemblies covering a wide range of energy spectra have been calculated with the reference Cadarache cross-section library. Multigroup cross sections for each assembly were generated using the self-shielding-factor approach and were used in a diffusion-cum-perturbation-theory code to obtain the parameters. The parameters considered in this study include k-eff, spectral indices (reaction-rate ratios), β-eff values, and central reactivity worths. Results of these calculations indicate that some of the important neutron cross-section data need re-evaluation. (author)

  3. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a government initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking. Two perceptions of benchmarking are presented: public benchmarking and best-practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight the effects, possibilities, and challenges that follow in the wake of using this kind of benchmarking.

  4. A benchmark study for glacial isostatic adjustment codes

    Science.gov (United States)

    Spada, G.; Barletta, V. R.; Klemann, V.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2011-04-01

    The study of glacial isostatic adjustment (GIA) is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to loading is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements (e.g. GRACE and GOCE) to the projections of future sea level trends in response to climate change. Modern modelling approaches to GIA are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated; a community benchmark data set would clearly be valuable. Following the example of the mantle convection community, here we present, for the first time, the results of a benchmark study of codes designed to model GIA. This has taken place within a collaboration facilitated through European Cooperation in Science and Technology (COST) Action ES0701. The approaches benchmarked are based on significantly different codes and different techniques. The test computations are based on models with spherical symmetry and Maxwell rheology and include inputs from different methods and solution techniques: viscoelastic normal modes, spectral-finite elements and finite elements. The tests involve the loading and tidal Love numbers and their relaxation spectra, the deformation and gravity variations driven by surface loads characterized by simple geometry and time history and the rotational fluctuations in response to glacial unloading. In spite of the significant differences in the numerical methods employed, the test computations show a satisfactory agreement between the results provided by the participants.

  5. Modeling of shielding benchmark for Na-24 γ-rays using scale code package and QAD-CGGP code

    International Nuclear Information System (INIS)

    The benchmark data were recently published for 1.37 and 2.75 MeV photons emitted by an Na-24 uniform disc source penetrating shields of six two-layer combinations, namely 12''Al+Fe, 12''Al+Pb, 6''Fe+Al, 6''Fe+Pb, 4''Pb+Al, and 4''Pb+Fe. These benchmark data fill a gap in the energy range of practical interest and provide useful reference values for evaluating computational methods. In order to evaluate the computational methods incorporated into the widely used shielding codes SCALE and QAD, we compared the benchmark data with the results of modeling the benchmark with these codes. Scalar flux density spectra in the benchmark energy-group structure were calculated for three two-layer combinations using the SAS4 functional module of the SCALE4 modular code package and the point-kernel gamma-ray shielding code system QAD-CGGP. The comparison showed that the QAD-CGGP and SAS4 results are in good agreement with each other, but the experimental benchmark data differ significantly from both of them. (author)
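
    QAD-CGGP belongs to the point-kernel family of codes. The textbook point-kernel estimate of the flux at a detector point sums attenuated line-of-sight contributions over source points (a standard formula, not one quoted from this record):

    \[
    \phi(P) \;=\; \sum_{i} \frac{S_i}{4\pi r_i^{2}}\, B(\mu t_i)\, e^{-\mu t_i}
    \]

    where S_i is the strength of source point i, r_i its distance to the detector, \mu t_i the optical thickness of shield material along the ray, and B the buildup factor accounting for scattered photons.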

   6. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the PSSA

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  7. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  8. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging: the chosen benchmark should have similar data collection and presentation methods, and differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region, but GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons.
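
    The indirect standardization mentioned above is commonly summarized as a standardized infection ratio (a standard epidemiological construct, not a formula quoted from this article):

    \[
    \mathrm{SIR} \;=\; \frac{O}{E} \;=\; \frac{\text{observed number of HAIs}}{\sum_i n_i\, p_i^{\text{benchmark}}}
    \]

    where the expected count E sums, over strata i, the local exposure n_i (e.g., device-days) times the benchmark population's stratum-specific infection rate p_i; an SIR above 1 indicates more infections than the benchmark predicts.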

  9. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  10. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article is to generalize the characteristics, objectives, and advantages of internal benchmarking, and to formulate the sequence of stages of an internal benchmarking technology focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis: business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standardized assessment of their performance and applying their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of the individual stages of benchmarking projects are formulated. The authors define benchmarking as a strategic orientation toward the best achievement, comparing performance and working methods against a standard. It covers research into processes, the organization of production and distribution, and management and marketing methods at reference objects in order to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of its theoretical bases and practical experience. Selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use, and develop an implementation methodology. The structure of internal benchmarking objectives includes: promoting research into, and the establishment of, minimum acceptable levels of efficiency for the processes and activities available at the enterprise; and identifying current problems and areas that need improvement without involving outside experience.

  11. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. Routines for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  12. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  13. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  14. Furnace assembly

    Science.gov (United States)

    Panayotou, Nicholas F.; Green, Donald R.; Price, Larry S.

    1985-01-01

    A method of and apparatus for heating test specimens to desired elevated temperatures for irradiation by a high energy neutron source. A furnace assembly is provided for heating two separate groups of specimens to substantially different, elevated, isothermal temperatures in a high vacuum environment while positioning the two specimen groups symmetrically at equivalent neutron irradiating positions.

  15. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [French Atomic Energy Commission (CEA), Saclay (France); Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Piot, Jerome [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Jacquet, Xavier [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  16. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Isbell, Kimberly McMahan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Gagnier, Emmanuel [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Authier, Nicolas [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Piot, Jerome [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Jacquet, Xavier [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Rousseau, Guillaume [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  17. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [ORNL; Isbell, Kimberly McMahan [ORNL; Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Reynolds, Kevin H. [Y-12 National Security Complex

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  18. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Reynolds, Kevin H. [Oak Ridge Y-12 Plant (Y-12), Oak Ridge, TN (United States)

    2015-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  19. Computational benchmark for calculation of silane and siloxane thermochemistry.

    Science.gov (United States)

    Cypryk, Marek; Gostyński, Bartłomiej

    2016-01-01

    Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained. PMID:26781663

  20. Benchmarking MILC code with OpenMP and MPI

    International Nuclear Information System (INIS)

    A trend in high-performance computers that is becoming increasingly popular is the use of symmetric multiprocessing (SMP) rather than the older massively parallel processing (MPP) paradigm. MPI codes that ran and scaled well on MPP machines can often be run on an SMP machine using the vendor's version of MPI. However, this approach may not make optimal use of the (expensive) SMP hardware. More significantly, there are machines like Blue Horizon, an IBM SP with 8-way SMP nodes at the San Diego Supercomputer Center, that can only support 4 MPI processes per node (with the current switch). On such a machine it is imperative to be able to use OpenMP parallelism on the node and MPI between nodes. We describe the challenges of converting the MILC MPI code to use a second level of OpenMP parallelism, and present benchmarks on IBM and Sun computers.
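
    To illustrate the two-level decomposition being described (message passing between nodes, shared-memory parallelism within a node), here is a minimal Python sketch using mpi4py plus an in-process thread pool as a stand-in for OpenMP threads. MILC itself is C code with OpenMP directives inside MPI processes; all sizes and thread counts below are illustrative:

    ```python
    # Launch with e.g.: mpirun -np 4 python sweep.py
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each MPI rank owns a slab of the global lattice (sizes are illustrative).
    local_sites = np.random.rand(1_000_000 // size)

    def sweep(chunk):
        # Stand-in for a per-site update; numpy releases the GIL here, so the
        # thread pool gives genuine in-node parallelism for this call.
        return np.sqrt(chunk) * 0.5

    threads = 2  # e.g., an 8-way SMP node limited to 4 MPI ranks -> 2 threads each
    with ThreadPoolExecutor(max_workers=threads) as pool:
        parts = np.array_split(local_sites, threads)
        local_sites = np.concatenate(list(pool.map(sweep, parts)))

    # Global reduction across ranks happens at the MPI level.
    total = comm.allreduce(local_sites.sum(), op=MPI.SUM)
    if rank == 0:
        print("global sum:", total)
    ```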

  1. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP$^2$Bench, a publicly available, language-specific SPARQL performance benchmark. SP$^2$Bench is set in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP$^2$Bench to existing engines and discuss ...
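
    For a flavor of the kind of DBLP-style SPARQL query such a benchmark exercises, here is a self-contained sketch using the rdflib Python library; the namespaces and data are illustrative stand-ins, not the benchmark's actual vocabulary or generated documents:

    ```python
    import rdflib
    from rdflib import Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    DC = Namespace("http://purl.org/dc/elements/1.1/")
    BENCH = Namespace("http://example.org/bench/")  # illustrative namespace

    # Build a tiny DBLP-like graph: one article with a creator and a title.
    g = rdflib.Graph()
    article = URIRef("http://example.org/articles/1")
    g.add((article, RDF.type, BENCH.Article))
    g.add((article, DC.creator, Literal("Paul Erdoes")))
    g.add((article, DC.title, Literal("On random graphs")))

    q = """
    PREFIX rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX dc:    <http://purl.org/dc/elements/1.1/>
    PREFIX bench: <http://example.org/bench/>
    SELECT ?title WHERE {
      ?doc rdf:type bench:Article ;
           dc:creator "Paul Erdoes" ;
           dc:title ?title .
    }
    """
    for row in g.query(q):
        print(row.title)  # -> "On random graphs"
    ```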

  2. Analysis of ANS LWR physics benchmark problems.

    Energy Technology Data Exchange (ETDEWEB)

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the K_eff and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the K_eff and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  3. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  4. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  5. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  6. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances of different sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers, including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, and interior point solvers, are benchmarked. The resulting performance profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...
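
    For orientation, the nested minimum compliance problem that dominates such benchmark libraries can be stated as follows (a textbook formulation under SIMP-style material interpolation with a density filter; the article additionally benchmarks SAND formulations, in which the state variables remain explicit):

    \[
    \begin{aligned}
    \min_{\mathbf{x}} \quad & c(\mathbf{x}) = \mathbf{f}^{T}\mathbf{u}(\mathbf{x}) \\
    \text{s.t.} \quad & \mathbf{K}(\mathbf{x})\,\mathbf{u}(\mathbf{x}) = \mathbf{f}, \qquad
    \sum_{e} v_e x_e \le f_V V_0, \qquad 0 < x_{\min} \le x_e \le 1,
    \end{aligned}
    \]

    where the element stiffness scales with the filtered density raised to a penalization power (typically p = 3), v_e are element volumes, and f_V is the prescribed volume fraction.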

  7. SCORPIUS algorithm benchmarks on the image understanding architecture machine

    Science.gov (United States)

    Bogdanowicz, Julius F.; Nash, J. Gregory; Shu, David B.

    1992-04-01

    Many Hughes tactical and strategic programs need high-performance image processing. For example, photo-interpretation applications can require up to four orders of magnitude speedup over conventional computer architectures. Therefore, parallel processing systems are needed to help close the processing gap. Vision applications can usually be decomposed into three levels of processing called high, intermediate, and low level vision. Each processing level typically requires different types of numeric/symbolic computation, processing task granularities, and communications bandwidths. No parallel processing system is commercially available that is optimized for the entire range of computations. To meet these processing challenges, the image understanding architecture (IUA) has been developed by Hughes in collaboration with the University of Massachusetts. The IUA is a heterogeneous, hierarchical, associative parallel processor that is organized in three levels corresponding to the vision problem. Its lowest level consists of a large content-addressable array parallel processor. This array of 'per pixel' bit-serial processors is used for fixed-point, low-level numeric, and symbolic computations. The middle level is an interface communications array processor (ICAP). ICAP is an array of digital signal processing chips from the TI TMS320Cx line, used for high-speed number crunching. The highest level is the symbolic processing array. It is an array of general-purpose microprocessors in which the artificial intelligence content of the image understanding software resides. A set of benchmarks from the DARPA/ORD-sponsored SCORPIUS program was developed using the IUA. The set of algorithms included low-level image processing as well as high-level matching algorithms. Benchmark performance on the second-generation IUA hardware is over four orders of magnitude faster than equivalent algorithms implemented on a DEC VAX 8650. The first-generation hardware is operational. Development

  8. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
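
    The characterization described under goal (1) is often summarized with a roofline-style bound: attainable throughput is limited either by the processor's compute peak or by arithmetic intensity times memory bandwidth. A small sketch, with all peak rates and per-kernel flop/byte counts as illustrative placeholders rather than WRF measurements:

    ```python
    # Roofline-style performance bound for kernel characterization.
    def attainable_gflops(arith_intensity, peak_gflops, peak_gbps):
        """Attainable rate = min(compute roof, intensity x bandwidth roof)."""
        return min(peak_gflops, arith_intensity * peak_gbps)

    kernels = {           # name: (flops per grid point, bytes moved per grid point)
        "advection":    (60.0, 160.0),
        "microphysics": (250.0, 120.0),
    }
    for name, (flops, bytes_moved) in kernels.items():
        ai = flops / bytes_moved  # computational (arithmetic) intensity, flop/byte
        bound = attainable_gflops(ai, peak_gflops=100.0, peak_gbps=50.0)
        print(f"{name}: intensity {ai:.2f} flop/B -> bound {bound:.1f} GFLOP/s")
    ```

    A bandwidth-bound kernel (low intensity) sees little benefit from more cores, which is why memory bandwidth pressure is listed alongside data parallelism as a key kernel property.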

  9. Benchmarking ontologies: bigger or better?

    Directory of Open Access Journals (Sweden)

    Lixia Yao

    Full Text Available A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
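
    As a toy illustration of a breadth-style metric (the paper's metrics are defined more carefully; this simple coverage ratio is only a plausible stand-in), one can measure the fraction of a corpus vocabulary that an ontology's terms cover:

    ```python
    # Hypothetical breadth-style coverage metric: fraction of corpus vocabulary
    # found in the ontology's term set (illustrative, not the paper's definition).
    def breadth(ontology_terms: set, corpus_tokens: list) -> float:
        vocab = {t.lower() for t in corpus_tokens}
        return len(vocab & {t.lower() for t in ontology_terms}) / len(vocab)

    ontology = {"fever", "cough", "pneumonia"}
    corpus = "patient presented with fever and persistent cough".split()
    print(f"breadth ~ {breadth(ontology, corpus):.2f}")  # 2 of 7 distinct tokens
    ```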

  10. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  11. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water-use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water-use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
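
    A band-rating scheme of the kind described reduces to a simple threshold lookup over per-capita consumption. The cut points below are hypothetical placeholders, not the thresholds proposed in the paper:

    ```python
    # Hypothetical band rating for domestic water use (litres/person/day).
    BANDS = [(80.0, "A"), (110.0, "B"), (140.0, "C"), (170.0, "D")]

    def water_band(litres_per_person_per_day: float) -> str:
        for upper_limit, band in BANDS:
            if litres_per_person_per_day <= upper_limit:
                return band
        return "E"  # worst band: above all thresholds

    print(water_band(95.0))   # -> "B"
    print(water_band(200.0))  # -> "E"
    ```

    Because any approach that reduces consumption (behavioural or technological, including RWH/GW offsets) lowers the measured figure, it automatically moves the household toward a higher band.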

  12. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  13. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...... efficiency estimates for each TSO, as well as dynamic results in terms of technological improvement rate and efficiency catch-up speed. In this paper, we provide the methodology for the benchmarking, using non-parametric DEA under weight restrictions, as well as an analysis of the static cost efficiency...

  14. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  15. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)
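
    For orientation, the tritium breeding ratio whose sensitivity is tested here is essentially a flux-weighted sum of tritium-production cross sections over energy groups, normalized to the neutron source. The sketch below uses made-up group values, not the ENDF/B-IV/V libraries named in the record:

        # Illustrative multigroup estimate: TBR ~ V * sum_g(phi_g * Sigma_T,g) / S.
        phi = [1.0e14, 5.0e13, 2.0e13]    # group fluxes (n/cm2/s), invented
        sigma_t = [0.02, 0.05, 0.30]      # macroscopic T-production cross sections (1/cm), invented
        volume = 1.0e2                    # breeding-zone volume (cm3), invented
        source = 1.0e15                   # fusion neutron source rate (n/s), invented

        tbr = volume * sum(p * s for p, s in zip(phi, sigma_t)) / source
        print(f"TBR ~ {tbr:.2f}")         # ~1.05 with these made-up numbers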

  16. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  17. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  18. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    5th April, 2016 – Ordinary General Assembly of the Staff Association! In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA). This year the OGA will be held on Tuesday, April 5th 2016 from 11:00 to 12:00 in BE Auditorium, Meyrin (6-2-024). During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted for approval to the members. This is the occasion to get a global view of the activities of the SA and its financial management, and an opportunity to express one’s opinion, including taking part in the votes. Other points are listed on the agenda, as proposed by the Staff Council. Who can vote? Only “ordinary” members (MPE) of the SA can vote. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them. Who can give his/her opinion? The Ordinary General Asse...

  19. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  20. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...... organizational relations, behaviors and actions. In closing it is briefly considered how to study the calculative practices of benchmarking....

  1. Experimental power density distribution benchmark in the TRIGA Mark II reactor

    Energy Technology Data Exchange (ETDEWEB)

    Snoj, L.; Stancar, Z.; Radulovic, V.; Podvratnik, M.; Zerovnik, G.; Trkov, A. [Josef Stefan Inst., Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Barbot, L.; Domergue, C.; Destouches, C. [CEA DEN, DER, Instrumentation Sensors and Dosimetry laboratory Cadarache, F-13108 Saint-Paul-Lez-Durance (France)

    2012-07-01

    In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Josef Stefan Inst. (JSI), a bilateral project was started as part of the agreement between the French Commissariat a l'energie atomique et aux energies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)
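
    The absolute power calibration rests on converting measured fission rates into thermal power. A toy version of that conversion (placeholder rates and volumes, not JSI TRIGA data) might look like:

        # P = sum_i(R_i * V_i) * E_f, with ~200 MeV recoverable energy per fission.
        E_F_JOULES = 200e6 * 1.602e-19    # energy per fission, in joules

        rates = [3.2e10, 2.8e10, 1.9e10]  # measured fission rate densities (1/cm3/s), invented
        volumes = [400.0, 400.0, 400.0]   # fuel volume each measurement represents (cm3), invented

        power_w = sum(r * v for r, v in zip(rates, volumes)) * E_F_JOULES
        print(f"core power ~ {power_w / 1e3:.2f} kW")   # ~1.01 kW with these numbers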

  2. Thermal-Hydraulic Analysis of OECD Benchmark Problem for PBMR 400 Using MARS-GCR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Jeong, Jae Jun; Lee, Won Jae [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The OECD benchmark problem for the PBMR 400 aims not only to test the existing methods for HTGRs but also to develop more accurate and efficient tools to analyse the neutronics and thermal-hydraulic behaviour for the design and safety evaluations of the PBMR. In addition, it includes defining appropriate benchmarks to verify and validate the new methods in computer codes. The benchmark procedure is divided into two parts: phase I, which includes the stand-alone steady state calculations (neutronics and thermal-hydraulics) and a coupled steady state calculation, and phase II, which includes various transient calculations. So far, standalone calculations for neutronics and thermal-hydraulics have been performed with given cross-section and power density data, respectively. This paper presents the standalone thermal-hydraulic calculation results of MARS-GCR with a given power density. Although a preliminary steady state calculation coupled with MASTER was also performed, those results will be released later.

  3. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes an in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real-world datasets.
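
    A sketch of the kind of property check such an effort involves, using networkx's Barabasi-Albert generator as a stand-in for the report's (unnamed) generator implementations:

        import networkx as nx

        # Generate a scale-free graph and summarize its degree distribution,
        # which would then be compared against a target real-world dataset.
        g = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
        hist = nx.degree_histogram(g)     # hist[k] = number of nodes with degree k

        tail = sum(hist[30:]) / g.number_of_nodes()   # share of high-degree "hub" nodes
        print(f"nodes={g.number_of_nodes()} max_degree={len(hist) - 1} tail={tail:.3%}")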

  4. Experimental power density distribution benchmark in the TRIGA Mark II reactor

    International Nuclear Information System (INIS)

    In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Josef Stefan Inst. (JSI), a bilateral project was started as part of the agreement between the French Commissariat a l'energie atomique et aux energies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)

  5. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

    KAUST Repository

    Heilbron, Fabian Caba

    2015-06-02

    In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.
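
    The quoted statistics can be sanity-checked with quick arithmetic (the per-class figure is an average, so the product is approximate):

        classes, videos_per_class, hours = 203, 137, 849
        total_videos = classes * videos_per_class       # ~27,800 untrimmed videos
        print(total_videos)                             # 27811
        print(f"{hours * 60 / total_videos:.1f} min/video on average")   # ~1.8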

  6. Calculation of Sodium Cooled Fast Reactor Concepts. Preliminary results of an OECD NEA benchmark calculation

    International Nuclear Information System (INIS)

    In this paper we present the results of our calculations for the OECD NEA benchmark on generation-IV advanced sodium-cooled fast reactor (SFR) concepts. The aim of this benchmark is to study the core design features, as well as the feedback and transient behaviour, of four SFR concepts. At the present stage, static global neutronic parameters, e.g. keff, effective delayed neutron fraction, Doppler constant, sodium void worth, control rod worth, power distribution, and burnup were calculated for both the beginning and the end of cycle. In the benchmark definition, the following core descriptions were specified: two large cores (3600 MW thermal power) with carbide and oxide fuel, and two medium cores (1000 MW thermal power) with metal and oxide fuel. The calculations were performed by using the ECCO module of the ERANOS code system at the subassembly level, and with the KIKO3DMG code at the core level. The former code produced the assembly-homogenized cross sections applying 1968-group collision probability calculations; the latter determined the core multiplication factor and the radial power distribution using a 3D nodal diffusion method in 9 energy groups. We examined the effects of increasing the number of energy groups to 17 in the core calculation. The reflector and shield assembly homogenization methodology was also tested: a “homogeneous region model” was compared with a “concentric cylindrical core” calculation. The breeding ratio was also determined for the beginning of cycle. (author)

  7. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    solutions to the problem have been proposed so far including, for instance, evolutionary techniques, swarm intelligence or ad hoc solutions. However, the large diversity of the solutions and the lack of a common benchmark, made any comparative analysis of the different solutions extremely difficult...

  8. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter shows considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  9. FinPar: A Parallel Financial Benchmark

    DEFF Research Database (Denmark)

    Andreetta, Christian; Begot, Vivien; Berthold, Jost;

    2016-01-01

    sensitive to the input dataset and therefore requires multiple code versions that are optimized differently, which also raises maintainability problems. This article presents three array-based applications from the financial domain that are suitable for gpgpu execution. Common benchmark-design practice has...

  10. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
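
    A compact sketch of the complete-information idea behind such a nonparametric benchmark: each simulated patient carries a latent tolerance that determines their outcome at every dose, and the benchmark selects the dose whose empirical toxicity rate is closest to the target. The toxicity curve and trial size below are hypothetical, and this simplified single-toxicity version omits the multiple-outcome extensions discussed in the article:

        import random

        def benchmark_selection(true_tox, n_patients, target, rng):
            u = [rng.random() for _ in range(n_patients)]   # latent tolerances
            # Complete information: every patient's outcome is known at every dose.
            rates = [sum(ui < p for ui in u) / n_patients for p in true_tox]
            return min(range(len(true_tox)), key=lambda d: abs(rates[d] - target))

        rng = random.Random(1)
        true_tox = [0.05, 0.12, 0.25, 0.40, 0.55]           # hypothetical dose-toxicity curve
        picks = [benchmark_selection(true_tox, 30, 0.25, rng) for _ in range(2000)]
        print("P(select correct dose):", picks.count(2) / len(picks))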

  11. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
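
    Two of the system-level metrics named here, air change rate and air-handling W/cfm, reduce to one-line formulas; the facility readings below are invented for illustration:

        def air_changes_per_hour(supply_cfm, room_volume_ft3):
            # ACH: cubic feet delivered per hour divided by room volume.
            return supply_cfm * 60.0 / room_volume_ft3

        def watts_per_cfm(fan_power_kw, supply_cfm):
            # Fan energy intensity per unit of airflow.
            return fan_power_kw * 1000.0 / supply_cfm

        cfm, volume_ft3, fan_kw = 120_000, 20_000, 90      # invented readings
        print(f"{air_changes_per_hour(cfm, volume_ft3):.0f} ACH, "
              f"{watts_per_cfm(fan_kw, cfm):.2f} W/cfm")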

  12. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  13. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As the result relevant parameters describing the performance and costs of radiation protection were acquired and compiled and subsequently depicted in figures in order to make these data comparable. (orig.)

  14. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  15. Operational benchmarking of Japanese and Danish hopsitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  16. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    Full Text Available The Pennsylvania State University (PSU under the sponsorship of the US Nuclear Regulatory Commission (NRC has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC benchmark based on the Nuclear Power Engineering Corporation (NUPEC pressurized water reactor (PWR subchannel and bundle tests (PSBTs. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA of OECD and the Japan Nuclear Energy Safety Organization (JNES, Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFDs codes. The benchmark was designed to systematically assess and compare the participants’ numerical models for prediction of detailed subchannel void distribution and department from nucleate boiling (DNB, under steady-state and transient conditions, to full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  17. Fusing Swarm Intelligence and Self-Assembly for Optimizing Echo State Networks

    Directory of Open Access Journals (Sweden)

    Charles E. Martin

    2015-01-01

    Full Text Available Optimizing a neural network’s topology is a difficult problem for at least two reasons: the topology space is discrete, and the quality of any given topology must be assessed by assigning many different sets of weights to its connections. These two characteristics tend to cause very “rough” objective functions. Here we demonstrate how self-assembly (SA) and particle swarm optimization (PSO) can be integrated to provide a novel and effective means of concurrently optimizing a neural network’s weights and topology. Combining SA and PSO addresses two key challenges. First, it creates a more integrated representation of neural network weights and topology so that we have just a single, continuous search domain that permits “smoother” objective functions. Second, it extends the traditional focus of self-assembly, from the growth of predefined target structures, to functional self-assembly, in which growth is driven by optimality criteria defined in terms of the performance of emerging structures on predefined computational problems. Our model incorporates a new way of viewing PSO that involves a population of growing, interacting networks, as opposed to particles. The effectiveness of our method for optimizing echo state network weights and topologies is demonstrated through its performance on a number of challenging benchmark problems.

  18. Fusing Swarm Intelligence and Self-Assembly for Optimizing Echo State Networks.

    Science.gov (United States)

    Martin, Charles E; Reggia, James A

    2015-01-01

    Optimizing a neural network's topology is a difficult problem for at least two reasons: the topology space is discrete, and the quality of any given topology must be assessed by assigning many different sets of weights to its connections. These two characteristics tend to cause very "rough" objective functions. Here we demonstrate how self-assembly (SA) and particle swarm optimization (PSO) can be integrated to provide a novel and effective means of concurrently optimizing a neural network's weights and topology. Combining SA and PSO addresses two key challenges. First, it creates a more integrated representation of neural network weights and topology so that we have just a single, continuous search domain that permits "smoother" objective functions. Second, it extends the traditional focus of self-assembly, from the growth of predefined target structures, to functional self-assembly, in which growth is driven by optimality criteria defined in terms of the performance of emerging structures on predefined computational problems. Our model incorporates a new way of viewing PSO that involves a population of growing, interacting networks, as opposed to particles. The effectiveness of our method for optimizing echo state network weights and topologies is demonstrated through its performance on a number of challenging benchmark problems. PMID:26346488
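
    A toy rendering of the central idea as described in both abstracts: weights and topology share one continuous vector (a sub-threshold weight means "no connection"), searched here by a stripped-down, global-best-only PSO. The objective is a placeholder, not an echo state network evaluation:

        import random

        DIM, SWARM, THRESH = 20, 15, 0.1

        def fitness(x):
            # Toy objective; a real run would train/test an echo state network here.
            active = [w for w in x if abs(w) > THRESH]     # sub-threshold = no connection
            return sum(w * w for w in active) + 0.01 * len(active)

        rng = random.Random(0)
        pos = [[rng.uniform(-1, 1) for _ in range(DIM)] for _ in range(SWARM)]
        vel = [[0.0] * DIM for _ in range(SWARM)]
        best = min(pos, key=fitness)[:]

        for _ in range(200):
            for i, x in enumerate(pos):
                for d in range(DIM):
                    # Inertia plus attraction toward the swarm's best position.
                    vel[i][d] = 0.7 * vel[i][d] + 1.4 * rng.random() * (best[d] - x[d])
                    x[d] += vel[i][d]
                if fitness(x) < fitness(best):
                    best = x[:]
        print("active connections in best network:", sum(abs(w) > THRESH for w in best))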

  19. Research and practice on teaching of computer organization and assembly language programming

    Institute of Scientific and Technical Information of China (English)

    陈建能

    2012-01-01

    This paper analyzes the causes of the problems that exist in the teaching of computer organization and assembly language programming. Drawing on the author's teaching experience, it proposes specific methods to solve the existing problems and improve the quality of the teaching.

  20. Reform in teaching the course Assembly Language in the college computer specialty

    Institute of Scientific and Technical Information of China (English)

    姚富光

    2012-01-01

    This paper analyses the teaching status of, and the problems existing in, the course Assembly Language in the college computer specialty, points out the necessity of teaching reform, discusses the reform of theoretical and practical teaching, and presents some concrete measures.

  1. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    Directory of Open Access Journals (Sweden)

    Hongzhang Shan

    2010-01-01

    Full Text Available Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
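
    Separating communication from computation time, as the Integrated Performance Monitoring measurements above do, can be sketched with mpi4py; the workload and iteration counts here are arbitrary stand-ins:

        # Run with an MPI launcher, e.g.: mpirun -n 4 python this_script.py
        from mpi4py import MPI
        import time

        comm = MPI.COMM_WORLD
        t_comp = t_comm = 0.0

        for _ in range(100):
            t0 = time.perf_counter()
            local = sum(i * i for i in range(50_000))       # stand-in "computation"
            t_comp += time.perf_counter() - t0

            t0 = time.perf_counter()
            total = comm.allreduce(local, op=MPI.SUM)       # "communication"
            t_comm += time.perf_counter() - t0

        if comm.rank == 0:
            print(f"compute {t_comp:.3f}s, communicate {t_comm:.3f}s")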

  2. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  3. A chemical EOR benchmark study of different reservoir simulators

    Science.gov (United States)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    chemical design for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of the field injection projects. The objective of this paper is not to compare the computational efficiency and solution algorithms; it only focuses on the process modeling comparison.

  4. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./ Russian Progress Report for Fiscal Year 1997, Volume 4, Part 8 - Neutron Poison Plates in Assemblies Containing Homogeneous Mixtures of Polystyrene-Moderated Plutonium and Uranium Oxides

    Energy Technology Data Exchange (ETDEWEB)

    Yavuz, M.

    1999-05-01

    In the 1970s at the Battelle Pacific Northwest Laboratory (PNL), a series of critical experiments using a remotely operated Split-Table Machine was performed with homogeneous mixtures of (Pu-U)O{sub 2}-polystyrene fuels in the form of square compacts having different heights. The experiments determined the critical geometric configurations of MOX fuel assemblies with and without neutron poison plates. With respect to PuO{sub 2} content and moderation [H/(Pu+U)atomic] ratio (MR), two different homogeneous (Pu-U)O{sub 2}-polystyrene mixtures were considered: Mixture (1) 14.62 wt% PuO{sub 2} with 30.6 MR, and Mixture (2) 30.3 wt% PuO{sub 2} with 2.8 MR. In all mixtures, the uranium was depleted to about 0.151 wt% U{sup 235}. Assemblies contained copper, copper-cadmium or aluminum neutron poison plates having thicknesses up to {approximately}2.5 cm. This evaluation contains 22 experiments for Mixture 1, and 10 for Mixture 2 compacts. For Mixture 1, there are 10 configurations with copper plates, 6 with aluminum, and 5 with copper-cadmium. One experiment contained no poison plate. For Mixture 2 compacts, there are 3 configurations with copper, 3 with aluminum, and 3 with copper-cadmium poison plates. One experiment contained no poison plate.

  5. An IBM 370 assembly language program verifier

    Science.gov (United States)

    Maurer, W. D.

    1977-01-01

    The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.

  6. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with increasing complexity of real applications. Technologies have been developed aiming at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage

  7. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  8. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  9. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose

  10. Assembling consumption

    DEFF Research Database (Denmark)

    Assembling Consumption marks a definitive step in the institutionalisation of qualitative business research. By gathering leading scholars and educators who study markets, marketing and consumption through the lenses of philosophy, sociology and anthropology, this book clarifies and applies...... the investigative tools offered by assemblage theory, actor-network theory and non-representational theory. Clear theoretical explanation and methodological innovation, alongside empirical applications of these emerging frameworks will offer readers new and refreshing perspectives on consumer culture and market...... societies. It is essential reading for both seasoned scholars and advanced students of markets, economies and social forms of consumption....

  11. Heater assembly

    International Nuclear Information System (INIS)

    An electrical resistance heater, installed in the H1 borehole, is used to thermally perturb the rock mass through a controlled heating and cooling cycle. Heater power levels are controlled by a Variac power transformer and are measured by wattmeters. Temperatures are measured by thermocouples on the borehole wall and on the heater assembly. Power and temperature values are recorded by the DAS described in Chapter 12. The heater assembly consists of a 3.55-m (11.6-ft) long by 20.3-cm (8-in.) O.D., Type 304 stainless steel pipe, containing a tubular hairpin heating element. The element has a heated length of 3 m (9.84 ft). The power rating of the element is 10 kW; however, we plan to operate the unit at a maximum power of only 3 kW. The heater is positioned with its midpoint directly below the axis of the P2 borehole, as shown in the borehole configuration diagram. This heater midpoint position corresponds to a distance of approximately 8.5 m (27.9 ft) from the H1 borehole collar. A schematic of the heater assembly in the borehole is shown. The distance from the borehole collar to the closest point on the assembly (the front end) is 6.5 m (21.3 ft). A high-temperature inflatable packer, used to seal the borehole for moisture collection, is positioned 50 cm (19.7 in.) ahead of the heater front end. The heater is supported and centralized within the borehole by two skids, fabricated from 25-mm (1-in.) O.D. stainless steel pipe. Thermocouples are installed at a number of locations in the H1 borehole. Four thermocouples that are attached to the heater skin monitor temperatures on the outer surface of the can, while three thermocouples that are held in place by rock sections monitor borehole wall temperatures beneath the heater. Temperatures are also monitored at the heater terminal and on the packer hardware

  12. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  13. Benchmarking evaluation for criticality analysis of high density spent fuel storage rack

    Energy Technology Data Exchange (ETDEWEB)

    Yun, J. H.; Jeon, J. K.; Ko, D. J.; Ha, J. H.; Song, M. J.; Kim, B. T.; Jo, H. S. [Korea Nuclear Environment Technology Institute, Taejon (Korea, Republic of)

    2000-05-01

    In order to evaluate the criticality of the spent fuel storage pool in Ulchin Unit 2 under normal operation, a series of benchmark calculations was carried out using the CSAS module of SCALE 4.4 along with the CASMO-3 computer code. Through the benchmark calculations for the criticality computer codes, the bias and uncertainty of the computer codes were evaluated. The bias for CSAS (KENO-V.a) of the SCALE system was 0.00656, and its uncertainty was calculated as 0.00731 at a 95% probability and 95% confidence level. Criticality evaluation results for the spent fuel storage pool of Ulchin Unit 2 using the SCALE system showed a very similar trend compared with the CASMO-3 results.
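
    A worked use of the quoted bias and 95/95 uncertainty: a calculated k-eff is typically penalized by both before comparison with an administrative limit. The 0.95 limit and the calculated value below are assumptions for illustration, not values from the record:

        bias, bias_unc = 0.00656, 0.00731   # from the benchmark evaluation above
        k_calc = 0.92800                    # hypothetical rack calculation result
        k_adjusted = k_calc + bias + bias_unc
        print(f"adjusted k-eff = {k_adjusted:.5f}; meets 0.95 limit: {k_adjusted < 0.95}")
        # adjusted k-eff = 0.94187; meets 0.95 limit: True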

  14. Pre Managed Earnings Benchmarks and Earnings Management of Australian Firms

    Directory of Open Access Journals (Sweden)

    Subhrendu Rath

    2012-03-01

    Full Text Available This study investigates benchmark beating behaviour and the circumstances under which managers inflate earnings to beat earnings benchmarks. We show that two benchmarks, positive earnings and positive earnings change, are associated with earnings manipulation. Using a sample of Australian firms from 2000 to 2006, we find that when the underlying earnings are negative or below the prior year’s earnings, firms are more likely to use discretionary accruals to inflate earnings to beat benchmarks.

  15. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...
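
    The overall benchmark described, a weighted average across aspects, is computationally simple; the weights and scores below are placeholders, not the paper's survey-derived values:

        weights = {"economic": 0.40, "social": 0.35, "environmental": 0.25}
        scores = {"economic": 7.2, "social": 6.1, "environmental": 8.0}   # invented

        overall = sum(weights[a] * scores[a] for a in weights)
        print(f"overall CSR benchmark: {overall:.2f}")    # ~7.0 with these numbers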

  16. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the of...

  17. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting, along with models for testing the performance of Modelica compilers on large systems of equations, are given. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale with an increasing number of equations.
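
    In the spirit of the Python scripting the authors describe, a driver that times a compiler on generated models of growing size might look like the following; the omc command-line usage and the model file naming are assumptions, not the suite's actual scripts:

        import subprocess, time

        def time_compile(model_file):
            start = time.perf_counter()
            # "omc" (OpenModelica) is one possible compiler front end.
            subprocess.run(["omc", model_file], check=True, capture_output=True)
            return time.perf_counter() - start

        for n in (100, 1_000, 10_000):             # target numbers of equations
            model = f"ScalableModel_{n}.mo"        # hypothetical generated model file
            print(n, f"{time_compile(model):.2f} s")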

  18. General Assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11 a.m., Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (article IV.2.1). Draft agenda: 1. Adoption of the agenda. 2. Approval of the minutes of the Ordinary General Assembly of 22 May 2014. 3. Presentation and approval of the 2014 activity report. 4. Presentation and approval of the 2014 financial report. 5. Presentation and approval of the auditors' report for 2014. 6. 2015 programme. 7. Presentation and approval of the draft 2015 budget and the membership fee for 2015. 8. No amendments to the Statutes of the Staff Association proposed. 9. Election of the members of the Electoral Commission...

  19. General assembly

    CERN Multimedia

    Staff Association

    2015-01-01

    Tuesday 5 May at 11 a.m., Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 22 May 2014. Presentation and approval of the 2014 activity report. Presentation and approval of the 2014 financial report. Presentation and approval of the auditors' report for 2014. 2015 programme. Presentation and approval of the draft 2015 budget and the membership fee for 2015. No amendments to the Statutes of the Staff Association proposed. Election of the members of the Electoral Commission. ...

  20. General Assembly

    CERN Multimedia

    Staff Association

    2016-01-01

    Tuesday 5 April at 11 a.m., BE Auditorium Meyrin (6-2-024). In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is organized once a year (article IV.2.1). Draft agenda: Adoption of the agenda. Approval of the minutes of the Ordinary General Assembly of 5 May 2015. Presentation and approval of the 2015 activity report. Presentation and approval of the 2015 financial report. Presentation and approval of the auditors' report for 2015. 2016 work programme. Presentation and approval of the draft 2016 budget. Approval of the membership fee for 2017. Proposed amendments to the Statutes of the Staff Association. Election of the members of the Electoral Commission...