WorldWideScience

Sample records for assembly computational benchmark

  1. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  2. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  4. A simplified approach to WWER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    Muehlbauer, P.

    2010-01-01

    The WWER-440 fuel assembly head benchmark was simulated with the FLUENT 12 code as a first step in validating the code for nuclear reactor safety analyses. Results of the benchmark, together with a comparison of the results provided by other participants and the results of measurement, will be presented in another paper by the benchmark organisers. This presentation is therefore focused on our approach to the simulation, as illustrated on the case 323-34, which represents a peripheral assembly with five neighbours. All steps of the simulation and some lessons learned are described. The geometry of the computational region, supplied as a STEP file by the organizers of the benchmark, was first separated into two parts (the inlet part with the spacer grid, and the rest of the assembly head) in order to keep the size of the computational mesh manageable with regard to the hardware available (an HP Z800 workstation with an Intel Xeon four-core CPU at 3.2 GHz and 32 GB of RAM), and then further modified at places where the shape of the geometry would probably lead to highly distorted cells. Both parts of the geometry were connected via a boundary profile file generated at a cross-section where the effect of the spacer grid is still felt but the effect of the outflow boundary condition used in the computation of the inlet part of the geometry is negligible. The computation proceeded in several steps: it started with the basic mesh, the standard k-ε turbulence model with standard wall functions, and first-order upwind numerical schemes; after convergence (scaled residuals lower than 10⁻³) and local adaptation of the near-wall meshes where needed, the realizable k-ε turbulence model was used with second-order upwind numerical schemes for the momentum and energy equations. During the iterations, the area-averaged temperature at the thermocouples and the area-averaged outlet temperature, which are the main figures of merit of the benchmark, were also monitored. In this 'blind' phase of the benchmark, the effect of spacers was neglected. After results of measurements are available, standard validation
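
    The figures of merit mentioned above are area-weighted averages over monitored surfaces (thermocouple planes, the assembly outlet). As an illustration only, with hypothetical face data and a function name of our own choosing rather than anything taken from the benchmark specification, a minimal Python sketch of that averaging is:

        def area_weighted_average(temperatures, areas):
            # T_avg = sum_i(T_i * A_i) / sum_i(A_i) over the faces of the
            # monitored surface (e.g. a thermocouple plane or the outlet).
            weighted = sum(t * a for t, a in zip(temperatures, areas))
            return weighted / sum(areas)

        # Hypothetical face temperatures (K) and face areas (m^2):
        print(area_weighted_average([540.2, 541.0, 539.8], [1.2e-4, 1.1e-4, 1.3e-4]))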

  5. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  6. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement between the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  7. Verification of FA2D Prediction Capability Using Fuel Assembly Benchmark

    International Nuclear Information System (INIS)

    Jecmenica, R.; Pevec, D.; Grgic, D.; Konjarek, D.

    2008-01-01

    FA2D is a 2D transport collision probability code developed at the Faculty of Electrical Engineering and Computing, University of Zagreb. It is used for the calculation of cross-section data at the fuel assembly level. The main objective of its development was the capability to generate cross-section data to be used for fuel management and safety analyses of PWR reactors. Until now, formal verification of the code's prediction capability had not been performed at the fuel assembly level, but results of fuel management calculations obtained using FA2D-generated cross sections for NPP Krsko and the IRIS reactor have been compared against Westinghouse calculations. The cross-section data were used within NRC's PARCS code and satisfactory preliminary results were obtained. This paper presents the results of calculations performed for the Nuclear Fuel Industries, Ltd. benchmark using FA2D and the SCALE5 TRITON calculation sequence (based on the discrete-ordinates code NEWT). Nuclear Fuel Industries, Ltd., Japan, released the LWR Next Generation Fuels Benchmark with the aim of verifying prediction capability in nuclear design for extended burnup regions. We performed calculations for two different benchmark problem geometries - a UO2 pin cell and a UO2 PWR fuel assembly. The results obtained with the two 2D spectral codes are presented for the burnup dependence of the infinite multiplication factor, the isotopic concentrations of important materials, and the local peaking factor vs. burnup (in the case of the fuel assembly calculation). (author)

  8. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 x 17 pins, "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  9. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  10. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  11. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  12. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  13. Computer simulation of variform fuel assemblies using Dragon code

    International Nuclear Information System (INIS)

    Ju Haitao; Wu Hongchun; Yao Dong

    2005-01-01

    DRAGON is a cell code developed for the CANDU reactor by the École Polytechnique de Montréal, Canada. Although DRAGON is mainly used to simulate the CANDU super-cell fuel assembly, it has the ability to simulate other fuel assembly geometries. However, apart from the CANDU reactor, only the NEACRP benchmark problem of the BWR lattice cell had been analyzed until now. We also need to develop the code to simulate variform fuel assemblies, especially for the design of advanced reactors. We validated that the cell code DRAGON is useful for simulating various kinds of fuel assembly by analyzing the rod-shaped fuel assembly of the PWR and the MTR plate-shaped fuel assembly. Some other kinds of geometry were also computed. Computational results show that DRAGON is able to analyze variform fuel assembly problems with high precision. (authors)

  14. GABenchToB: a genome assembly benchmark tuned on bacteria and benchtop sequencers.

    Directory of Open Access Journals (Sweden)

    Sebastian Jünemann

    De novo genome assembly is the process of reconstructing a complete genomic sequence from countless small sequencing reads. Due to the complexity of this task, numerous genome assemblers have been developed to cope with different requirements and the different kinds of data provided by sequencers within the fast-evolving field of next-generation sequencing technologies. In particular, the recently introduced generation of benchtop sequencers, like Illumina's MiSeq and Ion Torrent's Personal Genome Machine (PGM), popularized the easy, fast, and cheap sequencing of bacterial organisms to a broad range of academic and clinical institutions. With a strong pragmatic focus, we here give a novel insight into the line of assembly evaluation surveys as we benchmark popular de novo genome assemblers based on bacterial data generated by benchtop sequencers. Therefore, single-library assemblies were generated and compared to each other by metrics describing assembly contiguity and accuracy, and also by practice-oriented criteria such as computing time. In addition, we extensively analyzed the effect of the depth of coverage on the genome assemblies within reasonable ranges, and the k-mer optimization problem of de Bruijn graph assemblers. Our results show that, although both MiSeq and PGM allow for good genome assemblies, they require different approaches. They not only pair with different assembler types, but also affect assemblies differently regarding the depth of coverage, where oversampling can become problematic. Assemblies vary greatly with respect to contiguity and accuracy, but also in their demands on computing power. Consequently, no assembler can be rated best for all preconditions. Instead, the given kind of data, the demands on assembly quality, and the available computing infrastructure determine which assembler suits best. The data sets, scripts and all additional information needed to replicate our results are freely
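
    The contiguity metrics referred to above are typically summarized by statistics such as N50 (N50 is not named in the abstract; it is used here only as an illustration of a contiguity measure). A minimal Python sketch:

        def n50(contig_lengths):
            # N50: the largest length L such that contigs of length >= L
            # together cover at least half of the total assembly length.
            total = sum(contig_lengths)
            covered = 0
            for length in sorted(contig_lengths, reverse=True):
                covered += length
                if 2 * covered >= total:
                    return length
            return 0

        print(n50([100, 400, 250, 50, 200]))  # hypothetical contig lengths -> 250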

  15. Neutronics benchmark of a MOX assembly with near-weapons-grade plutonium

    International Nuclear Information System (INIS)

    Difilippo, F.C.; Fisher, S.E.

    1998-01-01

    One of the proposed ways to dispose of surplus weapons-grade plutonium (Pu) is to irradiate the high-fissile material in light-water reactors in order to reduce the Pu enrichment to the level of spent fuels from commercial reactors. Considerable experience has been accumulated about the behavior of mixed-oxide (MOX) uranium and plutonium fuels for plutonium recycling in commercial reactors, but the experience is related to Pu enrichments typical of spent fuels, well below the values of weapons-grade plutonium. Important decisions related to the kind of reactors to be used for the disposition of the plutonium are going to be based on calculations, so the validation of computational algorithms related to all aspects of the fuel cycle (power distributions, isotopics as a function of burnup, etc.) for weapons-grade isotopics is very important. Analysis of public domain data reveals that the cycle-2 irradiation in the Quad Cities boiling-water reactor (BWR) is the most recent US destructive examination. This effort involved the irradiation of five MOX assemblies using 80 and 90% fissile plutonium. These benchmark data were gathered by General Electric under the sponsorship of the Electric Power Research Institute. It is emphasized, however, that global parameters are not the focus of this benchmark, since the five bundles containing MOX fuels did not significantly affect the overall core performance. However, since the primary objective of this work is to compare against measured post-irradiation assembly data, the term benchmark is applied here. One important reason for performing the benchmark on the Quad Cities irradiation is that the fissile blends (up to 90%) are higher than reactor-grade and quite close to weapons-grade isotopics.

  16. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of computer codes for radiation transport modeling, and the building of accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  17. Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder

    Directory of Open Access Journals (Sweden)

    Xiaobo Liu

    2017-01-01

    Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using three different-diameter HEU metal annuli (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, which are 0.00057, 0.00058 and 0.00057 respectively, and the biases to the benchmark models, which are −0.00286, −0.00242 and −0.00168 respectively, were determined, and the experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmarking results were accepted for inclusion in the ICSBEP Handbook.

  18. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.

  19. Benchmarking Computational Fluid Dynamics for Application to PWR Fuel

    International Nuclear Information System (INIS)

    Smith, L.D. III; Conner, M.E.; Liu, B.; Dzodzo, B.; Paramonov, D.V.; Beasley, D.E.; Langford, H.M.; Holloway, M.V.

    2002-01-01

    The present study demonstrates a process used to develop confidence in Computational Fluid Dynamics (CFD) as a tool to investigate flow and temperature distributions in a PWR fuel bundle. The velocity and temperature fields produced by a mixing spacer grid of a PWR fuel assembly are quite complex. Before using CFD to evaluate these flow fields, a rigorous benchmarking effort should be performed to ensure that reasonable results are obtained. Westinghouse has developed a method to quantitatively benchmark CFD tools against data at conditions representative of the PWR. Several measurements in a 5 x 5 rod bundle were performed. Lateral flow-field testing employed visualization techniques and Particle Image Velocimetry (PIV). Heat transfer testing involved measurements of the single-phase heat transfer coefficient downstream of the spacer grid. These test results were used to compare with CFD predictions. The parameters optimized in the CFD models on the basis of this comparison with data include the computational mesh, turbulence model, and boundary conditions. As an outcome of this effort, a methodology was developed for CFD modeling that provides confidence in the numerical results. (authors)

  20. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  1. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    International Nuclear Information System (INIS)

    Gruppelaar, H.; Klippel, H.T.; Kloosterman, J.L.; Hoogenboom, J.E.; Bruggink, J.C.

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  2. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
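
    For illustration of how such relative figures can be read (the exact statistic used in the paper is not reproduced here, and the run times and costs below are hypothetical), a short Python sketch:

        # Hypothetical wall-clock times (hours) for one assembly run:
        emr_hours, gce_hours = 10.0, 6.5
        # Relative difference of EMR with respect to GCE, in percent
        # (one plausible reading of "differed by X%", stated as an assumption):
        time_diff_pct = (emr_hours - gce_hours) / gce_hours * 100

        # Hypothetical costs (USD) for the same run:
        emr_cost, gce_cost = 25.0, 10.0
        cost_more_expensive_pct = (emr_cost - gce_cost) / gce_cost * 100

        print(f"EMR slower by {time_diff_pct:.1f}%, more expensive by {cost_more_expensive_pct:.1f}%")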

  3. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  4. Calculation of Single Cell and Fuel Assembly IRIS Benchmarks Using WIMSD5B and GNOMER Codes

    International Nuclear Information System (INIS)

    Pevec, D.; Grgic, D.; Jecmenica, R.

    2002-01-01

    IRIS reactor (an acronym for International Reactor Innovative and Secure) is a modular, integral, light water cooled, small to medium power (100-335 MWe/module) reactor, which addresses the requirements defined by the United States Department of Energy for Generation IV nuclear energy systems, i.e., proliferation resistance, enhanced safety, improved economics, and waste reduction. An international consortium led by Westinghouse/BNFL was created for the development of the IRIS reactor; it includes universities, institutes, commercial companies, and utilities. The Faculty of Electrical Engineering and Computing, University of Zagreb, joined the consortium in 2001 with the aim of taking part in IRIS neutronics design and safety analyses of IRIS transients. A set of neutronic benchmarks for the IRIS reactor was defined with the objective of comparing the results of all participants under exactly the same assumptions. In this paper a calculation of Benchmark 44 for the IRIS reactor is described. Benchmark 44 is defined as a core depletion benchmark problem for specified IRIS reactor operating conditions (e.g., temperatures, moderator density) without feedback. Enriched boron, inhomogeneously distributed in the axial direction, is used as an integral fuel burnable absorber (IFBA). The aim of this benchmark was to enable a more direct comparison of results of different code systems. Calculations of Benchmark 44 were performed using the modified CORD-2 code package. The CORD-2 code package consists of the WIMSD and GNOMER codes. WIMSD is a well-known lattice spectrum calculation code. GNOMER solves the neutron diffusion equation in three-dimensional Cartesian geometry by the Green's function nodal method. The following parameters were obtained in the Benchmark 44 analysis: effective multiplication factor as a function of burnup, nuclear peaking factor as a function of burnup, axial offset as a function of burnup, core-average axial power profile, core radial power profile, axial power profile for selected

  5. The new deterministic 3-D radiation transport code Multitrans: C5G7 MOX fuel assembly benchmark

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2003-01-01

    The novel deterministic three-dimensional radiation transport code MultiTrans is based on a combination of the advanced tree multigrid technique and the simplified P3 (SP3) radiation transport approximation. In the tree multigrid technique, automatic mesh refinement is performed on material surfaces. The tree multigrid is generated directly from stereo-lithography (STL) files exported by computer-aided design (CAD) systems, thus allowing an easy interface for construction and upgrading of the geometry. The deterministic MultiTrans code allows fast solution of complicated three-dimensional transport problems in detail, offering a new tool for nuclear applications in reactor physics. In order to determine the feasibility of a new code, computational benchmarks need to be carried out. In this work, the MultiTrans code is tested on a seven-group three-dimensional MOX fuel assembly transport benchmark without spatial homogenization (NEA C5G7 MOX). (author)

  6. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  7. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  8. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
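
    The linear-programming benchmarking model is not spelled out in the abstract. As a purely illustrative assumption, a plain-text (non-MPC) sketch of a data-envelopment-analysis-style efficiency score, one common LP formulation for this kind of relative benchmarking, could look as follows in Python; the data layout and all names are our own, and the actual model and SPDZ-based secure evaluation are not reproduced here:

        import numpy as np
        from scipy.optimize import linprog

        def dea_efficiency(inputs, outputs, unit):
            # Input-oriented CCR efficiency of one unit against its peers.
            # inputs: (n_units, n_inputs), outputs: (n_units, n_outputs).
            n_units = inputs.shape[0]
            c = np.zeros(1 + n_units)
            c[0] = 1.0                                   # minimize theta
            a_ub, b_ub = [], []
            for i in range(inputs.shape[1]):             # peer inputs <= theta * own inputs
                a_ub.append(np.concatenate(([-inputs[unit, i]], inputs[:, i])))
                b_ub.append(0.0)
            for r in range(outputs.shape[1]):            # peer outputs >= own outputs
                a_ub.append(np.concatenate(([0.0], -outputs[:, r])))
                b_ub.append(-outputs[unit, r])
            res = linprog(c, A_ub=np.array(a_ub), b_ub=np.array(b_ub),
                          bounds=[(0, None)] * (1 + n_units))
            return res.x[0]                              # efficiency score in (0, 1]

        # Hypothetical data: 3 customers, 2 inputs (debt, costs), 1 output (earnings)
        x = np.array([[4.0, 2.0], [3.0, 3.0], [5.0, 1.5]])
        y = np.array([[1.0], [1.2], [0.9]])
        print([round(dea_efficiency(x, y, j), 3) for j in range(3)])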

  9. Inelastic finite element analysis of a pipe-elbow assembly (benchmark problem 2)

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, H P [Internationale Atomreaktorbau GmbH (INTERATOM) Bergisch Gladbach (Germany); Prij, J [Netherlands Energy Research Foundation (ECN) Petten (Netherlands)

    1979-06-01

    In the scope of the international benchmark problem effort on piping systems, benchmark problem 2, consisting of a pipe-elbow assembly subjected to a time-dependent in-plane bending moment, was analysed using the finite element program MARC. Numerical results are presented and a comparison with experimental results is made. It is concluded that the main reason for the deviation between the calculated and measured values is that creep-plasticity interaction is not taken into account in the analysis. (author)

  10. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  11. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  12. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  13. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    2000-01-01

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)
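
    As background, a common flux-separability ansatz for pin power reconstruction (given here for orientation only; the benchmark compares several production-code techniques and does not prescribe this particular form) factorizes the heterogeneous pin power into the smooth intra-assembly distribution recovered from the coarse-mesh solution and a precomputed single-assembly form function:

        p_i \;\approx\; \phi^{\mathrm{hom}}(x_i, y_i)\, f_i^{\mathrm{het}},

    where \phi^{\mathrm{hom}} is the reconstructed homogeneous (nodal or diffusion) flux at pin position i and f_i^{\mathrm{het}} is the heterogeneous form factor taken from the lattice calculation.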

  14. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers, and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals in a correct way, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes, and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of Paks NPP Unit 3 are investigated. One of them has a symmetrical pin power profile and the other an inclined profile. (Authors)

  15. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    International Nuclear Information System (INIS)

    Wagner, J.C.

    2001-01-01

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses
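
    In the notation of the report, the additional reactivity margin being estimated can be written (a restatement of the definition above, not an additional result) as

        \Delta k_{\mathrm{margin}}(E, B, T) \;=\; k_{\mathrm{eff}}^{\text{major actinides only}}(E, B, T) \;-\; k_{\mathrm{eff}}^{\text{major + minor actinides + fission products}}(E, B, T),

    evaluated in the 32-assembly cask model as a function of initial enrichment E, burnup B and cooling time T.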

  16. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  17. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, which is a conventional benchmark using processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report is intended to give the reason for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification in the case of a molecular dynamics code. As a result, the main causes suppressing the parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
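
    The communication behaviour described above can be summarized with a simple latency-bandwidth model (an illustrative model, not the Level 2 specification itself):

        T_{\mathrm{comm}} \;\approx\; t_{\mathrm{startup}}(P, N) \;+\; \frac{M}{B_{\max}},

    where M is the communicated data volume, B_max the maximum bandwidth between nodes, P the number of processors and N the number of particles; the report's finding is that the start-up term t_startup grows with both P and N.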

  18. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    Energy Technology Data Exchange (ETDEWEB)

    Horelik, N.; Herman, B.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
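
    For reference, the deviations quoted in pcm (per cent mille) above follow one common convention

        \Delta\,[\mathrm{pcm}] \;=\; (k_{\mathrm{calc}} - k_{\mathrm{meas}}) \times 10^{5},

    so an average deviation of approximately 100 pcm corresponds to about 0.001 in k-effective (for k near unity this is essentially the same as the reactivity difference).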

  19. OECD/NEA burnup credit criticality benchmarks phase IIIB: Burnup calculations of BWR fuel assemblies for storage and transport

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated under a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)

  20. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated under a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)

  1. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of the Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of the current computer code and data library combinations for the neutron multiplication factor (keff) of a layer of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of keff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and axial burnup profile, and inclusion of an axial profile of the void fraction or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the keff values calculated by the participants from the mean value is almost within the band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)

  2. Effects of existing evaluated nuclear data files on neutronics characteristics of the BFS-62-3A critical assembly benchmark model

    International Nuclear Information System (INIS)

    Semenov, Mikhail

    2002-11-01

    This report continues the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of the Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specifically for this purpose, and the experimental values were reduced to it. The benchmark characteristics for this assembly are 1) criticality; 2) central fission rate ratios (spectral indices); and 3) fission rate distributions in the stainless steel reflector. The effects of the nuclear data libraries have been studied by comparing results calculated with the available modern data libraries - ENDF/B-V, ENDF/B-VI, ENDF/B-VI-PT, JENDL-3.2 and ABBN-93. All results were computed by the Monte Carlo method with continuous-energy cross sections. The cross sections of the major isotopes were also checked against a wide collection of criticality benchmarks. It was shown that ENDF/B-V data underestimate the criticality of fast reactor systems by up to 2% Δk. For the other libraries, the spread in calculated criticality for BFS-62-3A is around 0.6% Δk. However, taking into account the results obtained for other fast reactor benchmarks (including steel-reflected ones), it may be concluded that the difference in calculated criticality can reach 1% Δk. This value is in good agreement with the cross-section uncertainty evaluated for the BN-600 hybrid core (±0.6% Δk). This work is related to the JNC-IPPE Collaboration on Experimental Investigation of Excess Weapons Grade Pu Disposition in BN-600 Reactor Using BFS-2 Facility. (author)

  3. OECD/NEA Sandia Fuel Project phase I: Benchmark of the ignition testing

    Energy Technology Data Exchange (ETDEWEB)

    Adorni, Martina, E-mail: martina_adorni@hotmail.it [UNIPI (Italy); Herranz, Luis E. [CIEMAT (Spain); Hollands, Thorsten [GRS (Germany); Ahn, Kwang-II [KAERI (Korea, Republic of); Bals, Christine [GRS (Germany); D' Auria, Francesco [UNIPI (Italy); Horvath, Gabor L. [NUBIKI (Hungary); Jaeckel, Bernd S. [PSI (Switzerland); Kim, Han-Chul; Lee, Jung-Jae [KINS (Korea, Republic of); Ogino, Masao [JNES (Japan); Techy, Zsolt [NUBIKI (Hungary); Velazquez-Lozad, Alexander; Zigh, Abdelghani [USNRC (United States); Rehacek, Radomir [OECD/NEA (France)

    2016-10-15

    Highlights: • A unique PWR spent fuel pool experimental project is analytically investigated. • Predictability of fuel clad ignition in case of a complete loss of coolant in SFPs is assessed. • Computer codes reasonably estimate peak cladding temperature and time of ignition. - Abstract: The OECD/NEA Sandia Fuel Project provided unique thermal-hydraulic experimental data associated with Spent Fuel Pool (SFP) complete drain down. The study conducted at Sandia National Laboratories (SNL) was successfully completed (July 2009 to February 2013). The accident conditions of interest for the SFP were simulated in a full scale prototypic fashion (electrically heated, prototypic assemblies in a prototypic SFP rack) so that the experimental results closely represent actual fuel assembly responses. A major impetus for this work was to facilitate severe accident code validation and to reduce modeling uncertainties within the codes. Phase I focused on axial heating and burn propagation in a single PWR 17 × 17 assembly (i.e. “hot neighbors” configuration). Phase II addressed axial and radial heating and zirconium fire propagation including effects of fuel rod ballooning in a 1 × 4 assembly configuration (i.e. single, hot center assembly and four, “cooler neighbors”). This paper summarizes the comparative analysis regarding the final destructive ignition test of the phase I of the project. The objective of the benchmark is to evaluate and compare the predictive capabilities of computer codes concerning the ignition testing of PWR fuel assemblies. Nine institutions from eight different countries were involved in the benchmark calculations. The time to ignition and the maximum temperature are adequately captured by the calculations. It is believed that the benchmark constitutes an enlargement of the validation range for the codes to the conditions tested, thus enhancing the code applicability to other fuel assembly designs and configurations. The comparison of

  4. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they play an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to understand these mixing processes better and can provide information supporting a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modelling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other an inclined profile. (authors)
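The figure of merit in such assembly-head mixing benchmarks is typically an area-averaged temperature at the thermocouple location. The sketch below is a hedged illustration of that post-processing step; the cell temperatures and face areas are hypothetical, not taken from the benchmark.

```python
# Minimal sketch of the post-processing such a CFD benchmark implies:
# an area-weighted average coolant temperature over the cells of the plane
# where the in-core thermocouple sits. Cell data below are hypothetical.
import numpy as np

def area_weighted_mean(temperatures_K, areas_m2):
    temperatures_K = np.asarray(temperatures_K, dtype=float)
    areas_m2 = np.asarray(areas_m2, dtype=float)
    return float((temperatures_K * areas_m2).sum() / areas_m2.sum())

cell_T = [560.1, 561.4, 559.8, 562.0]       # K, hypothetical cell temperatures
cell_A = [1.2e-4, 0.9e-4, 1.1e-4, 1.0e-4]   # m^2, hypothetical cell face areas
print(f"area-averaged thermocouple temperature: {area_weighted_mean(cell_T, cell_A):.2f} K")
```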

  5. Benchmark physics tests in the metallic-fueled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1989-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size liquid-metal reactor (LMR) are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics, and spectrum. Analysis was done with three-dimensional nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs

  6. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size, liquid metal reactor are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics and spectrum. Analysis was done with 3-D nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs

  7. Thermal Hydraulic Computational Fluid Dynamics Simulations and Experimental Investigation of Deformed Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Mays, Brian [AREVA Federal Services, Lynchburg, VA (United States); Jackson, R. Brian [TerraPower, Bellevue, WA (United States)

    2017-03-08

    The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE Project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Laboratory. Project management was performed by AREVA Federal Services. This first-of-a-kind project produced both local point temperature measurements and local flow mixing experimental data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Matched Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries and the largest numerical simulations ever produced for wire-wrapped bundles.

  8. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations be supported by a calculational justification performed with certified computer codes. This guarantees that the calculational results remain within the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or with calculational tests computed to a defined uncertainty by certified precision codes such as those of the MCU type. The present level of international cooperation enlarges the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work on compiling a list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented

  9. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    Directory of Open Access Journals (Sweden)

    Domen Novak

    2018-01-01

    This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public.

  10. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous-energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical
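Data-testing studies of this kind are usually summarized with calculated-to-expected (C/E) eigenvalue ratios and a mean bias over the benchmark suite. The sketch below shows that bookkeeping only; the case names and eigenvalues are invented for illustration.

```python
# Hedged sketch of the usual data-testing bookkeeping: C/E ratios of k-eff
# over a suite of critical benchmarks and the mean bias in pcm.
# All numbers below are made up for illustration.
import statistics

calculated = {"case-a": 1.00123, "case-b": 0.99876, "case-c": 1.00342}
benchmark  = {"case-a": 1.00000, "case-b": 1.00010, "case-c": 1.00050}

ce_ratios = {name: calculated[name] / benchmark[name] for name in calculated}
bias_pcm = statistics.mean(1.0e5 * (calculated[n] - benchmark[n]) for n in calculated)
for name, ce in ce_ratios.items():
    print(f"{name}: C/E = {ce:.5f}")
print(f"mean eigenvalue bias: {bias_pcm:+.0f} pcm")
```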

  11. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    The fusion neutronics benchmark experiments have been done for vanadium and vanadium alloy by using the slab assembly and time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV and comparison were done with MCNP-4A calculations which was made by using evaluated nuclear data of JENDL-3.2, JENDL-Fusion File and FENDL/E-1.0. (author)

  12. The OECD/NEA Data Bank, its computer program services and benchmarking activities

    International Nuclear Information System (INIS)

    Sartori, E.; Galan, J.M.

    1998-01-01

    The OECD/NEA Data Bank collects, tests and distributes computer programs and numerical data in the field of nuclear energy applications. This activity is coordinated with several similar centres in the United States (ESTSC, NNDC, RSIC) and, outside the OECD area, through an arrangement with the IAEA. This information is shared worldwide for the benefit of scientists and engineers working on the safe and economic use of nuclear energy. The OECD/NEA Nuclear Science Committee, the supervising body of the Data Bank, has conducted a series of international computer code benchmark exercises with the aims of verifying the correctness of codes, building confidence in models used for predicting the macroscopic behaviour of nuclear systems, and driving the refinement of models where necessary. Exercises involving nuclear cross-section predictions, in-core reactor physics issues such as pin cells for different types of reactors, plutonium recycling, reconstruction of pin power within assemblies, core transients, reactor shielding and dosimetry, away-from-reactor issues such as criticality safety for transport and storage of spent fuel, shielding of radioactive material packages, and other problems connected with the back end of the fuel cycle are listed, and the relevant references are provided. (author)

  13. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous-energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also

  14. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the HPGMG benchmark for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL; some background on the Top500 list and the challenges of developing such a metric; a discussion of our design philosophy and methodology; and an overview of the benchmark specification. The primary documentation, with maintained details on the specification, can be found at hpgmg.org; the wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  15. Computational benchmark problems: a review of recent work within the American Nuclear Society Mathematics and Computation Division

    International Nuclear Information System (INIS)

    Dodds, H.L. Jr.

    1977-01-01

    An overview of the recent accomplishments of the Computational Benchmark Problems Committee of the American Nuclear Society Mathematics and Computation Division is presented. Solutions of computational benchmark problems in the following eight areas are presented and discussed: (a) high-temperature gas-cooled reactor neutronics, (b) pressurized water reactor (PWR) thermal hydraulics, (c) PWR neutronics, (d) neutron transport in a cylindrical ''black'' rod, (e) neutron transport in a boiling water reactor (BWR) rod bundle, (f) BWR transient neutronics with thermal feedback, (g) neutron depletion in a heavy water reactor, and (h) heavy water reactor transient neutronics. It is concluded that these problems and solutions are of considerable value to the nuclear industry because they have been and will continue to be useful in the development, evaluation, and verification of computer codes and numerical-solution methods

  16. Benchmark study of some thermal and structural computer codes for nuclear shipping casks

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Kanae, Yoshioki; Shimada, Hirohisa; Shimoda, Atsumu; Halliquist, J.O.

    1984-01-01

    There are many computer codes that could be applied to the design and analysis of nuclear material shipping casks. One of the problems the designer of a shipping cask faces is the choice of the computer codes to be used. For this reason, thermal and structural benchmark tests for nuclear shipping casks were carried out to clarify the adequacy of the calculation results. The calculation results are compared with the experimental ones. This report describes the results and discussion of the benchmark test. (author)

  17. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
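The code verification benchmarks based on manufactured solutions mentioned above follow a simple recipe: choose an exact solution, derive the forcing term it implies, and confirm that the discretization error decreases at the expected rate. The sketch below is a minimal illustration for a 1D Poisson problem, assuming a second-order finite-difference solver; it is not taken from the paper.

```python
# Minimal method-of-manufactured-solutions sketch (illustrative only):
# pick u(x) = sin(pi x) on [0, 1], derive the forcing term for -u'' = f,
# solve with second-order finite differences, and watch the error drop
# by roughly 4x per mesh refinement, verifying the coding of the solver.
import numpy as np

def solve_poisson(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior nodes
    f = np.pi**2 * np.sin(np.pi * x)        # manufactured source term
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)               # homogeneous Dirichlet BCs
    return np.max(np.abs(u - np.sin(np.pi * x)))

for n in (20, 40, 80):
    print(f"n = {n:3d}, max error = {solve_poisson(n):.3e}")
```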

  18. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Bess, John D.; Marshall, Margaret A.; Gorham, Mackenzie L.; Christensen, Joseph; Turnbull, James C.; Clark, Kim

    2011-01-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) (1) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) (2) were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  19. Models of natural computation : gene assembly and membrane systems

    NARCIS (Netherlands)

    Brijder, Robert

    2008-01-01

    This thesis is concerned with two research areas in natural computing: the computational nature of gene assembly and membrane computing. Gene assembly is a process occurring in unicellular organisms called ciliates. During this process genes are transformed through cut-and-paste operations. We

  20. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
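As a hedged illustration of the kind of metric such guides define, the snippet below computes a whole-building annual energy use intensity from assumed inputs; the numbers and the metric choice are illustrative, not taken from the guide.

```python
# Hedged illustration of a whole-building benchmarking metric:
# annual energy use intensity (site energy per unit floor area).
# The input figures below are hypothetical.
def energy_use_intensity(annual_site_energy_kwh, floor_area_m2):
    """Return annual energy use per square metre of floor area (kWh/m^2-yr)."""
    return annual_site_energy_kwh / floor_area_m2

eui = energy_use_intensity(annual_site_energy_kwh=2.4e6, floor_area_m2=5000.0)
print(f"energy use intensity: {eui:.0f} kWh/m^2 per year")
```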

  1. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has been recently implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on keff; (2) There is a consistent decrease in keff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving keff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) keff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated keff further from the benchmark value; (4) keff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated keff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tend to decrease keff while the 238U data tend to increase keff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase keff. For the mixed graphite and normal uranium-reflected assembly, a large increase in keff due to changes in the 238U evaluation moved the calculated keff much closer to the benchmark value; (8) There is little change in keff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in keff

  2. Benchmark problem suite for reactor physics study of LWR next generation fuels

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Ikehara, Tadashi; Ito, Takuya; Saji, Etsuro

    2002-01-01

    This paper proposes a benchmark problem suite for studying the physics of next-generation fuels of light water reactors. The target discharge burnup of the next-generation fuel was set to 70 GWd/t considering the increasing trend in discharge burnup of light water reactor fuels. UO2 and MOX fuels are included in the benchmark specifications. The benchmark problem consists of three different geometries: fuel pin cell, PWR fuel assembly and BWR fuel assembly. In the pin cell problem, detailed nuclear characteristics such as the burnup dependence of nuclide-wise reactivity were included in the required calculation results to facilitate the study of reactor physics. In the assembly benchmark problems, important parameters for in-core fuel management such as local peaking factors and reactivity coefficients were included in the required results. The benchmark problems provide comprehensive test problems for next-generation light water reactor fuels with extended high burnup. Furthermore, since the pin cell, PWR assembly and BWR assembly problems are independent, analysis of the entire benchmark suite is not necessary: e.g., the set of pin cell and PWR fuel assembly problems will be suitable for those in charge of PWR in-core fuel management, and the set of pin cell and BWR fuel assembly problems for those in charge of BWR in-core fuel management. (author)

  3. Development of a computer program for drop time and impact velocity of the rod cluster control assembly

    International Nuclear Information System (INIS)

    Choi, K.-S.; Yim, J.-S.; Kim, I.-K.; Kim, K.-T.

    1993-01-01

    In a PWR, the rod cluster control assembly (RCCA) used for shutdown is released upon the action of the control rod drive mechanism and falls through the guide thimble under its own weight. The drop time and impact velocity of the RCCA are two key parameters with respect to the reactivity insertion time and the mechanical integrity of the fuel assembly. Therefore, precise control of the drop time and impact velocity is a prerequisite for modifying the existing design features of the RCCA and guide thimble or for designing new ones. During its fall into the core, the RCCA is retarded by various forces acting on it, such as flow resistance and friction caused by the RCCA movement, buoyancy, and mechanical friction from contact with the inner surface of the guide thimble. However, the complicated coupling of these forces makes it difficult to derive an analytical dynamic equation for the drop time and impact velocity. This paper deals with the development of a computer program containing an analytical dynamic equation applicable to the Korean Fuel Assembly (KOFA) loaded in Korean nuclear power plants. The computer program is benchmarked against available single control rod drop tests. Since the predicted values are in good agreement with the test results, the computer program developed in this paper can be employed to modify the existing design features of the RCCA and guide thimble and to develop new design features for advanced nuclear reactors. (author)
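A heavily simplified sketch of the kind of dynamic equation described above is given below: a falling assembly retarded by quadratic hydraulic drag, buoyancy and a constant friction force, integrated explicitly until the travel length is reached. All coefficients are hypothetical; the actual program treats the flow resistance in much greater detail.

```python
# Hedged illustration only: rod drop retarded by quadratic drag, buoyancy and
# constant mechanical friction. All coefficients below are hypothetical.
G = 9.81          # m/s^2
MASS = 60.0       # kg, hypothetical RCCA mass
DRAG = 40.0       # kg/m, hypothetical quadratic drag coefficient
BUOYANCY = 80.0   # N, hypothetical buoyant force
FRICTION = 30.0   # N, hypothetical mechanical friction force
TRAVEL = 3.6      # m, hypothetical drop length
DT = 1.0e-4       # s, time step

t, z, v = 0.0, 0.0, 0.0
while z < TRAVEL:
    force = MASS * G - DRAG * v**2 - BUOYANCY - FRICTION
    v += force / MASS * DT        # explicit Euler integration
    z += v * DT
    t += DT
print(f"drop time ~ {t:.2f} s, impact velocity ~ {v:.2f} m/s")
```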

  4. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  5. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  6. Benchmarking of FA2D/PARCS Code Package

    International Nuclear Information System (INIS)

    Grgic, D.; Jecmenica, R.; Pevec, D.

    2006-01-01

    The FA2D/PARCS code package is used at the Faculty of Electrical Engineering and Computing (FER), University of Zagreb, for static and dynamic reactor core analyses. It consists of two codes: FA2D and PARCS. FA2D is a multigroup two-dimensional transport theory code for burnup calculations based on the collision probability method, developed at FER. It generates homogenised cross sections for both single pins and entire fuel assemblies. PARCS is an advanced nodal code developed at Purdue University for the US NRC, based on neutron diffusion theory, for three-dimensional whole-core static and dynamic calculations. It was modified at FER to enable internal 3D depletion calculations and the use of neutron cross-section data in the format produced by FA2D and interface codes. The FA2D/PARCS code system has been validated against NPP Krsko operational data (Cycles 1 and 21). As we intend to use this code package for the development of IRIS reactor loading patterns, the first logical step was to validate the FA2D/PARCS code package on a set of IRIS benchmarks, starting from a simple unit fuel cell, via a fuel assembly, to a full core benchmark. The IRIS 17x17 fuel with erbium burnable absorber was used in the last full core benchmark. The results of modelling the IRIS full core benchmark using the FA2D/PARCS code package have been compared with reference data, showing the adequacy of the FA2D/PARCS code package model for IRIS reactor core design. (author)

  7. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  8. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
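The sketch below is not the PAC-MAN model itself, but a hedged illustration of the modal-series style of analytical solution the abstract refers to, using the classic case of a plane wave scattered by an infinitely long rigid cylinder; the parameters are arbitrary.

```python
# Hedged illustration of a 2D modal series: scattered field of a unit plane
# wave incident on an infinitely long rigid cylinder of radius a. This is a
# simpler textbook relative of the PAC-MAN benchmark, not the benchmark itself.
import numpy as np
from scipy.special import hankel1, jvp, h1vp

def scattered_pressure(k, a, r, theta, n_modes=40):
    """Modal series for the scattered field (rigid cylinder, unit plane wave)."""
    p = 0.0 + 0.0j
    for n in range(n_modes):
        eps = 1.0 if n == 0 else 2.0
        coeff = -eps * (1j**n) * jvp(n, k * a) / h1vp(n, k * a)
        p += coeff * hankel1(n, k * r) * np.cos(n * theta)
    return p

# Hypothetical parameters: ka = 2, observation point at r = 3a, theta = 45 deg.
k, a = 2.0, 1.0
print(scattered_pressure(k, a, r=3.0, theta=np.pi / 4))
```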

  9. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  10. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  11. The analysis of one-dimensional reactor kinetics benchmark computations

    International Nuclear Information System (INIS)

    Sidell, J.

    1975-11-01

    During March 1973 the European American Committee on Reactor Physics proposed a series of simple one-dimensional reactor kinetics problems, with the intention of comparing the relative efficiencies of the numerical methods employed in various codes, which are currently in use in many national laboratories. This report reviews the contributions submitted to this benchmark exercise and attempts to assess the relative merits and drawbacks of the various theoretical and computer methods. (author)
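As a much simpler relative of the one-dimensional kinetics problems discussed above, the sketch below integrates point kinetics with a single delayed-neutron group for a small step reactivity insertion; the parameters are generic textbook values, not those of the benchmark problems.

```python
# Hedged, much-simplified kinetics illustration: point kinetics with one
# delayed-neutron group and a small step reactivity insertion, integrated
# with explicit Euler. Parameters are typical textbook values.
BETA = 0.0065        # delayed neutron fraction
LAMBDA_GEN = 1.0e-4  # s, prompt neutron generation time
DECAY = 0.08         # 1/s, delayed precursor decay constant
RHO = 0.1 * BETA     # step reactivity insertion
DT = 1.0e-4          # s, time step

n = 1.0                                  # relative neutron density
c = BETA * n / (LAMBDA_GEN * DECAY)      # equilibrium precursor concentration
for _ in range(int(10.0 / DT)):          # integrate 10 seconds
    dn = ((RHO - BETA) / LAMBDA_GEN) * n + DECAY * c
    dc = (BETA / LAMBDA_GEN) * n - DECAY * c
    n += DT * dn
    c += DT * dc
print(f"relative power after 10 s: {n:.3f}")
```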

  12. COSA II Further benchmark exercises to compare geomechanical computer codes for salt

    International Nuclear Information System (INIS)

    Lowe, M.J.S.; Knowles, N.C.

    1989-01-01

    Project COSA (COmputer COdes COmparison for SAlt) was a benchmarking exercise involving the numerical modelling of the geomechanical behaviour of heated rock salt. Its main objective was to assess the current European capability to predict the geomechanical behaviour of salt, in the context of the disposal of heat-producing radioactive waste in salt formations. Twelve organisations participated in the exercise in which their solutions to a number of benchmark problems were compared. The project was organised in two distinct phases: The first, from 1984-1986, concentrated on the verification of the computer codes. The second, from 1986-1988 progressed to validation, using three in-situ experiments at the Asse research facility in West Germany as a basis for comparison. This document reports the activities of the second phase of the project and presents the results, assessments and conclusions

  13. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  14. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    International Nuclear Information System (INIS)

    Abanades, Alberto; Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto; Bornos, Victor; Kiyavitskaya, Anna; Carta, Mario; Janczyszyn, Jerzy; Maiorino, Jose; Pyeon, Cheolho; Stanculescu, Alexander; Titarenko, Yury; Westmeier, Wolfram

    2008-01-01

    In December 2005, the International Atomic Energy Agency (IAEA) has started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to-date for some of the CRP benchmarks. (authors)
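The coupling studied in this CRP rests on the elementary source-multiplication relation for a sub-critical core, illustrated in the hedged snippet below; the k values are arbitrary examples.

```python
# Hedged sketch of the elementary relation behind source-driven sub-critical
# systems: each external source neutron is multiplied, on average, by
# M = 1 / (1 - k_src), where k_src is the source multiplication factor.
def source_multiplication(k_src):
    if not 0.0 <= k_src < 1.0:
        raise ValueError("expects a sub-critical system, 0 <= k_src < 1")
    return 1.0 / (1.0 - k_src)

for k in (0.95, 0.97, 0.98):   # illustrative values only
    print(f"k_src = {k:.2f} -> multiplication M = {source_multiplication(k):.1f}")
```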

  15. Benchmarking Severe Accident Computer Codes for Heavy Water Reactor Applications

    International Nuclear Information System (INIS)

    2013-12-01

    Requests for severe accident investigations and assurance of mitigation measures have increased for operating nuclear power plants and the design of advanced nuclear power plants. Severe accident analysis investigations necessitate the analysis of the very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. The IAEA organizes coordinated research projects (CRPs) to facilitate technology development through international collaboration among Member States. The CRP on Benchmarking Severe Accident Computer Codes for HWR Applications was planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Group on Advanced Technologies for HWRs (the TWG-HWR). This publication summarizes the results from the CRP participants. The CRP promoted international collaboration among Member States to improve the phenomenological understanding of severe core damage accidents and the capability to analyse them. The CRP scope included the identification and selection of a severe accident sequence, selection of appropriate geometrical and boundary conditions, conduct of benchmark analyses, comparison of the results of all code outputs, evaluation of the capabilities of computer codes to predict important severe accident phenomena, and the proposal of necessary code improvements and/or new experiments to reduce uncertainties. Seven institutes from five countries with HWRs participated in this CRP

  16. Benchmarking severe accident computer codes for heavy water reactor applications

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [International Atomic Energy Agency, Vienna (Austria)

    2010-07-01

    Consideration of severe accidents at a nuclear power plant (NPP) is an essential component of the defence in depth approach used in nuclear safety. Severe accident analysis involves very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. International cooperative research programmes are established by the IAEA in areas that are of common interest to a number of Member States. These co-operative efforts are carried out through coordinated research projects (CRPs), typically 3 to 6 years in duration, and often involving experimental activities. Such CRPs allow a sharing of efforts on an international basis, foster team-building and benefit from the experience and expertise of researchers from all participating institutes. The IAEA is organizing a CRP on benchmarking severe accident computer codes for heavy water reactor (HWR) applications. The CRP scope includes defining the severe accident sequence and conducting benchmark analyses for HWRs, evaluating the capabilities of existing computer codes to predict important severe accident phenomena, and suggesting necessary code improvements and/or new experiments to reduce uncertainties. The CRP has been planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Groups on Advanced Technologies for HWRs. (author)

  17. Benchmark testing and independent verification of the VS2DT computer code

    International Nuclear Information System (INIS)

    McCord, J.T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation
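The governing equations named above can be illustrated with a hedged, one-dimensional sketch: an explicit finite-difference solution of the advection-dispersion equation with a fixed-concentration inlet. The parameters are arbitrary and are not taken from the benchmark problems.

```python
# Hedged sketch of the 1D advection-dispersion equation
# dc/dt = D d2c/dx2 - v dc/dx, solved with explicit finite differences and
# upwind advection. All parameters are arbitrary illustration values.
import numpy as np

D, V = 0.01, 0.1            # dispersion coefficient, pore velocity (arbitrary units)
NX, DX, DT, NT = 101, 0.01, 0.004, 500

c = np.zeros(NX)
c[0] = 1.0                  # constant-concentration inlet boundary
for _ in range(NT):
    diff = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / DX**2
    adv = -V * (c[1:-1] - c[:-2]) / DX          # upwind differencing
    c[1:-1] += DT * (diff + adv)
    c[0], c[-1] = 1.0, c[-2]                    # inlet fixed, outlet zero-gradient
print(f"concentration at mid-column after {NT*DT:.1f} time units: {c[NX//2]:.3f}")
```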

  18. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  19. Evaluation of the computer code system RADHEAT-V4 by analysing benchmark problems on radiation shielding

    International Nuclear Information System (INIS)

    Sakamoto, Yukio; Naito, Yoshitaka

    1990-11-01

    A computer code system, RADHEAT-V4, has been developed for safety evaluation of the radiation shielding of nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The radiations evaluated are neutrons and gamma rays. The benchmark problems consist of penetration, streaming and skyshine. The computed results are more accurate than those obtained with the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. However, RADHEAT-V4 requires a large core memory and extensive I/O. (author)

  20. The solution of the LEU and MOX WWER-1000 calculation benchmark with the CARATE - multicell code

    International Nuclear Information System (INIS)

    Hordosy, G.; Maraczy, Cs.

    2000-01-01

    Preparations for the disposition of weapons-grade plutonium in WWER-1000 reactors are in progress. The benchmark was defined by the Kurchatov Institute (S. Bychkov, M. Kalugin, A. Lazarenko) to assess the applicability of computer codes for weapons-grade MOX assembly calculations, within the framework of the 'Task Force on Reactor-Based Plutonium Disposition' of the OECD Nuclear Energy Agency. (Authors)

  1. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  2. Benchmarking with high-order nodal diffusion methods

    International Nuclear Information System (INIS)

    Tomasevic, D.; Larsen, E.W.

    1993-01-01

    Significant progress in the solution of multidimensional neutron diffusion problems was made in the late 1970s with the introduction of nodal methods. Modern nodal reactor analysis codes provide significant improvements in both accuracy and computing speed over earlier codes based on fine-mesh finite difference methods. In the past, the performance of advanced nodal methods was determined by comparisons with fine-mesh finite difference codes. More recently, the excellent spatial convergence of nodal methods has permitted their use in establishing reference solutions for some important benchmark problems. The recent development of the self-consistent high-order nodal diffusion method and its subsequent variational formulation has permitted the calculation of reference solutions with one node per assembly mesh size. In this paper, we compare results for four selected benchmark problems to those obtained by high-order response matrix methods and by two well-known state-of-the-art nodal methods (the 'analytical' and 'nodal expansion' methods)
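
    As a reminder of what the nodal codes discussed here solve, the steady-state multigroup neutron diffusion equation has the generic form below (standard notation; assembly-level details such as discontinuity factors are omitted).

    ```latex
    % Multigroup diffusion equation for group g; nodal methods integrate this
    % over coarse nodes (typically one node per assembly).
    \[
    -\nabla \cdot D_g(\mathbf{r}) \nabla \phi_g(\mathbf{r})
      + \Sigma_{r,g}(\mathbf{r})\, \phi_g(\mathbf{r})
      = \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}(\mathbf{r})\, \phi_{g'}(\mathbf{r})
      + \sum_{g' \neq g} \Sigma_{s,g' \to g}(\mathbf{r})\, \phi_{g'}(\mathbf{r})
    \]
    ```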

  3. Heavy nucleus resonant absorption calculation benchmarks

    International Nuclear Information System (INIS)

    Tellier, H.; Coste, H.; Raepsaet, C.; Van der Gucht, C.

    1993-01-01

    The calculation of the space and energy dependence of the heavy nucleus resonant absorption in a heterogeneous lattice is one of the hardest tasks in reactor physics. Because of the computer time and memory needed, it is impossible to represent finely the cross-section behavior in the resonance energy range for everyday computations. Consequently, reactor physicists use a simplified formalism, the self-shielding formalism. As no clean and detailed experimental results are available to validate the self-shielding calculations, Monte Carlo computations are used as a reference. These results, which were obtained with the TRIPOLI continuous-energy Monte Carlo code, constitute a set of numerical benchmarks that can be used to evaluate the accuracy of the techniques or formalisms included in any reactor physics code. Examples of such evaluations, for the new assembly code APOLLO2 and the slowing-down code SECOL, are given for 238U and 232Th fuel elements
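
    The self-shielding formalism referred to above is usually built around flux-weighted effective group cross sections; one common narrow-resonance form is sketched below (generic notation, not specific to APOLLO2 or SECOL).

    ```latex
    % Effective (self-shielded) cross section for reaction x in group g,
    % with a narrow-resonance approximation of the weighting flux.
    \[
    \bar{\sigma}_{x,g}
      = \frac{\int_{g} \sigma_x(E)\, \varphi(E)\, dE}{\int_{g} \varphi(E)\, dE},
    \qquad
    \varphi(E) \;\approx\; \frac{1}{E\,\bigl[\sigma_t(E) + \sigma_0\bigr]}
    \]
    % sigma_0: background (dilution) cross section per absorber nucleus
    ```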

  4. VENUS-2 Benchmark Problem Analysis with HELIOS-1.9

    International Nuclear Information System (INIS)

    Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung

    2014-01-01

    Since reliable benchmark results are available in the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can assess the credibility of a code by comparing against them. In this paper, the solution of the VENUS-2 benchmark problem obtained with HELIOS 1.9 using the ENDF/B-VI library (NJOY91.13) is compared with the result from HELIOS 1.7, with the MCNP-4B result taken as reference data. The comparison covers the results of pin cell, assembly, and core calculations. The eigenvalues are assessed by comparison with the results from other codes. In the case of the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and core calculations. Although HELIOS 1.9 utilizes an inflow transport correction, it seems to have a limited effect on the error in the baffle-reflector condition
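
    Since the comparison above is quoted in pcm, a minimal sketch of the conversion is given below. The function name and sample eigenvalues are illustrative only; the benchmark report may quote the simpler difference (k_test - k_ref) * 1e5 rather than a reactivity difference.

    ```python
    def diff_pcm(k_test: float, k_ref: float) -> float:
        """Reactivity difference between two eigenvalues, in pcm (1e-5).

        Uses the common convention (1/k_ref - 1/k_test) * 1e5.
        """
        return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

    # Placeholder values for illustration only (not taken from the VENUS-2 report).
    print(diff_pcm(1.00010, 1.00000))  # ~10 pcm
    ```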

  5. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  6. Benchmark criticality experiments for fast fission configuration with high enriched nuclear fuel

    International Nuclear Information System (INIS)

    Sikorin, S.N.; Mandzik, S.G.; Polazau, S.A.; Hryharovich, T.K.; Damarad, Y.V.; Palahina, Y.A.

    2014-01-01

    Benchmark criticality experiments on a fast heterogeneous configuration with high enriched uranium (HEU) nuclear fuel were performed using the 'Giacint' critical assembly of the Joint Institute for Power and Nuclear Research - Sosny (JIPNR-Sosny) of the National Academy of Sciences of Belarus. The critical assembly core comprised fuel assemblies without a casing, sized for a 34.8 mm wrench. The fuel assemblies contain 19 fuel rods of two types. The first type is metallic uranium fuel rods with 90% enrichment in U-235; the second is uranium dioxide fuel rods with 36% enrichment in U-235. The total fuel rod length is 620 mm, and the active fuel length is 500 mm. The outer fuel rod diameter is 7 mm, the wall is 0.2 mm thick, and the fuel material diameter is 6.4 mm. The clad material is stainless steel. The side radial reflector consists of an inner layer of beryllium and an outer layer of stainless steel. The top and bottom axial reflectors are of stainless steel. The analysis of the experimental results obtained from these benchmark experiments, performed by developing detailed calculation models and carrying out simulations for the different experiments, is presented. The sensitivity of the obtained results to the material specifications and the modeling details was examined. The analyses used the MCNP and MCU computer programs. This paper presents the experimental and analytical results. (authors)

  7. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  8. Development and validation of the computer program TNHXY

    International Nuclear Information System (INIS)

    Xolocostli M, V.; Valle G, E. del; Alonso V, G.

    2003-01-01

    This work describes the development and validation of the computer program TNHXY (Neutron Transport with Nodal Hybrid schemes in X-Y geometry), which solves the discrete-ordinates neutron transport equations using a discontinuous bi-linear (DBiL) nodal hybrid method. One of the immediate applications of TNHXY is the analysis of nuclear fuel assemblies, in particular those of BWRs. Its validation was carried out by reproducing results for test or benchmark problems that other authors have solved using other numerical techniques. This ensures that the program will provide results of similar accuracy for other problems of the same type. To accomplish this, two benchmark problems were solved. The first consists of a BWR fuel assembly in a 7x7 array, with and without a control rod. The results obtained with TNHXY are consistent with those reported for the TWOTRAN code. The second benchmark problem is a Mixed Oxide (MOX) fuel assembly in a 10x10 array. This problem is known as the WPPR benchmark problem of the NEA Data Bank, and the results are compared with those obtained with commercial codes like HELIOS, MCNP-4B and CPM-3. (Author)
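
    For context, the within-group discrete-ordinates equations in X-Y geometry that codes of this type discretize have the generic form below, where ψ_m is the angular flux along ordinate m and q_m collects the scattering, fission and external sources (generic notation, not the TNHXY documentation).

    ```latex
    \[
    \mu_m \frac{\partial \psi_m(x,y)}{\partial x}
      + \eta_m \frac{\partial \psi_m(x,y)}{\partial y}
      + \Sigma_t(x,y)\, \psi_m(x,y)
      = q_m(x,y),
    \qquad m = 1, \dots, M
    \]
    ```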

  9. Elasto-plastic benchmark calculations. Step 1: verification of the numerical accuracy of the computer programs

    International Nuclear Information System (INIS)

    Corsi, F.

    1985-01-01

    In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This leads to the necessity of developing nonlinear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations make it possible to carry out these two actions, providing at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEE), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for LMFBRs has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially identified, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Steps 1 to 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and offers some concluding comments on this first action. It should be pointed out that, even though the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned

  10. Parameters calculation of fuel assembly with complex geometry

    International Nuclear Information System (INIS)

    Wu Hongchun; Ju Haitao; Yao Dong

    2006-01-01

    The DRAGON code was developed for the CANDU reactor by Ecole Polytechnique de Montreal, Canada. In order to validate the applicability of the DRAGON code to complex-geometry fuel assembly calculations, the rod-type fuel assembly of a PWR benchmark problem and the plate-type fuel assembly of an MTR benchmark problem were analyzed with the DRAGON code. Some other fuel assembly shapes are also discussed briefly. The calculation results show that the DRAGON code can be used to calculate fuel assemblies of various shapes with high precision. (authors)

  11. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. a spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programmes is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)
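
    A useful reference point for the source-core coupling studied in the CRP is the elementary source multiplication relation sketched below; this is a simplification for orientation only, whereas the benchmarks themselves rely on full transport models.

    ```latex
    % Neutron multiplication sustained by an external source in a sub-critical
    % core with source multiplication factor k_s (rho < 0 for sub-critical).
    \[
    M = \frac{1}{1 - k_s},
    \qquad
    \rho = \frac{k_s - 1}{k_s}
    \]
    ```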

  12. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  13. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    Science.gov (United States)

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of …. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
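
    A minimal sketch of the kind of threshold optimization described above is shown below, assuming per-patient rCBF maps and DWI lesion masks as NumPy arrays. The function names, the error metric (median absolute volume error) and all inputs are hypothetical and are not taken from the published tool.

    ```python
    import numpy as np

    def volume_ml(mask: np.ndarray, voxel_ml: float) -> float:
        """Lesion volume in millilitres from a boolean voxel mask."""
        return float(mask.sum()) * voxel_ml

    def sweep_rcbf_thresholds(rcbf_maps, dwi_masks, voxel_ml, thresholds):
        """Return the (threshold, error) pair minimising the median absolute
        volume error between predicted CTP core and the DWI reference."""
        best = None
        for t in thresholds:
            errors = [abs(volume_ml(r < t, voxel_ml) - volume_ml(d, voxel_ml))
                      for r, d in zip(rcbf_maps, dwi_masks)]
            err = float(np.median(errors))
            if best is None or err < best[1]:
                best = (t, err)
        return best
    ```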

  14. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of the computer code SIMPSEX for high-plutonium FBR flowsheets were reported in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation with the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (Pu Aq > 30 g/L) resulted in better results for the 75% Pu flowsheet benchmark. Below 30 g/L Pu Aq concentration, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnups and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with the published predictions from the conventional codes SEPHIS, PUMA, PUNE and PUBG. The SIMPSEX results were found to be comparable to or better than the results from the above-listed codes. In addition, recently reported UREX demo results along with AMUSE simulations are also compared with SIMPSEX predictions. The results of benchmarking SIMPSEX with these 14 benchmark flowsheets are discussed in this report. (author)

  15. In-cylinder diesel spray combustion simulations using parallel computation: A performance benchmarking study

    International Nuclear Information System (INIS)

    Pang, Kar Mun; Ng, Hoon Kiat; Gan, Suyin

    2012-01-01

    Highlights: ► A performance benchmarking exercise is conducted for diesel combustion simulations. ► The reduced chemical mechanism shows its advantages over base and skeletal models. ► High efficiency and great reduction of CPU runtime are achieved through 4-node solver. ► Increasing ISAT memory from 0.1 to 2 GB reduces the CPU runtime by almost 35%. ► Combustion and soot processes are predicted well with minimal computational cost. - Abstract: In the present study, in-cylinder diesel combustion simulation was performed with parallel processing on an Intel Xeon Quad-Core platform to allow both fluid dynamics and chemical kinetics of the surrogate diesel fuel model to be solved simultaneously on multiple processors. Here, Cartesian Z-Coordinate was selected as the most appropriate partitioning algorithm since it computationally bisects the domain such that the dynamic load associated with fuel particle tracking was evenly distributed during parallel computations. Other variables examined included number of compute nodes, chemistry sizes and in situ adaptive tabulation (ISAT) parameters. Based on the performance benchmarking test conducted, parallel configuration of 4-compute node was found to reduce the computational runtime most efficiently whereby a parallel efficiency of up to 75.4% was achieved. The simulation results also indicated that accuracy level was insensitive to the number of partitions or the partitioning algorithms. The effect of reducing the number of species on computational runtime was observed to be more significant than reducing the number of reactions. Besides, the study showed that an increase in the ISAT maximum storage of up to 2 GB reduced the computational runtime by 50%. Also, the ISAT error tolerance of 10 −3 was chosen to strike a balance between results accuracy and computational runtime. The optimised parameters in parallel processing and ISAT, as well as the use of the in-house reduced chemistry model allowed accurate

  16. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and the SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC one includes event generators. In this paper, we find that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. We also compare benchmark results obtained with the EGS4 Monte Carlo simulation program against those from the two HEP benchmark suites, and find that the EGS4 result is not consistent with either. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of a standardization of HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  17. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    Schaefer, R. W.; McKnight, R. D.

    2000-01-01

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data-testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data-testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied when using the less refined methods and models. Data-testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data-testing results to infer the quality of the nuclear data files
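
    For readers unfamiliar with the notation, the spectral indices quoted above are commonly defined as reaction-rate ratios of the form below (generic definitions, not the CSEWG specification text).

    ```latex
    % 28 = 238U, 25 = 235U; f = fission, c (here sigma_gamma) = capture
    \[
    f_{28}/f_{25}
      = \frac{\int \sigma_{f}^{\,238\mathrm{U}}(E)\, \phi(E)\, dE}
             {\int \sigma_{f}^{\,235\mathrm{U}}(E)\, \phi(E)\, dE},
    \qquad
    c_{28}/f_{25}
      = \frac{\int \sigma_{\gamma}^{\,238\mathrm{U}}(E)\, \phi(E)\, dE}
             {\int \sigma_{f}^{\,235\mathrm{U}}(E)\, \phi(E)\, dE}
    \]
    ```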

  18. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation
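
    The '1$ insertion' limit mentioned above refers to reactivity expressed in dollars; a minimal sketch of the conversion from a pair of eigenvalues is given below, with placeholder values that are not ATR-C data.

    ```python
    def reactivity_worth_dollars(k_without: float, k_with: float, beta_eff: float) -> float:
        """Reactivity worth of an inserted sample, in dollars.

        rho = (k_with - k_without) / (k_with * k_without), expressed in
        dollars by dividing by the effective delayed neutron fraction.
        """
        rho = (k_with - k_without) / (k_with * k_without)
        return rho / beta_eff

    # Placeholder eigenvalues and beta_eff for illustration only.
    print(reactivity_worth_dollars(1.00000, 1.00350, 0.0070))  # ~0.5 $
    ```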

  19. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.

    Science.gov (United States)

    Li, Meng; Liu, Jun; Tsien, Joe Z

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly - a group of coherently or sequentially activated neurons - to represent a percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed randomness within the mature but naïve cell assembly, the Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by a power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or improve the computational logic of brain circuits.
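
    The power-of-two principle can be illustrated with a short sketch: for i distinct inputs there are 2^i - 1 non-empty input combinations, each of which the theory assigns to a neuronal clique within the motif. This is a toy enumeration only; the biological organization is of course far richer.

    ```python
    from itertools import combinations

    def clique_count(n_inputs: int) -> int:
        """Power-of-two-based count of non-empty input combinations: 2**i - 1."""
        return 2 ** n_inputs - 1

    def input_combinations(inputs):
        """Enumerate the specific-to-general combinations a motif would cover."""
        return [c for r in range(1, len(inputs) + 1)
                for c in combinations(inputs, r)]

    print(clique_count(4))                      # 15
    print(input_combinations(["A", "B", "C"]))  # 7 combinations, from ('A',) to ('A', 'B', 'C')
    ```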

  20. Thermal and fast reactor benchmark testing of ENDF/B-6.4

    International Nuclear Information System (INIS)

    Liu Guisheng

    1999-01-01

    The benchmark testing for ENDF/B-6.4 was done with the same benchmark experiments and calculation method as for B-6.2. The effective multiplication factors keff, the central reaction rate ratios of fast assemblies, and the lattice cell reaction rate ratios of thermal lattice cell assemblies were calculated and compared with the testing results of B-6.2 and CENDL-2. It is obvious that the 238U data files are most important for the calculations of large fast reactors and lattice thermal reactors. However, the 238U data in the new version of ENDF/B-6 have not been renewed. Only the data for 235U, 27Al, 14N and 2D have been renewed in ENDF/B-6.4. Therefore, it is shown that the thermal reactor benchmark testing results are remarkably improved while the fast reactor benchmark testing results are not improved

  1. Experience in programming Assembly language of CDC CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Caldeira, A.D.

    1987-10-01

    Aiming to optimize the processing time of the BCG computer code on the CDC CYBER 170/750 computer, the INTERP subroutine was converted from FORTRAN-V to Assembly language. The BCG code was developed for solving the neutron transport equation by an iterative method, and the INTERP subroutine is the innermost loop of the code, carrying out five types of interpolation. The central processor unit Assembly language of the CDC CYBER 170/750 computer and its application in implementing the interpolation subroutine of the BCG code are described. (M.C.K.)

  2. A benchmark test of computer codes for calculating average resonance parameters

    International Nuclear Information System (INIS)

    Ribon, P.; Thompson, A.

    1983-01-01

    A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)

  3. 3-D extension C5G7 MOX benchmark results using PARTISN

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A. [Los Alamos National Laboratory, CCS-4 Transport Methods Group, Los Alamos, NM (United States)

    2005-07-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional Extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume-fraction Cartesian mesh from combinatorial geometry descriptions. The keff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark-quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as finely as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded.

  4. 3-D extension C5G7 MOX benchmark results using PARTISN

    International Nuclear Information System (INIS)

    Dahl, J.A.

    2005-01-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional Extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume-fraction Cartesian mesh from combinatorial geometry descriptions. The keff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark-quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as finely as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded

  5. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  6. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    Directory of Open Access Journals (Sweden)

    Meng eLi

    2016-04-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly - a group of coherently or sequentially activated neurons - to represent a percept, memory, or concept. Despite the rekindled interest in this age-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed local randomness within the mature but naïve cell assembly, the recent Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as functional connectivity motifs (FCMs). Principal cells within such an FCM are organized by a power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or enhance the computational logic of brain circuits.

  7. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we briefly describe the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  8. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  9. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that, if available, they would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark since its two lowest-lying 1A1 states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
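
    The iterative phase estimation algorithm mentioned above recovers the energy from a measured phase; schematically (up to sign and scaling conventions, and assuming the register has sufficient overlap with the target eigenstate):

    ```latex
    % Propagator U encodes the Hamiltonian; the measured phase phi gives the energy.
    \[
    U = e^{-i \hat{H} \tau},
    \qquad
    U \lvert \psi \rangle = e^{2\pi i \varphi} \lvert \psi \rangle
    \;\;\Longrightarrow\;\;
    E = -\frac{2\pi \varphi}{\tau}
    \]
    ```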

  10. Optimizing and benchmarking de novo transcriptome sequencing: from library preparation to assembly evaluation.

    Science.gov (United States)

    Hara, Yuichiro; Tatsumi, Kaori; Yoshida, Michio; Kajikawa, Eriko; Kiyonari, Hiroshi; Kuraku, Shigehiro

    2015-11-18

    RNA-seq enables gene expression profiling in selected spatiotemporal windows and yields massive sequence information with relatively low cost and time investment, even for non-model species. However, there remains considerable room for optimizing its workflow in order to take full advantage of continuously developing sequencing capacity. Transcriptome sequencing for three embryonic stages of the Madagascar ground gecko (Paroedura picta) was performed with the Illumina platform. The output reads were assembled de novo for reconstructing transcript sequences. In order to evaluate the completeness of transcriptome assemblies, we prepared a reference gene set consisting of vertebrate one-to-one orthologs. To take advantage of increased read lengths of >150 nt, we demonstrated shortened RNA fragmentation time, which resulted in a dramatic shift of the insert size distribution. To evaluate the products of multiple de novo assembly runs incorporating reads with different RNA sources, read lengths, and insert sizes, we introduce a new reference gene set, the core vertebrate genes (CVG), consisting of 233 genes that are shared as one-to-one orthologs by all vertebrate genomes examined (29 species). The completeness assessment performed by the computational pipelines CEGMA and BUSCO referring to CVG demonstrated higher accuracy and resolution than with the gene set previously established for this purpose. As a result of the assessment with CVG, we have derived the most comprehensive transcript sequence set of the Madagascar ground gecko by assembling individual libraries followed by clustering the assembled sequences based on their overall similarities. Our results provide several insights into optimizing the de novo RNA-seq workflow, including the coordination between library insert size and read length, which manifested in improved connectivity of assemblies. The approach and assembly assessment with CVG demonstrated here would be applicable to transcriptome analysis of other species as well.
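
    A minimal sketch of a CVG-style completeness score is given below. In the actual study, recovery of each of the 233 core vertebrate genes is judged by the CEGMA/BUSCO pipelines, whereas here the gene identifiers and the simple set intersection are purely illustrative.

    ```python
    def cvg_completeness(recovered_genes: set, cvg_genes: set) -> float:
        """Percentage of core vertebrate genes (CVG) recovered in an assembly.

        A gene counts as recovered if its identifier is present in the
        assembly's hit set (a stand-in for CEGMA/BUSCO mapping results).
        """
        return 100.0 * len(recovered_genes & cvg_genes) / len(cvg_genes)

    # Hypothetical identifiers for illustration; the real CVG set has 233 genes.
    cvg = {f"CVG{i:03d}" for i in range(1, 234)}
    assembly_hits = {f"CVG{i:03d}" for i in range(1, 220)}
    print(f"{cvg_completeness(assembly_hits, cvg):.1f}% complete")  # 94.0% complete
    ```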

  11. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show that a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly, we develop an algebraic approach to high-level Quantum Languages using a Quantum Assembly language and a Quantum C language as examples.

  12. Calculation of the 5th AER dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, E.K.; Kontio, H.

    1998-01-01

    The model used for the calculation of the 5th AER dynamic benchmark with the APROS code is presented. In the calculation of the 5th AER dynamic benchmark, the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal hydraulic channel. The five-equation thermal hydraulic model was used in the benchmark. The plant process and automation were described with a generic VVER-440 plant model created by IVO PE. (author)

  13. Benchmark exercise for fluid flow simulations in a liquid metal fast reactor fuel assembly

    Energy Technology Data Exchange (ETDEWEB)

    Merzari, E., E-mail: emerzari@anl.gov [Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Fischer, P. [Mathematics and Computer Science Division, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Yuan, H. [Nuclear Engineering Division, Argonne National Laboratory, Lemont, IL (United States); Van Tichelen, K.; Keijers, S. [SCK-CEN, Boeretang 200, Mol (Belgium); De Ridder, J.; Degroote, J.; Vierendeels, J. [Ghent University, Ghent (Belgium); Doolaard, H.; Gopala, V.R.; Roelofs, F. [NRG, Petten (Netherlands)

    2016-03-15

    Highlights: • A EURATOM-US INERI consortium has performed a benchmark exercise related to fast reactor assembly simulations. • LES calculations for a wire-wrapped rod bundle are compared with RANS calculations. • Results show good agreement for velocity and cross flows. - Abstract: As part of a U.S. Department of Energy International Nuclear Energy Research Initiative (I-NERI), Argonne National Laboratory (Argonne) is collaborating with the Dutch Nuclear Research and consultancy Group (NRG), the Belgian Nuclear Research Centre (SCK·CEN), and Ghent University (UGent) in Belgium to perform and compare a series of fuel-pin-bundle calculations representative of a fast reactor core. A wire-wrapped fuel bundle is a complex configuration for which little data is available for verification and validation of new simulation tools. UGent and NRG performed their simulations with commercially available computational fluid dynamics (CFD) codes. The high-fidelity Argonne large-eddy simulations were performed with Nek5000, used for CFD in the Simulation-based High-efficiency Advanced Reactor Prototyping (SHARP) suite. SHARP is a versatile tool that is being developed to model the core of a wide variety of reactor types under various scenarios. It is intended both to serve as a surrogate for physical experiments and to provide insight into experimental results. Comparison of the results obtained by the different participants with the reference Nek5000 results shows good agreement, especially for the cross-flow data. The comparison also helps highlight issues with current modeling approaches. The results of the study will be valuable in the design and licensing process of MYRRHA, a flexible fast research reactor under design at SCK·CEN that features wire-wrapped fuel bundles cooled by lead-bismuth eutectic.

  14. Benchmark studies of computer prediction techniques for equilibrium chemistry and radionuclide transport in groundwater flow

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1988-01-01

    A brief review of two recent benchmark exercises is presented. These were separately concerned with the equilibrium chemistry of groundwater and the geosphere migration of radionuclides, and involved the use of a total of 19 computer codes by 11 organisations in Europe and Canada. A similar methodology was followed for each exercise, in that a series of hypothetical test cases was used to explore the limits of each code's application and so provide an overview of current modelling potential. Aspects of the user-friendliness of individual codes were also considered. The benchmark studies have benefited participating organisations by providing a means of verifying current codes, and have provided problem data sets against which future models may be compared. (author)

  15. The Use of Hebbian Cell Assemblies for Nonlinear Computation

    DEFF Research Database (Denmark)

    Tetzlaff, Christian; Dasgupta, Sakyasingha; Kulvicius, Tomas

    2015-01-01

    When learning a complex task our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns. During this, a network with multiple, simultaneously active, and computationally powerful cell assemblies is created. How such ordered structures are formed while preser...... computing complex non-linear transforms and - for execution - must cooperate with each other without interference. This mechanism, thus, permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control....

  16. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  17. Modeling biological problems in computer science: a case study in genome assembly.

    Science.gov (United States)

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
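
    As a toy illustration of the modeling step the tutorial walks through, a de Bruijn graph (one common computer-science formulation of the assembly problem) can be built from reads in a few lines. The reads and k value below are hypothetical, and real assemblers add error correction, coverage handling and graph simplification.

    ```python
    from collections import defaultdict

    def de_bruijn_graph(reads, k):
        """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
        graph = defaultdict(list)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])  # prefix node -> suffix node
        return graph

    reads = ["ACGTACG", "GTACGTT"]  # hypothetical reads
    for node, neighbours in de_bruijn_graph(reads, 4).items():
        print(node, "->", neighbours)
    ```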

  18. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  19. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  20. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; Core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOx assemblies. SLB1 is a 900 MWe PWR, with 30% MOx fuel loading. The standard MOx assemblies, used in Saint-Laurent B1 reactor, include three zones with different plutonium enrichments, high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  1. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by the Working Group on Accelerator Shielding in the Research Committee on Radiation Behavior of the Atomic Energy Society of Japan were compiled by the Radiation Safety Control Center of the National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithms, the accuracy of computer codes and the nuclear data used in the codes. (author)

  2. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithms and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  3. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) on a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure rather than a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
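
    As background for readers unfamiliar with de Bruijn graph assembly, the sketch below shows the core data structure in a few lines of Python. It is not GiGA itself (which distributes this graph over Giraph workers on Hadoop); it is a minimal single-machine illustration, and the reads, k-mer size and function names are chosen purely for the example.

```python
from collections import defaultdict

def build_de_bruijn_graph(reads, k=4):
    """Build a simple de Bruijn graph: nodes are (k-1)-mers, and an edge links
    the prefix and suffix of every k-mer observed in the reads."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Toy usage: two overlapping reads from the same short sequence.
reads = ["ACGTAC", "GTACGT"]
for node, successors in sorted(build_de_bruijn_graph(reads).items()):
    print(node, "->", sorted(successors))
```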

  4. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. The paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined as one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, the i486, the DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  5. Calculation of the fifth atomic energy research dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska Eija Karita; Kontio Harii

    1998-01-01

    The hand-out presents the model used for the calculation of the fifth atomic energy research dynamic benchmark with the APROS code. In the calculation of the fifth atomic energy research dynamic benchmark the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic WWER-440 plant model created by IVO Power Engineering Ltd., Finland. (Author)

  6. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  7. Assessment of TRAC-PF1/MOD3 Mark-22 assembly model using SRL ''A'' tank single-assembly flow experiments

    International Nuclear Information System (INIS)

    Fischer, S.R.; Lam, K.; Lin, J.C.

    1991-01-01

    This paper summarizes the results of an assessment of our TRAC-PF1/MOD3 Mark-22 prototype fuel assembly model against single-assembly data obtained from the ''A'' Tank single-assembly tests that were performed at the Savannah River Laboratory. We felt the data characterize prototypic assembly behavior over a range of air-water flow conditions of interest for loss-of-coolant accident (LOCA) calculations. This study was part of a benchmarking effort performed to evaluate and validate a multiple-assembly, full-plant model that is being developed by Los Alamos National Laboratory to study various aspects of the Savannah River plant operating conditions, including LOCA transients, using TRAC-PF1/MOD3 Version 1.10. The results of this benchmarking effort demonstrate that TRAC-PF1/MOD3 is capable of calculating plenum conditions and assembly flows during conditions thought to be typical of the Emergency Cooling System (ECS) phase of a LOCA. 10 refs., 12 figs

  8. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithm and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  9. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which was performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  10. Burn-up Credit Criticality Safety Benchmark-Phase II-E. Impact of Isotopic Inventory Changes due to Control Rod Insertions on Reactivity and the End Effect in PWR UO2 Fuel Assemblies

    International Nuclear Information System (INIS)

    Neuber, Jens Christian; Tippl, Wolfgang; Hemptinne, Gwendoline de; Maes, Philippe; Ranta-aho, Anssu; Peneliau, Yannick; Jutier, Ludyvine; Tardy, Marcel; Reiche, Ingo; Kroeger, Helge; Nakata, Tetsuo; Armishaw, Malcom; Miller, Thomas M.

    2015-01-01

    The report describes the final results of the Phase II-E Burn-up Credit Criticality Benchmark conducted by the Expert Group on Burn-up Credit Criticality Safety. The objective of Phase II of the Burn-up Credit Criticality Safety programme is to study the impact of axial burn-up profiles of PWR UO 2 spent fuel assemblies on the reactivity of PWR UO 2 spent fuel assembly configurations. The objective of the Phase II-E benchmark was to study the impact of changes on the spent nuclear fuel isotopic composition due to control rod insertion during depletion on the reactivity and the end effect of spent fuel assemblies with realistic axial burn-up profiles for different control rod insertion depths ranging from 0 cm (no insertion) to full insertion (i.e. to the case that the fuel assemblies were exposed to control rod insertion over their full active length). For this purpose two axial burn-up profiles have been extracted from an AREVA-NP-GmbH-owned 17x17-(24+1) PWR UO 2 spent fuel assembly burn-up profile database. One profile has an average burn-up of 30 MWd/kg U, the other profile is related to an average burn-up of 50 MWd/kg U. Two profiles with different average burn-up values were selected because the shape of the burn-up profile is affected by the average burn-up and the end effect depends on the average burn-up of the fuel. The Phase II-E benchmark exercise complements the Phase II-C and Phase II-D benchmark exercises. In Phase II-D different irradiation histories were analysed using different control rod insertion histories during depletion as well as irradiation histories without control rod insertion. But in all the histories analysed a uniform distribution of the burn-up and hence a uniform distribution of the isotopic composition were assumed; and in all the histories including any usage of control rods full insertion of the control rods was assumed. In Phase II-C the impact of the asymmetry of axial burn-up profiles on the reactivity and the end effect of

  11. A proposal of a benchmark for calculation of the power distribution next to the absorber

    International Nuclear Information System (INIS)

    Temesvari, E.; Hordosy, G.; Maraczy, Cs.; Hegyi, Gy.; Kereszturi, A.

    1999-01-01

    A new benchmark problem was formulated to consider the characteristics of the VVER-440 fuel assembly with enrichment zoning, i.e. to study the space dependence of the power distribution near a control assembly. A quite detailed geometry and the material composition of the fuel and control assemblies were modelled with the help of MCNP calculations at AEKI. The results of the MCNP calculations were built into the KARATE code system as new albedo matrices. A comparison of the KARATE calculation results and the MCNP calculations for this benchmark is presented. (Authors)

  12. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained using the codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k_eff, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.).

  13. VERA Pin and Fuel Assembly Depletion Benchmark Calculations by McCARD and DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Monte Carlo (MC) codes have been developed and used to simulate neutron transport since the MC method was devised in the Manhattan Project. Solving the neutron transport problem with the MC method is simple and straightforward to understand. Because MC calculations make few essential approximations in the six-dimensional phase space of a neutron (location, energy, and direction), highly accurate solutions can be obtained. In this work, the VERA pin and fuel assembly (FA) depletion benchmark calculations are performed to examine the depletion capability of the newly generated DeCART multi-group cross section library. To obtain the reference solutions, MC depletion calculations are conducted using McCARD. Moreover, to scrutinize the effect of stochastic uncertainty propagation, uncertainty propagation analyses are performed using a sensitivity and uncertainty (S/U) analysis method and a stochastic sampling (S.S.) method. It is still expensive and challenging to perform a depletion analysis with an MC code. Nevertheless, many studies of MC depletion analysis have been conducted to utilize the benefits of the MC method. In this study, McCARD MC and DeCART MOC transport calculations are performed for the VERA pin and FA depletion benchmarks. The DeCART depletion calculations are conducted to examine the depletion capability of the newly generated multi-group cross section library. The DeCART depletion results show excellent agreement with the McCARD reference solutions. From the McCARD results, it is observed that the MC depletion results depend on how the burnup interval is split. To quantify the effect of stochastic uncertainty propagation at 40 DTS, the uncertainty propagation analyses are performed using the S/U and S.S. methods.

  14. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  15. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  16. Advances in Reactor physics, mathematics and computation. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    These proceedings of the international topical meeting on advances in reactor physics, mathematics and computation, volume 3, are divided into the following sessions: poster sessions on benchmarks and codes (35 conferences); review of the status of assembly spectrum codes (9 conferences); numerical methods in fluid mechanics and thermal hydraulics (16 conferences); stochastic transport and methods (7 conferences).

  17. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where this was not possible, examples from critical experiments have been used, but the measurement methods could also be applied to subcritical experiments

  18. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
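
    To illustrate the idea of computing performance metrics with SPARQL over RDF-encoded annotations, the sketch below runs a small query with the rdflib Python library. The ex: schema, the annotation instances and the single true-positive/precision metric are hypothetical stand-ins for illustration, not the actual ontology or queries of the described infrastructure.

```python
from rdflib import Graph

# Tiny RDF corpus with gold-standard and system annotations.
# The ex: schema below is a hypothetical stand-in, not the paper's ontology.
data = """
@prefix ex: <http://example.org/mutation#> .
ex:gold1 a ex:GoldAnnotation ;   ex:doc "PMID1" ; ex:mutation "V600E" .
ex:gold2 a ex:GoldAnnotation ;   ex:doc "PMID2" ; ex:mutation "R175H" .
ex:sys1  a ex:SystemAnnotation ; ex:doc "PMID1" ; ex:mutation "V600E" .
ex:sys2  a ex:SystemAnnotation ; ex:doc "PMID2" ; ex:mutation "G12D" .
"""

g = Graph()
g.parse(data=data, format="turtle")

# True positives: system annotations that match a gold annotation on both
# the document and the mutation string.
TP_QUERY = """
PREFIX ex: <http://example.org/mutation#>
SELECT (COUNT(DISTINCT ?sys) AS ?tp) WHERE {
  ?sys  a ex:SystemAnnotation ; ex:doc ?d ; ex:mutation ?m .
  ?gold a ex:GoldAnnotation   ; ex:doc ?d ; ex:mutation ?m .
}
"""
ALL_SYS_QUERY = """
PREFIX ex: <http://example.org/mutation#>
SELECT (COUNT(DISTINCT ?sys) AS ?n) WHERE { ?sys a ex:SystemAnnotation . }
"""

tp = int(next(iter(g.query(TP_QUERY)))[0])
n_sys = int(next(iter(g.query(ALL_SYS_QUERY)))[0])
print("precision =", tp / n_sys)   # 1 correct out of 2 system annotations -> 0.5
```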

  19. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  20. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
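
    To give a concrete flavor of the spinodal decomposition physics such benchmark problems exercise, the sketch below advances a minimal Cahn-Hilliard model with an explicit Euler step on a periodic grid. The free energy, mobility, grid and time-step values are illustrative assumptions, not the CHiMaD/NIST benchmark specifications.

```python
import numpy as np

def laplacian(f, dx):
    """Periodic 5-point Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def cahn_hilliard_step(c, dt, dx, kappa=1.0, M=1.0):
    """One explicit Euler step of dc/dt = M * lap( f'(c) - kappa*lap(c) ),
    with the double-well free energy f(c) = (c^2 - 1)^2 / 4."""
    mu = c**3 - c - kappa * laplacian(c, dx)   # chemical potential
    return c + dt * M * laplacian(mu, dx)

rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal((64, 64))       # small fluctuations about c = 0
for _ in range(2000):
    c = cahn_hilliard_step(c, dt=1e-3, dx=1.0)
print("composition range after coarsening:", float(c.min()), float(c.max()))
```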

  1. Spectrum integrated (n,He) cross section comparisons and least squares analyses for 6Li and 10B in benchmark fields

    International Nuclear Information System (INIS)

    Schenter, R.E.; Oliver, B.M.; Farrar, H. IV.

    1986-06-01

    Spectrum integrated cross sections for 6 Li and 10 B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma-Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10 B and 6 Li cross sections for energies above 50 keV
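
    The adjustment performed with FERRET is a generalized least-squares update of prior cross sections against integral measurements. The sketch below shows only the mechanics with toy numbers; the prior values, covariances and sensitivity matrix are invented for the example and are not the ENDF/B-V data or the actual FERRET input.

```python
import numpy as np

def gls_adjust(x0, Cx, S, m, Cm):
    """Generalized least-squares adjustment:
    x' = x0 + Cx S^T (S Cx S^T + Cm)^-1 (m - S x0),
    Cx' = Cx - Cx S^T (S Cx S^T + Cm)^-1 S Cx."""
    K = Cx @ S.T @ np.linalg.inv(S @ Cx @ S.T + Cm)   # gain matrix
    return x0 + K @ (m - S @ x0), Cx - K @ S @ Cx

# Two group-averaged cross sections and two integral responses (toy values).
x0 = np.array([1.00, 0.50])                 # prior cross sections
Cx = np.diag([0.05**2, 0.08**2])            # prior covariance
S  = np.array([[0.7, 0.3],
               [0.2, 0.8]])                 # sensitivities of responses to x
m  = np.array([0.88, 0.58])                 # measured benchmark responses
Cm = np.diag([0.02**2, 0.02**2])            # measurement covariance

x_adj, Cx_adj = gls_adjust(x0, Cx, S, m, Cm)
print("adjusted cross sections:", x_adj)
print("adjusted std devs:", np.sqrt(np.diag(Cx_adj)))
```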

  2. ZPR-3 Assembly 6F: A spherical assembly of highly enriched uranium, depleted uranium, aluminum and steel with an average 235U enrichment of 47 atom %.

    Energy Technology Data Exchange (ETDEWEB)

    Lell, R. M.; McKnight, R. D.; Schaefer, R. W.; Nuclear Engineering Division

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235 U or 239 Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 6 consisted of six phases, A through F. In each phase a critical configuration was constructed to simulate a very simple shape such as a slab, cylinder or sphere that could be analyzed with the limited analytical tools available in the 1950s. In each case the configuration consisted of a core region of metal plates surrounded by a thick depleted uranium metal reflector. The average compositions of the core configurations were essentially identical in phases A - F. ZPR-3

  3. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high-temperature gas-cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  4. Further development of the Dynamic Control Assemblies Worth Measurement Method for Advanced Reactivity Computers

    International Nuclear Information System (INIS)

    Petenyi, V.; Strmensky, C.; Jagrik, J.; Minarcin, M.; Sarvaic, I.

    2005-01-01

    The dynamic control assemblies worth measurement technique is a quick method for validation of predicted control assembly worths. The dynamic control assemblies worth measurement utilizes space-time corrections for the measured out-of-core ionization chamber readings, calculated by the DYN3D computer code. The space-time correction arising from the prompt neutron density redistribution in the measured ionization chamber reading can be directly applied in the advanced reactivity computer. The second correction, concerning the difference in the spatial distribution of delayed neutrons, can be calculated by simulating the measurement procedure with the dynamic version of the DYN3D code. In the paper, some results of dynamic control assemblies worth measurements applied at NPP Mochovce are presented (Authors)
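
    The abstract does not spell out the reactivity algorithm itself, but reactivity computers of this kind typically evaluate inverse point kinetics from the (space-time-corrected) detector signal. The sketch below is a generic six-delayed-group inverse kinetics evaluation offered only as an illustration of the computation the corrections feed into; the kinetics parameters and the synthetic detector trace are illustrative assumptions, not WWER-440 or DYN3D values.

```python
import numpy as np

# Illustrative six-group delayed neutron data (not plant-specific).
beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam_i  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay constants, 1/s
beta   = beta_i.sum()
Lam    = 2.0e-5                                                  # prompt generation time, s

def inverse_kinetics(times, n):
    """Reconstruct reactivity (in dollars) from a normalized detector trace n(t)."""
    C = beta_i / (Lam * lam_i) * n[0]        # equilibrium precursors at t = 0
    rho = np.zeros_like(n)
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        dndt = (n[k] - n[k - 1]) / dt
        # advance precursor concentrations with the measured flux (implicit Euler)
        C = (C + dt * beta_i / Lam * n[k]) / (1.0 + dt * lam_i)
        rho[k] = beta + Lam / n[k] * dndt - Lam / n[k] * np.dot(lam_i, C)
    return rho / beta                        # express in dollars

t = np.linspace(0.0, 10.0, 1001)
n = np.exp(-0.05 * t)                        # a slowly decaying detector signal
print("reactivity at end of trace: %.3f $" % inverse_kinetics(t, n)[-1])
```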

  5. An Easily Assembled Laboratory Exercise in Computed Tomography

    Science.gov (United States)

    Mylott, Elliot; Klepetka, Ryan; Dunlap, Justin C.; Widenhorn, Ralf

    2011-01-01

    In this paper, we present a laboratory activity in computed tomography (CT) primarily composed of a photogate and a rotary motion sensor that can be assembled quickly and partially automates data collection and analysis. We use an enclosure made with a light filter that is largely opaque in the visible spectrum but mostly transparent to the near…

  6. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry / functional benchmarking, process / generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  7. Structural characterisation of medically relevant protein assemblies by integrating mass spectrometry with computational modelling.

    Science.gov (United States)

    Politis, Argyris; Schmidt, Carla

    2018-03-20

    Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enable structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  8. Parallel processing of neutron transport in fuel assembly calculation

    International Nuclear Information System (INIS)

    Song, Jae Seung

    1992-02-01

    Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly or a quarter of one corresponds to a unit mesh in the current nodal calculation. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm to be used in a parallel processor for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherent parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple because of the many coupling parameters in the calculation and the data communications among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communications. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab. at KAIST. Efficiencies of 54.3 % and 78.0 % were obtained in the fuel assembly calculation and depletion calculation, respectively, which leads to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for the group constant generation can be easily reduced by parallel processing on a parallel computer with small CPUs
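
    The "inherent parallel characteristics" of the depletion step come from the fact that, once the flux is fixed, each region's linear depletion equations can be solved independently. The sketch below is not the Quadputer implementation from the study; it is a minimal illustration on a made-up three-nuclide chain, depleted region by region with a Python process pool, and it assumes SciPy's matrix exponential is available.

```python
import numpy as np
from multiprocessing import Pool
from scipy.linalg import expm

# Toy three-nuclide linear chain N1 -> N2 -> N3: dN/dt = A N (rates in 1/s, made up).
A = np.array([[-1.0e-4,  0.0,    0.0],
              [ 1.0e-4, -5.0e-5, 0.0],
              [ 0.0,      5.0e-5, 0.0]])

def deplete_region(args):
    """Deplete one fuel region independently of the others (the parallel part)."""
    n0, dt = args
    return expm(A * dt) @ n0

if __name__ == "__main__":
    # Eight regions with slightly different initial inventories, one time step each.
    regions = [(np.array([1.0 + 0.1 * r, 0.0, 0.0]), 2.0e4) for r in range(8)]
    with Pool(4) as pool:                       # 4 workers, one region per task
        end_of_step = pool.map(deplete_region, regions)
    print("region 0 nuclide densities:", end_of_step[0])
```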

  9. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang

    2018-04-11

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. As with all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multiphysics benchmark measurements that are necessary to validate high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  10. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data that has been obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control for sequencing and assembly, and several protocols have been developed to detect contaminations. Visualizations of genomic data have been generated in various formats, in addition to alignment, homology detection and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in an improvement of the N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulent genes at single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.

  11. Links among available integral benchmarks and differential date evaluations, computational biases and uncertainties, and nuclear criticality safety biases on potential MOX production throughput

    International Nuclear Information System (INIS)

    Goluoglu, S.; Hopper, C.M.

    2004-01-01

    Through the use of Oak Ridge National Laboratory's recently developed and applied sensitivity and uncertainty computational analysis techniques, this paper presents the relevance and importance of available and needed integral benchmarks and differential data evaluations impacting potential MOX production throughput determinations relative to low-moderated MOX fuel blending operations. The relevance and importance in the availability of or need for critical experiment benchmarks and data evaluations are presented in terms of computational biases as influenced by computational and experimental sensitivities and uncertainties relative to selected MOX production powder blending processes. Recent developments for estimating the safe margins of subcriticality for assuring nuclear criticality safety for process approval are presented. In addition, the impact of the safe margins (due to computational biases and uncertainties) on potential MOX production throughput will also be presented. (author)

  12. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    International Nuclear Information System (INIS)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for 238 U(n,f), 238 U(n,γ), 239 Pu(n,f), and 239 Pu(ν). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined

  13. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  14. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  15. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  16. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
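
    As a cartoon of the final fusion step described above, the sketch below combines a spatial and a temporal saliency map with uncertainty weighting. The contrast measures, the uncertainty values and the synthetic frames are placeholders; the actual model derives its features from DCT coefficients and its weights from the Gestalt-based laws of proximity, continuity and common fate.

```python
import numpy as np

def normalize(m):
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse_saliency(s_spatial, s_temporal, u_spatial, u_temporal):
    """Uncertainty-weighted fusion: the map with lower uncertainty gets a higher weight."""
    w_s, w_t = 1.0 / (u_spatial + 1e-6), 1.0 / (u_temporal + 1e-6)
    return normalize((w_s * s_spatial + w_t * s_temporal) / (w_s + w_t))

rng = np.random.default_rng(1)
frame_prev = rng.random((48, 64))
frame_curr = frame_prev.copy()
frame_curr[20:30, 30:45] += 0.5                                  # a "moving" bright patch

s_spatial  = normalize(np.abs(frame_curr - frame_curr.mean()))   # crude feature contrast
s_temporal = normalize(np.abs(frame_curr - frame_prev))          # crude motion contrast
fused = fuse_saliency(s_spatial, s_temporal, u_spatial=0.4, u_temporal=0.1)
print("peak fused saliency at:", np.unravel_index(fused.argmax(), fused.shape))
```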

  17. Selection and benchmarking of computer codes for research reactor core conversions

    International Nuclear Information System (INIS)

    Yilmaz, E.; Jones, B.G.

    1983-01-01

    A group of computer codes has been selected and obtained from the Nuclear Energy Agency (NEA) Data Bank in France for the core conversion study of highly enriched research reactors. ANISN, WIMSD-4, MC 2 , COBRA-3M, FEVER, THERMOS, GAM-2, CINDER and EXTERMINATOR were selected for the study. For the final work THERMOS, GAM-2, CINDER and EXTERMINATOR have been selected and used. A one-dimensional thermal-hydraulics code has also been used to calculate temperature distributions in the core. THERMOS and CINDER have been modified to serve this purpose. Minor modifications have been made to GAM-2 and EXTERMINATOR to improve their utilization. All of the codes have been debugged on both CDC and IBM computers at the University of Illinois. The IAEA 10 MW benchmark problem has been solved. The results of this work have been compared with the IAEA contributors' results. Agreement is very good for highly enriched fuel (HEU). Deviations from the IAEA contributors' mean value for low-enriched fuel (LEU) exist, but they are generally small

  18. Burnable absorber-integrated Guide Thimble (BigT) - 1. Design concepts and neutronic characterization on the fuel assembly benchmarks

    International Nuclear Information System (INIS)

    Yahya, Mohd-Syukri; Yu, Hwanyeal; Kim, Yonghee

    2016-01-01

    This paper presents the conceptual designs of a new burnable absorber (BA) for the pressurized water reactor (PWR), named 'Burnable absorber-integrated Guide Thimble' (BigT). The BigT integrates BA materials into the standard guide thimble of a PWR fuel assembly. Neutronic sensitivities and practical design considerations of the BigT concept are highlighted in the first half of the paper. Specifically, the BigT concepts are characterized in view of their BA material and spatial self-shielding variations. In addition, the BigT replaceability requirement, bottom-end design specifications and thermal-hydraulic considerations are also deliberated. Much of the second half of the paper is devoted to demonstrating the practical viability of the BigT absorbers via comparative evaluations against conventional BA technologies in representative 17x17 and 16x16 fuel assembly lattices. For the 17x17 lattice evaluations, all three BigT variants are benchmarked against Westinghouse's existing BA technologies, while in the 16x16 assembly analyses the BigT designs are compared against the traditional integral gadolinia-urania rod design. All analyses clearly show that the BigT absorbers perform as well as the commercial BA technologies in terms of reactivity and power peaking management. In addition, it has been shown that sufficiently high control rod worth can be obtained with the BigT absorbers in place. All neutronic simulations were completed using the Monte Carlo Serpent code with the ENDF/B-VII.0 library. (author)

  19. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)]

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
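
    The kernel HPCG ranks systems on can be illustrated with a symmetric-Gauss-Seidel-preconditioned conjugate gradient iteration. The dense-matrix sketch below omits HPCG's distributed additive Schwarz decomposition, multigrid structure, 27-point stencil and reporting rules; the 1-D Poisson matrix and all names are only stand-ins for the example.

```python
import numpy as np

def sgs_preconditioner(A):
    """Symmetric Gauss-Seidel preconditioner: M^-1 r = (D+U)^-1 D (D+L)^-1 r."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    def apply(r):
        y = np.linalg.solve(D + L, r)          # forward sweep
        return np.linalg.solve(D + U, D @ y)   # backward sweep
    return apply

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson matrix as a stand-in for the HPCG stencil problem.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, sgs_preconditioner(A))
print("residual norm:", np.linalg.norm(b - A @ x))
```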

  20. Analytical benchmarks for nuclear engineering applications. Case studies in neutron transport theory

    International Nuclear Information System (INIS)

    2008-01-01

    The developers of computer codes involving neutron transport theory for nuclear engineering applications seldom apply analytical benchmarking strategies to ensure the quality of their programs. A major reason for this is the lack of analytical benchmarks and their documentation in the literature. The few such benchmarks that do exist are difficult to locate, as they are scattered throughout the neutron transport and radiative transfer literature. The motivation for this benchmark compendium, therefore, is to gather several analytical benchmarks appropriate for nuclear engineering applications under one cover. We consider the following three subject areas: neutron slowing down and thermalization without spatial dependence, one-dimensional neutron transport in infinite and finite media, and multidimensional neutron transport in a half-space and an infinite medium. Each benchmark is briefly described, followed by a detailed derivation of the analytical solution representation. Finally, a demonstration of the evaluation of the solution representation includes qualified numerical benchmark results. All accompanying computer codes are suitable for the PC computational environment and can serve as educational tools for courses in nuclear engineering. While this benchmark compilation does not contain all possible benchmarks, by any means, it does include some of the most prominent ones and should serve as a valuable reference. (author)

  1. Spectrum integrated (n,He) cross section comparison and least squares analysis for 6Li and 10B in benchmark fields

    International Nuclear Information System (INIS)

    Schenter, R.E.; Oliver, B.M.; Farrar, H. IV

    1987-01-01

    Spectrum integrated cross sections for 6 Li and 10 B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate-Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10 B cross section for energies above 50 keV

  2. A computer-oriented system for assembling and displaying land management information

    Science.gov (United States)

    Elliot L. Amidon

    1964-01-01

    Maps contain information basic to land management planning. By transforming conventional map symbols into numbers which are punched into cards, the land manager can have a computer assemble and display information required for a specific job. He can let a computer select information from several maps, combine it with such nonmap data as treatment cost or benefit per...

  3. Identifying wrong assemblies in de novo short read primary ...

    Indian Academy of Sciences (India)

    2016-08-05

    Most of these assemblies are done using some de novo short read assemblers and other related approaches. ... benchmarking projects like Assemblathon 1, Assemblathon ... from a large insert library (at least 1000 bases).

  4. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  5. Automated ensemble assembly and validation of microbial genomes

    Science.gov (United States)

    2014-01-01

    Background The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often impossible or unfeasible. Results To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Conclusions Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to
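
    iMetAMOS scores candidate assemblies on multiple validation metrics; the sketch below uses a single metric, N50, as a stand-in to show how an ensemble pipeline might rank and select among assemblers. The contig length lists and assembler names are hypothetical.

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L cover
    at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length
    return 0

def pick_best_assembly(assemblies):
    """Select the assembly with the highest N50 (a single-metric stand-in
    for the multi-metric validation an ensemble pipeline performs)."""
    return max(assemblies.items(), key=lambda kv: n50(kv[1]))[0]

# Hypothetical contig length lists produced by three assemblers.
assemblies = {
    "assembler_A": [120_000, 80_000, 15_000, 5_000],
    "assembler_B": [200_000, 10_000, 9_000],
    "assembler_C": [60_000, 60_000, 60_000, 20_000],
}
print({name: n50(lengths) for name, lengths in assemblies.items()})
print("selected:", pick_best_assembly(assemblies))
```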

  6. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    Energy Technology Data Exchange (ETDEWEB)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for 238 U(n,f), 238 U(n,γ), 239 Pu(n,f), and 239 Pu(ν). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined.

  7. Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software.

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C

    2017-11-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

  8. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we ...

  9. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    Franco, C.; Brochard, J.; Ignaccolo, S.; Eripret, C.

    1992-01-01

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on one hand, the global approach based on the concept of crack driving force J and, on the other hand, a local approach of ductile fracture. In this approach the crack initiation and growth are modeled by the nucleation, growth and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA, developed by EDF; the CEA code CASTEM 2000; and the FRAMATOME code SYSTUS. The paper is split into three parts. First, the authors present the experimental procedure for high temperature toughness testing of two CT specimens taken from a welded pipe, characteristic of pressurized water reactor primary piping. Secondly, considerations are outlined about the finite element analysis and the application procedure. A detailed description is given of the boundary and loading conditions, the mesh characteristics, the numerical scheme involved and the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load
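
    The Rice and Tracey relationship mentioned above gives the void growth rate as a function of stress triaxiality and equivalent plastic strain. A minimal sketch of how it can be integrated along a loading path is shown below; the triaxiality value and strain history are placeholders, not data from the benchmark.

    ```python
    # Illustrative sketch of the Rice-Tracey void-growth relation:
    # dR/R = 0.283 * exp(1.5 * sigma_m / sigma_eq) * d(eps_p),
    # integrated along a plastic-strain history at fixed triaxiality.
    import math

    def void_growth_ratio(triaxiality, eps_p_history):
        """Return R/R0 after accumulating the given equivalent plastic strain increments."""
        ln_ratio = 0.0
        for d_eps in eps_p_history:
            ln_ratio += 0.283 * math.exp(1.5 * triaxiality) * d_eps
        return math.exp(ln_ratio)

    increments = [0.002] * 100   # placeholder: 20 % equivalent plastic strain in small steps
    print(void_growth_ratio(triaxiality=1.2, eps_p_history=increments))
    ```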

  10. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single cell and single block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distributions for selected blocks. Also included are the solutions for the single cell and single block problems.

  11. Benchmark of the CASMO-3G/MICROBURN-B codes for Commonwealth Edison boiling water reactors

    International Nuclear Information System (INIS)

    Wheeler, J.K.; Pallotta, A.S.

    1992-01-01

    The Commonwealth Edison Company has performed an extensive benchmark against measured data from three boiling water reactors using the Studsvik lattice physics code CASMO-3G and the Siemens Nuclear Power three-dimensional simulator code MICROBURN-B. The measured data of interest for this benchmark are the hot and cold reactivity, and the core power distributions as measured by the traversing incore probe system and by gamma scan data for fuel pins and assemblies. A total of nineteen unit-cycles were evaluated. The database included fuel product lines manufactured by General Electric and Siemens Nuclear Power, with assemblies containing 7 x 7 to 9 x 9 pin configurations, several water rod designs, various enrichments and gadolinia loadings, and axially varying lattice designs throughout the enriched portion of the bundle. The results of the benchmark present evidence that the CASMO-3G/MICROBURN-B code package can adequately model the range of fuel and core types in the benchmark, and that the codes are acceptable for performing neutronic analyses of Commonwealth Edison's boiling water reactors.
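
    A common way to condense such comparisons of calculated and measured power distributions is a root-mean-square difference over the instrumented locations. The sketch below illustrates that figure of merit only; it is not taken from the cited benchmark, and the nodal powers shown are placeholders.

    ```python
    # Illustrative sketch: RMS of the relative nodal power differences between
    # a core simulator prediction and measured (e.g. TIP or gamma-scan) data.
    import numpy as np

    def rms_difference(measured, calculated):
        """RMS of relative nodal power differences, in percent."""
        measured = np.asarray(measured, dtype=float)
        calculated = np.asarray(calculated, dtype=float)
        rel = (calculated - measured) / measured
        return 100.0 * np.sqrt(np.mean(rel ** 2))

    # placeholder nodal powers, normalised to a core average of 1.0
    measured   = [0.92, 1.05, 1.13, 0.98, 0.87]
    calculated = [0.94, 1.02, 1.15, 0.97, 0.88]
    print(f"RMS difference: {rms_difference(measured, calculated):.2f} %")
    ```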

  12. Analysis of neutronics benchmarks for the utilization of mixed oxide fuel in light water reactor using DRAGON code

    International Nuclear Information System (INIS)

    Nithyadevi, Rajan; Thilagam, L.; Karthikeyan, R.; Pal, Usha

    2016-01-01

    Highlights: • Use of the advanced computational code DRAGON-5 with the advanced self-shielding model USS. • Testing the capability of the DRAGON-5 code for the analysis of light water reactor systems. • A wide variety of fuels (LEU, MOX and spent fuel) have been analyzed. • Parameters such as k ∞ , one, few and multi-group macroscopic cross-sections and fluxes were calculated. • Suitability of the deterministic methodology employed in the DRAGON-5 code is demonstrated for LWR. - Abstract: Advances in reactor physics have led to the development of new computational technologies and upgraded cross-section libraries so as to produce an accurate approximation to the true solution of the problem. Thus it is necessary to revisit the benchmark problems with the advanced computational code system and upgraded cross-section libraries to see how far they are in agreement with the earlier reported values. The present study is one such analysis with the DRAGON code, employing advanced self-shielding models like USS and the 172-group ‘JEFF3.1’ cross-section library in DRAGLIB format. Although the DRAGON code has already demonstrated its capability for heavy water moderated systems, it is now tested for light water reactor (LWR) and fast reactor systems. As a part of the validation of DRAGON for LWR, a VVER computational benchmark titled “Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel-Volume 3”, submitted by the Russian Federation, has been taken up. Presently, pincell and assembly calculations are carried out considering variation in fuel temperature (both fresh and spent fuel), moderator temperature and boron content in the moderator. Various parameters such as the infinite neutron multiplication factor (k ∞ ), one-group integrated flux, few-group homogenized cross-sections (absorption, nu-fission) and reaction rates (absorption, nu-fission) of individual isotopic nuclides are calculated for different reactor states. Comparisons of results are made with the reported Monte Carlo

  13. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

    The cross sections of 232 Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The k eff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using the MCNP4C code for benchmark assemblies, and comparisons with experimental results are given. (author)

  14. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected into a single compilation. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  15. Thermal reactor benchmark testing of 69 group library

    International Nuclear Information System (INIS)

    Liu Guisheng; Wang Yaoqing; Liu Ping; Zhang Baocheng

    1994-01-01

    Using the NSLINK code system, AMPX master libraries in the WIMS 69-group structure were generated from nuclides in the four newest evaluated nuclear data libraries. Some integral quantities of 10 thermal reactor benchmark assemblies recommended by the U.S. CSEWG were calculated using the rectified PASC-1 code system and compared with foreign results; the authors' results are in good agreement with others. 69-group libraries of the evaluated data bases in the TPFAP interface file were generated with the NJOY code system. The k ∞ values of 6 cell lattice assemblies were calculated with the code CBM. The calculated results are analysed and compared.

  16. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    Directory of Open Access Journals (Sweden)

    Mariusz Butkiewicz

    2013-01-01

    Full Text Available With the rapidly increasing availability of High-Throughput Screening (HTS data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR models are built using Artificial Neural Networks (ANNs, Support Vector Machines (SVMs, Decision Trees (DTs, and Kohonen networks (KNs. Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
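
    The enrichment figures quoted above compare the hit rate among the top-ranked compounds, taken down to a fixed true-positive-rate cutoff, with the hit rate of the whole library. The sketch below illustrates one common way to compute such an enrichment; the scores and labels are synthetic placeholders, and the exact definition used by BCL::ChemInfo may differ.

    ```python
    # Illustrative sketch: enrichment at a TPR cutoff. Rank compounds by predicted
    # score, walk down the list until 25 % of the true actives are recovered, and
    # compare the hit rate in that subset with the hit rate of the whole library.
    import numpy as np

    def enrichment_at_tpr(scores, labels, tpr_cutoff=0.25):
        order = np.argsort(scores)[::-1]              # best-scored compounds first
        labels_sorted = np.asarray(labels)[order]
        n_actives = labels_sorted.sum()
        target = tpr_cutoff * n_actives
        found, selected = 0, 0
        for y in labels_sorted:
            selected += 1
            found += y
            if found >= target:
                break
        hit_rate_subset = found / selected
        hit_rate_library = n_actives / len(labels_sorted)
        return hit_rate_subset / hit_rate_library

    rng = np.random.default_rng(0)
    labels = rng.random(10_000) < 0.01                # placeholder: ~1 % actives
    scores = rng.random(10_000) + 0.5 * labels        # actives score slightly higher
    print(f"enrichment at 25% TPR: {enrichment_at_tpr(scores, labels):.1f}")
    ```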

  17. Benchmark test of evaluated nuclear data files for fast reactor neutronics application

    International Nuclear Information System (INIS)

    Chiba, Go; Hazama, Taira; Iwai, Takehiko; Numata, Kazuyuki

    2007-07-01

    A benchmark test of the latest evaluated nuclear data files, JENDL-3.3, JEFF-3.1 and ENDF/B-VII.0, has been carried out for fast reactor neutronics applications. For this benchmark test, experimental data obtained at fast critical assemblies and fast power reactors are utilized. In addition to comparing numerical solutions with the experimental data, we have extracted, by means of sensitivity analyses, several cross sections for which differences between the three nuclear data files significantly affect the numerical solutions. This benchmark test concludes that ENDF/B-VII.0 predicts the neutronics characteristics of fast neutron systems better than the other nuclear data files. (author)

  18. An analysis of the CSNI/GREST core concrete interaction chemical thermodynamic benchmark exercise using the MPEC2 computer code

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Kondo, Yasuhiko; Uchida, Masaaki; Soda, Kunihisa

    1989-01-01

    Fission product (FP) release during a core concrete interaction (CCI) is an important factor in the uncertainty associated with source term estimation for an LWR severe accident. An analysis was made of the CCI Chemical Thermodynamic Benchmark Exercise organized by the OECD/NEA/CSNI Group of Experts on Source Terms (GREST) for investigating the uncertainty in thermodynamic modeling of CCI. The benchmark exercise was to calculate the equilibrium FP vapor pressure for a given system of temperature, pressure, and debris composition. The benchmark consisted of two parts, A and B. Part A was a simplified problem intended to test the numerical techniques. In part B, the participants were requested to use their own best-estimate thermodynamic data base to examine the variability of the results due to differences in the thermodynamic data base. JAERI participated in this benchmark exercise with use of the MPEC2 code. The chemical thermodynamic data base needed for the analysis of Part B was taken from the VENESA code. This report describes the computer code used, inputs to the code, and results from the calculation by JAERI. The present calculation indicates that the FP vapor pressure depends strongly on temperature and oxygen potential in the core debris and that the pattern of dependency may be different for different FP elements. (author)

  19. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulations of sheet metal forming processes have been a very challenging topic in industry. There are many computer codes and modeling techniques existing today. However, there are many unknowns affecting the prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracy of each parameter of interest are discussed in this report.

  20. Tolerance Verification of an Industrial Assembly using Computed Tomography

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; De Chiffre, Leonardo; Regi, Francesco

    2016-01-01

    This paper reports on results of tolerance verification of a multi-material assembly by using Computed Tomography (CT). The workpiece comprises three parts which are made out of different materials. Five different measurands were inspected. The calculation of measurement uncertainties was attempted...... by way of a ball plate. Comparison between CT and results from a traditional coordinate measuring machine was also involved in this study....

  1. New calculations for critical assemblies using MCNP4B

    International Nuclear Information System (INIS)

    Adams, A.A.; Frankle, S.C.; Little, R.C.

    1997-07-01

    A suite of 41 criticality benchmarks has been modeled using MCNP™ (version 4B). Most of the assembly specifications were obtained from the Cross Section Evaluation Working Group (CSEWG) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) compendiums of experimental benchmarks. A few assembly specifications were obtained from experimental papers. The suite contains thermal and fast assemblies, bare and reflected assemblies, and emphasizes 233 U, 235 U, 238 U, and 239 Pu. The values of k eff for each assembly in the suite were calculated using MCNP libraries derived primarily from release 2 of ENDF/B-V and release 2 of ENDF/B-VI. The results show that the new ENDF/B-VI.2 evaluations for H, O, N, B, 235 U, 238 U, and 239 Pu can have a significant impact on the values of k eff . In addition to the integral quantity k eff , several additional experimental measurements were performed and documented. These experimental measurements include central fission and reaction-rate ratios for various isotopes, and neutron leakage and flux spectra. They provide more detailed information about the accuracy of the nuclear data than can k eff . Comparison calculations were performed using both ENDF/B-V.2 and ENDF/B-VI.2-based data libraries. The purpose of this paper is to compare the results of these additional calculations with experimental data, and to use these results to assess the quality of the nuclear data
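
    Benchmark suites of this kind are usually summarized through calculated-to-experimental (C/E) ratios of k eff for each assembly and data library. The sketch below shows that bookkeeping only; the assembly names are well-known fast benchmarks, but the numbers are placeholders rather than results from the cited paper.

    ```python
    # Illustrative sketch: tabulating C/E ratios of k_eff for two library sets
    # against experimental benchmark values. All values are placeholders.
    benchmarks = {
        #  name          k_exp   k_calc(B-V.2)  k_calc(B-VI.2)
        "Godiva":     (1.0000, 0.9966, 0.9972),
        "Jezebel":    (1.0000, 0.9987, 0.9995),
        "Flattop-25": (1.0000, 1.0021, 1.0008),
    }

    for name, (k_exp, k_b5, k_b6) in benchmarks.items():
        print(f"{name:10s}  C/E(B-V.2) = {k_b5 / k_exp:.4f}  "
              f"C/E(B-VI.2) = {k_b6 / k_exp:.4f}")
    ```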

  2. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria

    2009-01-01

    Despite the rapidly increasing number of sequenced and re-sequenced genomes, many issues regarding the computational assembly of large-scale sequencing data have remained unresolved. Computational assembly is crucial in large genome projects as well as for the evolving high-throughput technologies and...... in genomic DNA, highly expressed genes and alternative transcripts in EST sequences. We summarize existing comparisons of different assemblers and provide detailed descriptions and directions for download of assembly programs at: http://genome.ku.dk/resources/assembly/methods.html....

  3. Developing and modeling of the 'Laguna Verde' BWR CRDA benchmark

    International Nuclear Information System (INIS)

    Solis-Rodarte, J.; Fu, H.; Ivanov, K.N.; Matsui, Y.; Hotta, A.

    2002-01-01

    Reactivity initiated accidents (RIA) and design basis transients are among the most important aspects related to nuclear power reactor safety. These events are re-evaluated whenever core alterations (modifications) are made as part of the nuclear safety analysis performed for a new design. These modifications usually include, but are not limited to, power upgrades, longer cycles, new fuel assembly and control rod designs, etc. The results obtained are compared with pre-established bounding analysis values to see if the new core design fulfills the requirements of the safety constraints imposed on the design. The control rod drop accident (CRDA) is the design basis transient for the reactivity events of BWR technology. The CRDA is a very localized event depending on the insertion position of the dropped control rod and the fuel assemblies surrounding it. A numerical benchmark was developed based on the CRDA RIA design basis accident to further assess the performance of coupled 3D neutron kinetics/thermal-hydraulics codes. The CRDA in a BWR is a mostly neutronically driven event. This benchmark is based on a real operating nuclear power plant - unit 1 of the Laguna Verde (LV1) nuclear power plant (NPP). The definition of the benchmark is presented briefly together with the benchmark specifications. Some of the cross-sections were modified in order to make the maximum control rod worth greater than one dollar. The transient is initiated at steady state by dropping the control rod with maximum worth at full speed. The 'Laguna Verde' (LV1) BWR CRDA transient benchmark is calculated using two coupled codes: TRAC-BF1/NEM and TRAC-BF1/ENTREE. Neutron kinetics and thermal hydraulics models were developed for both codes. Comparison of the obtained results is presented along with some discussion of the sensitivity of the results to some modeling assumptions

  4. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking sparsity-regularized (SR) reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  5. Selection and benchmarking of computer codes for research reactor core conversions

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Emin [School of Aerospace, Mechanical and Nuclear Engineering, University of Oklahoma, Norman, OK (United States); Jones, Barclay G [Nuclear Engineering Program, University of IL at Urbana-Champaign, Urbana, IL (United States)

    1983-09-01

    A group of computer codes has been selected and obtained from the Nuclear Energy Agency (NEA) Data Bank in France for the core conversion study of highly enriched research reactors. ANISN, WIMSD-4, MC 2, COBRA-3M, FEVER, THERMOS, GAM-2, CINDER and EXTERMINATOR were selected for the study. For the final work THERMOS, GAM-2, CINDER and EXTERMINATOR have been selected and used. A one-dimensional thermal hydraulics code has also been used to calculate temperature distributions in the core. THERMOS and CINDER have been modified to serve the purpose. Minor modifications have been made to GAM-2 and EXTERMINATOR to improve their utilization. All of the codes have been debugged on both CDC and IBM computers at the University of Illinois. The IAEA 10 MW benchmark problem has been solved. Results of this work have been compared with the IAEA contributors' results. Agreement is very good for highly enriched fuel (HEU). Deviations from the IAEA contributors' mean value for low enriched fuel (LEU) exist but they are small enough in general. The deviation of k eff is about 0.5% for both enrichments at the beginning of life (BOL) and at the end of life (EOL). Flux ratios deviate only about 1.5% from the IAEA contributors' mean value. (author)

  6. Selection and benchmarking of computer codes for research reactor core conversions

    International Nuclear Information System (INIS)

    Yilmaz, Emin; Jones, Barclay G.

    1983-01-01

    A group of computer codes has been selected and obtained from the Nuclear Energy Agency (NEA) Data Bank in France for the core conversion study of highly enriched research reactors. ANISN, WIMSD-4, MC 2, COBRA-3M, FEVER, THERMOS, GAM-2, CINDER and EXTERMINATOR were selected for the study. For the final work THERMOS, GAM-2, CINDER and EXTERMINATOR have been selected and used. A one-dimensional thermal hydraulics code has also been used to calculate temperature distributions in the core. THERMOS and CINDER have been modified to serve the purpose. Minor modifications have been made to GAM-2 and EXTERMINATOR to improve their utilization. All of the codes have been debugged on both CDC and IBM computers at the University of Illinois. The IAEA 10 MW benchmark problem has been solved. Results of this work have been compared with the IAEA contributors' results. Agreement is very good for highly enriched fuel (HEU). Deviations from the IAEA contributors' mean value for low enriched fuel (LEU) exist but they are small enough in general. The deviation of k eff is about 0.5% for both enrichments at the beginning of life (BOL) and at the end of life (EOL). Flux ratios deviate only about 1.5% from the IAEA contributors' mean value. (author)

  7. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  8. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and distributed recently for some benchmark analyses. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with those based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the calculated k eff values with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of 239 Pu and 240 Pu. CENDL-3 underestimated the k eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium

  9. Two-group k-eigenvalue benchmark calculations for planar geometry transport in a binary stochastic medium

    International Nuclear Information System (INIS)

    Davis, I.M.; Palmer, T.S.

    2005-01-01

    Benchmark calculations are performed for neutron transport in a two material (binary) stochastic multiplying medium. Spatial, angular, and energy dependence are included. The problem considered is based on a fuel assembly of a common pressurized water reactor. The mean chord length through the assembly is determined and used as the planar geometry system length. According to assumed or calculated material distributions, this system length is populated with alternating fuel and moderator segments of random size. Neutron flux distributions are numerically computed using a discretized form of the Boltzmann transport equation employing diffusion synthetic acceleration. Average quantities (group fluxes and k-eigenvalue) and variances are calculated from an ensemble of realizations of the mixing statistics. The effects of varying two parameters in the fuel, two different boundary conditions, and three different sets of mixing statistics are assessed. A probability distribution function (PDF) of the k-eigenvalue is generated and compared with previous research. Atomic mix solutions are compared with these benchmark ensemble average flux and k-eigenvalue solutions. Mixing statistics with large standard deviations give the most widely varying ensemble solutions of the flux and k-eigenvalue. The shape of the k-eigenvalue PDF qualitatively agrees with previous work. Its overall shape is independent of variations in fuel cross-sections for the problems considered, but its width is impacted by these variations. Statistical distributions with smaller standard deviations alter the shape of this PDF toward a normal distribution. The atomic mix approximation yields large over-predictions of the ensemble average k-eigenvalue and under-predictions of the flux. Qualitatively correct flux shapes are obtained in some cases. These benchmark calculations indicate that a model which includes higher statistical moments of the mixing statistics is needed for accurate predictions of binary
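
    The ensemble procedure described above can be summarized as: sample a realization of the mixing statistics, solve for its k-eigenvalue, and accumulate statistics over many realizations. The sketch below illustrates that outer loop; the segment-sampling parameters are placeholders and the k-eigenvalue routine is a stand-in, not the deterministic transport solve used in the benchmark.

    ```python
    # Illustrative sketch: ensemble statistics over realizations of a binary
    # stochastic slab built from alternating fuel/moderator segments.
    import random
    import statistics

    def sample_realization(system_length, mean_fuel, mean_mod):
        """Alternate exponentially distributed fuel/moderator segments until the slab is filled."""
        segments, x, fuel = [], 0.0, True
        while x < system_length:
            length = random.expovariate(1.0 / (mean_fuel if fuel else mean_mod))
            segments.append(("fuel" if fuel else "mod", min(length, system_length - x)))
            x += length
            fuel = not fuel
        return segments

    def k_eigenvalue(segments):
        """Placeholder only: in the benchmark this is a deterministic transport solve."""
        total = sum(l for _, l in segments)
        fuel_fraction = sum(l for m, l in segments if m == "fuel") / total
        return 0.7 + 0.6 * fuel_fraction

    # placeholder system length and mean segment lengths
    ks = [k_eigenvalue(sample_realization(20.0, 1.0, 0.5)) for _ in range(1000)]
    print("ensemble mean k:", statistics.mean(ks), " variance:", statistics.variance(ks))
    ```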

  10. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  11. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  12. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

    In the last two years a shift in emphasis to inherent safety and economic competitiveness has led to a resurgence in US interest in metallic-alloy fuels for LMRs. Argonne National Laboratory initiated an extensive testing program for metallic-fuelled LMR technology that has included benchmark physics as one component. The tests done in the ZPPR-15 Program produced the first physics results in over 20 years for a metal-composition LMR core

  13. Use of Monte Carlo computation in benchmarking radiotherapy treatment planning system algorithms

    International Nuclear Information System (INIS)

    Lewis, R.D.; Ryde, S.J.S.; Seaby, A.W.; Hancock, D.A.; Evans, C.J.

    2000-01-01

    Radiotherapy treatments are becoming more complex, often requiring the dose to be calculated in three dimensions and sometimes involving the application of non-coplanar beams. The ability of treatment planning systems to accurately calculate dose under a range of these and other irradiation conditions requires evaluation. Practical assessment of such arrangements can be problematic, especially when a heterogeneous medium is used. This work describes the use of Monte Carlo computation as a benchmarking tool to assess the dose distribution of external photon beam plans obtained in a simple heterogeneous phantom by several commercially available 3D and 2D treatment planning system algorithms. For comparison, practical measurements were undertaken using film dosimetry. The dose distributions were calculated for a variety of irradiation conditions designed to show the effects of surface obliquity, inhomogeneities and missing tissue above tangential beams. The results show maximum dose differences of 47% between some planning algorithms and film at a point 1 mm below a tangentially irradiated surface. Overall, the dose distribution obtained from film was most faithfully reproduced by the Monte Carlo N-Particle results, illustrating the potential of Monte Carlo computation in evaluating treatment planning system algorithms. (author)

  14. Benchmarking therapeutic drug monitoring software: a review of available computer tools.

    Science.gov (United States)

    Fuchs, Aline; Csajka, Chantal; Thoma, Yann; Buclin, Thierry; Widmer, Nicolas

    2013-01-01

    Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computation assistance. In recent decades computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. Numbers of drugs handled by the software vary widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare
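
    The ranking described above rests on a weighted scoring grid: each program receives a score per criterion, each criterion carries a weight reflecting its relative importance, and the weighted sum orders the tools. The sketch below illustrates the arithmetic only; the criteria, weights and scores are placeholders, not the published grid.

    ```python
    # Illustrative sketch: weighted scoring grid for ranking software tools.
    # Weights and per-criterion scores are placeholders.
    weights = {"pharmacokinetic relevance": 0.35, "user friendliness": 0.25,
               "computing aspects": 0.20, "interfacing": 0.10, "storage": 0.10}

    programs = {
        "Tool A": {"pharmacokinetic relevance": 8, "user friendliness": 7,
                   "computing aspects": 6, "interfacing": 5, "storage": 6},
        "Tool B": {"pharmacokinetic relevance": 6, "user friendliness": 9,
                   "computing aspects": 7, "interfacing": 6, "storage": 5},
    }

    def weighted_score(scores):
        return sum(weights[criterion] * score for criterion, score in scores.items())

    # rank tools by descending weighted score
    for name, scores in sorted(programs.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{name}: {weighted_score(scores):.2f}")
    ```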

  15. Impact of cross-section generation procedures on the simulation of the VVER 1000 pump startup experiment in the OECD/DOE/CEA V1000CT benchmark by coupled 3-D thermal hydraulics/ neutron kinetics models

    International Nuclear Information System (INIS)

    Boyan D Ivanov; Kostadin N Ivanov; Sylvie Aniel; Eric Royer

    2005-01-01

    Full text of publication follows: In the framework of a joint effort between the Nuclear Energy Agency (NEA) of OECD, the United States Department of Energy (US DOE), and the Commissariat a l'Energie Atomique (CEA), France, a coupled 3-D thermal hydraulics/neutron kinetics benchmark was defined. The overall objective of the OECD/NEA V1000CT benchmark is to assess computer codes used in the analysis of VVER-1000 reactivity transients where mixing phenomena (mass flow and temperature) in the reactor pressure vessel are complex. Original data from the Kozloduy-6 Nuclear Power Plant are available for the validation of computer codes: one experiment of pump start-up (V1000CT-1) and one experiment of steam generator isolation (V1000CT-2). Additional scenarios are defined for code-to-code comparison. As a 3D core model is necessary for a best-estimate computation of all the scenarios of the V1000CT benchmark, all participants were asked to develop their own coupled 3-D thermal hydraulics/neutron kinetics core models based on the data available in the benchmark specifications. The first code-to-code comparisons based on the V1000CT-1 Exercise 2 specifications exhibited unacceptable discrepancies between two sets of results, one of them being close to the experimental results. The present paper focuses first on the analysis of the observed discrepancies. The VVER-1000 3-D thermal hydraulics/neutron kinetics models are based on thermal-hydraulic and neutronic data homogenized at the assembly scale. The neutronic data, provided as part of the benchmark specifications, thus consist of a set of parameterized two-group cross-section libraries representing the different assemblies and the reflectors. The origin of the large observed discrepancies was found to lie in the use of these neutronic libraries. The concern was then to find a way to provide neutronic data, compatible with the neutronic models of all the benchmark participants, that also enables comparisons with experimental results. An analysis of the

  16. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  17. Computer Aided Design of the Link-Fork Head-Piston Assembly of the Kaplan Turbine with Solidworks

    Directory of Open Access Journals (Sweden)

    Camelia Jianu

    2010-10-01

    Full Text Available The paper presents the steps for the 3D computer aided design (CAD) of the link-fork head-piston assembly of the Kaplan turbine made in SolidWorks. The present paper is a tutorial for a Kaplan turbine assembly 3D geometry, dedicated to Assembly design, Drawing Geometry and Drawing Annotation.

  18. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  19. Sensitivity study applied to the CB4 VVER-440 benchmark on burnup credit

    International Nuclear Information System (INIS)

    Markova, Ludmila

    2003-01-01

    A brief overview of the completed portions (CB1, CB2, CB3, CB3+, CB4) of the international VVER-440 benchmark focused on burnup credit and a sensitivity study, as one of the final views of the benchmark results, are presented in the paper. Finally, the influence on the system k eff of the real and conservative VVER-440 fuel assembly models used for the isotopics calculation with SCALE sas2 is shown in the paper. (author)

  20. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  1. Conclusion of the I.C.T. benchmark exercise

    International Nuclear Information System (INIS)

    Giacometti, A.

    1991-01-01

    The ICT benchmark exercise conducted within the RIV working group of ESARDA, on reprocessing data supplied by COGEMA for 53 routine reprocessing input batches made up of 110 irradiated fuel assemblies from the KWO nuclear power plant, was finally evaluated. The conclusions are: all seven different ICT methods applied verified the operator data on plutonium to within about one percent; anomalies intentionally introduced into the operator data were detected in 90% of the cases; the nature of the introduced anomalies, which was unknown to the participants, was completely resolved for the safeguards-relevant cases; the false alarm rate was in the range of a few percent. The ICT benchmark results show that this technique is capable of detecting and resolving anomalies in the reprocessing input data at the level of about one percent.

  2. Proposal and analysis of the benchmark problem suite for reactor physics study of LWR next generation fuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-10-01

    In order to investigate the calculation accuracy of the nuclear characteristics of LWR next generation fuels, the Research Committee on Reactor Physics organized by JAERI has established the Working Party on Reactor Physics for LWR Next Generation Fuels. The next generation fuels are those aiming at further extended burn-up, such as 70 GWd/t, beyond the current design. The Working Party has proposed six benchmark problems, which consist of pin-cell, PWR fuel assembly and BWR fuel assembly geometries loaded with uranium and MOX fuels, respectively. The specifications of the benchmark problems neglect some of the current limitations, such as 5 wt% 235 U, to achieve the above-mentioned target. Eleven organizations in the Working Party have carried out the analyses of the benchmark problems. As a result, the status of accuracy with the current data and methods and some problems to be solved in the future were clarified. In this report, details of the benchmark problems, the results from each organization, and their comparisons are presented. (author)

  3. Theoretical analysis of time-dependent neutron spectra in bulk assemblies

    International Nuclear Information System (INIS)

    Akimoto, Tadashi; Ogawa, Yuichi; Togawa, Orihiko.

    1988-01-01

    Time-dependent neutron spectra in an iron assembly and in a graphite assembly are obtained with a one-dimensional S N calculation, in an attempt to investigate the applicability of these spectra to the benchmark test by the LINAC-TOF method for the evaluation of nuclear data and numerical methods. The group constants are taken from the JAERI FAST SET Version 1, 2 and the ABBN SET. It was demonstrated by a sensitivity test that the time-dependent neutron spectra are sensitive to changes in the inelastic scattering cross section data in the iron assembly and to changes in the elastic scattering cross section data in the graphite assembly. Moreover, it is shown that the time-dependent spectra in the graphite assembly are sensitive to the group structure. Because the time-dependent spectra contain information about neutron transport phenomena that is not obtained from the stationary spectra, the usefulness of a benchmark test based on time-dependent spectra is indicated by the theoretical analysis. (author)

  4. Computing sextic centrifugal distortion constants by DFT: A benchmark analysis on halogenated compounds

    Science.gov (United States)

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi

    2017-05-01

    This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing level of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at DFT level of theory coupled to harmonic data computed at increasing level of electronic correlation (up to CCSD level of theory augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T)) were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the route to the study of more complex systems.

  5. 2-d and 1-d Nanomaterials Construction through Peptide Computational Design and Solution Assembly

    Science.gov (United States)

    Pochan, Darrin

    Self-assembly of molecules is an attractive materials construction strategy due to its simplicity in application. By considering peptidic molecules in the bottom-up materials self-assembly design process, one can take advantage of inherently biomolecular attributes; intramolecular folding events, secondary structure, and electrostatic/H-bonding/hydrophobic interactions to define hierarchical material structure and consequent properties. Importantly, while biomimicry has been a successful strategy for the design of new peptide molecules for intermolecular assembly, computational tools have been developed to de novo design peptide molecules required for construction of pre-determined, desired nanostructures and materials. A new system comprised of coiled coil bundle motifs theoretically designed to assemble into designed, one and two-dimensional nanostructures will be introduced. The strategy provides the opportunity for arbitrary nanostructure formation, i.e. structures not observed in nature, with peptide molecules. Importantly, the desired nanostructure was chosen first while the peptides needed for coiled coil formation and subsequent nanomaterial formation were determined computationally. Different interbundle, two-dimensional nanostructures are stabilized by differences in amino acid composition exposed on the exterior of the coiled coil bundles. Computation was able to determine molecules required for different interbundle symmetries within two-dimensional sheets stabilized by subtle differences in amino acid composition of the inherent peptides. Finally, polymers were also created through covalent interactions between bundles that allowed formation of architectures spanning flexible network forming chains to ultra-stiff polymers, all with the same building block peptides. The success of the computational design strategy is manifested in the nanomaterial results as characterized by electron microscopy, scattering methods, and biophysical techniques. Support

  6. Solution of the 'MIDICORE' WWER-1000 core periphery power distribution benchmark by KARATE and MCNP

    International Nuclear Information System (INIS)

    Temesvari, E.; Hegyi, G.; Hordosy, G.; Maraczy, C.

    2011-01-01

    The 'MIDICORE' WWER-1000 core periphery power distribution benchmark was proposed by Mr. Mikolas at the twentieth Symposium of AER in Finland in 2010. This MIDICORE benchmark is a two-dimensional calculation benchmark based on the WWER-1000 reactor core cold state geometry, taking into account the geometry of the explicit radial reflector. The main task of the benchmark is to test the pin-by-pin power distribution in selected fuel assemblies at the periphery of the WWER-1000 core. In this paper we present our results (k eff , integral fission power) calculated by MCNP and the KARATE code system at KFKI-AEKI, and the comparison to the preliminary reference Monte Carlo calculation results made by NRI, Rez. (Authors)

  7. Development of parallel benchmark code by sheet metal forming simulator 'ITAS'

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Suzuki, Shintaro; Minami, Kazuo

    1999-03-01

    This report describes the development of a parallel benchmark code based on the sheet metal forming simulator 'ITAS'. ITAS is a nonlinear elasto-plastic analysis program based on the finite element method for the simulation of sheet metal forming. ITAS adopts a dynamic analysis method that computes the displacement of the sheet metal at every time step and utilizes the implicit method with a direct linear equation solver. The simulator is therefore very robust, but it requires a large amount of computational time and memory. In the development of the parallel benchmark code, we designed the code using MPI programming to reduce the computational time. In numerical experiments on five parallel supercomputers at CCSE JAERI, i.e., SP2, SR2201, SX-4, T94 and VPP300, good performance is observed. The results will be made public through the WWW so that the benchmark results may serve as a guideline for the research and development of parallel programs. (author)
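
    Parallel benchmarks of this kind typically report the wall-clock time of a fixed workload as the number of processes grows. The sketch below shows such a timing measurement with mpi4py on a dummy workload; it is purely illustrative and unrelated to the ITAS code itself.

    ```python
    # Illustrative sketch: measuring parallel wall-clock time for a dummy workload.
    # Run with e.g. `mpiexec -n 4 python bench.py`.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 2_000_000                                  # total work, split evenly across ranks
    local = np.arange(rank, n, size, dtype=np.float64)

    comm.Barrier()
    t0 = MPI.Wtime()
    local_sum = np.sum(np.sqrt(local))             # stand-in for the real FEM solve
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    t1 = MPI.Wtime()

    if rank == 0:
        print(f"ranks={size}  result={total:.3e}  wall time={t1 - t0:.4f} s")
    ```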

  8. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel

  9. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  10. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relating to the PWR MOX pin-cell benchmark, representing typical fuel for plutonium recycling, one case corresponding to a first cycle and the second to a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks

  11. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes
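
    The ratio technique described above can be written compactly: the fluence in the applied field is inferred from the ratio of reaction rates measured with the same detector in the applied and benchmark fields, so the detection efficiency and the absolute cross-section scale cancel. The sketch below illustrates this with made-up numbers; the residual spectrum-correction factor is an assumption of the illustration, not a value from the paper.

    ```python
    # Hedged sketch of benchmark referencing: an applied-field fluence is
    # inferred from the ratio of reaction rates in the applied field and in a
    # well-characterized benchmark field.  Detector efficiency and the absolute
    # cross-section scale cancel in the ratio; only the ratio of spectrum-
    # averaged cross sections remains.  All values are illustrative only.

    def referenced_fluence(rate_applied, rate_benchmark, fluence_benchmark,
                           sigma_bar_ratio=1.0):
        """fluence_applied = (R_a / R_b) * (sigma_bar_b / sigma_bar_a) * fluence_b"""
        return (rate_applied / rate_benchmark) * sigma_bar_ratio * fluence_benchmark

    # Example: cavity measurement referenced to a benchmark field (made-up numbers).
    phi_cavity = referenced_fluence(rate_applied=2.4e2, rate_benchmark=6.0e3,
                                    fluence_benchmark=1.0e12, sigma_bar_ratio=0.95)
    print(f"Inferred cavity fluence: {phi_cavity:.2e} n/cm^2")
    ```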

  12. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Oyama, Yukio; Ichihara, Chihiro; Makita, Yo; Takahashi, Akito

    1998-11-01

    This report compiles results of benchmark tests for validation of evaluated nuclear data to be used in nuclear designs of fusion reactors. Parts of the results were obtained under activities of the Fusion Neutronics Integral Test Working Group organized by the members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were used for the tests: (i) the leakage neutron spectrum measurement experiments from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments) also at FNS, and (iii) the pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. Evaluated nuclear data tested were JENDL-3.2, JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  13. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. At the next step the performance of the system is measured. In many multimedia applications, such as DVD playback, the application needs to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.

  14. Integral measurements on Caliban and Prospero assemblies for nuclear data validation

    International Nuclear Information System (INIS)

    Casoli, P.; Authier, N.; Richard, B.; Ducauze-Philippe, M.; Cartier, J.

    2011-01-01

    How can the quality of nuclear data libraries be checked? Performing reference experiments, also called benchmarks, allows evaluated data to be tested. During these experiments, integral values such as reaction rates or effective neutron multiplication coefficients are measured. In this paper, the principles of benchmark construction are explained and illustrated with several works performed on the CALIBAN and PROSPERO critical assemblies operated by the Valduc center: benchmarks for dosimetry, activation reaction studies, and neutron noise measurements. (authors)

  15. DNA-programmed dynamic assembly of quantum dots for molecular computation.

    Science.gov (United States)

    He, Xuewen; Li, Zhi; Chen, Muzi; Ma, Nan

    2014-12-22

    Despite the widespread use of quantum dots (QDs) for biosensing and bioimaging, QD-based bio-interfaceable and reconfigurable molecular computing systems have not yet been realized. DNA-programmed dynamic assembly of multi-color QDs is presented for the construction of a new class of fluorescence resonance energy transfer (FRET)-based QD computing systems. A complete set of seven elementary logic gates (OR, AND, NOR, NAND, INH, XOR, XNOR) is realized using a series of binary and ternary QD complexes operated by strand displacement reactions. The integration of different logic gates into a half-adder circuit for molecular computation is also demonstrated. This strategy is quite versatile and straightforward for logical operations and would pave the way for QD-biocomputing-based intelligent molecular diagnostics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
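
    The logic layer of such a circuit can be checked against an ordinary Boolean truth table. The sketch below reproduces only the half-adder logic (XOR for the sum bit, AND for the carry bit); it says nothing about the DNA strand-displacement or FRET chemistry that implements it.

    ```python
    # Boolean sketch of the half-adder circuit mentioned above:
    # SUM = A XOR B, CARRY = A AND B.  This models only the logic layer,
    # not the DNA/QD FRET implementation reported in the paper.

    def half_adder(a: int, b: int) -> tuple[int, int]:
        return a ^ b, a & b  # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"A={a} B={b} -> SUM={s} CARRY={c}")
    ```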

  16. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactive gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and on the composition and flow rate of the influent.
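
    The abstract does not give the functional form of the gas-to-liquid mass-transfer model; a common textbook choice is a two-film term proportional to the departure from saturation, as sketched below. The transfer coefficient and saturation concentration used here are illustrative assumptions, not VRA model parameters.

    ```python
    # Hedged sketch of a gas-to-liquid mass-transfer term of the two-film type:
    # rate = kLa * (C_sat - C_liq).  The VRA model's actual formulation is not
    # given in the abstract; kLa and C_sat below are illustrative assumptions.

    def oxygen_transfer_rate(kLa, c_sat, c_liq):
        """Volumetric O2 transfer rate [mol/(m^3*s)] for a two-film model."""
        return kLa * (c_sat - c_liq)

    rate = oxygen_transfer_rate(kLa=0.02, c_sat=1.2, c_liq=0.3)  # [1/s], [mol/m^3]
    print(f"O2 transfer rate: {rate:.3f} mol/(m^3*s)")
    ```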

  17. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  18. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarised the first phase of the Nuclear Energy Agency (NEA) and the US Nuclear Regulatory Commission Benchmark based on NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises; Exercise 1: steady-state single sub-channel benchmark, Exercise 2: steady-state rod bundle benchmark, Exercise 3: transient rod bundle benchmark and Exercise 4: a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous-media, sub-channel, system thermal-hydraulic and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are being taken), so the experimentally-determined void fractions will be lower than the actual void fraction. Some of the best
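
    The thermal equilibrium quality compared in these exercises follows the usual enthalpy-based definition; a minimal sketch of that textbook relation is given below, with placeholder property values rather than PSBT data.

    ```python
    # Textbook definition of thermal equilibrium quality used in sub-channel
    # comparisons: x_eq = (h - h_f) / h_fg.  The enthalpy values below are
    # illustrative placeholders, not PSBT benchmark data.

    def equilibrium_quality(h, h_f, h_fg):
        return (h - h_f) / h_fg

    x_eq = equilibrium_quality(h=1200.0, h_f=1250.0, h_fg=1715.0)  # kJ/kg
    print(f"Thermal equilibrium quality: {x_eq:.3f}")  # negative => sub-cooled
    ```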

  19. ZPR-6 assembly 7 high-240Pu core: a cylindrical assembly with mixed (Pu, U)-oxide fuel and a central high-240Pu zone.

    Energy Technology Data Exchange (ETDEWEB)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.; Tsiboulia, A.; Rozhikhin, Y.; Nuclear Engineering Division; Inst. of Physics and Power Engineering

    2007-10-01

    Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of

  20. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4® neutron gamma coupled calculations

    International Nuclear Information System (INIS)

    Lee, Yi-Kang

    2016-01-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4 ® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries can be

  1. Local approach of cleavage fracture applied to a vessel with subclad flaw. A benchmark on computational simulation

    International Nuclear Information System (INIS)

    Moinereau, D.; Brochard, J.; Guichard, D.; Bhandari, S.; Sherry, A.; France, C.

    1996-10-01

    A benchmark on the computational simulation of a cladded vessel with a 6.2 mm sub-clad flaw subjected to a thermal transient has been conducted. Two-dimensional elastic and elastic-plastic finite element computations of the vessel have been performed by the different partners with their respective finite element codes ASTER (EDF), CASTEM 2000 (CEA), SYSTUS (Framatome) and ABAQUS (AEA Technology). The main results have been compared: temperature field in the vessel, crack opening, opening stress at crack tips, stress intensity factor in cladding and base metal, Weibull stress σ_w and probability of failure in base metal, and void growth rate R/R_0 in cladding. This comparison shows excellent agreement on the main results, in particular on results obtained with the local approach. (K.A.)
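
    The Weibull stress and failure probability referred to above are, in the usual Beremin-type local approach, weakest-link quantities of the form sketched below. The parameters m, σ_u and V_0 and the stress values are assumed for illustration and are not the values used by the benchmark partners.

    ```python
    # Standard Beremin-type local-approach quantities referred to above:
    #   sigma_w = ( sum_i sigma_I,i^m * V_i / V_0 )^(1/m)
    #   P_f     = 1 - exp( -(sigma_w / sigma_u)^m )
    # Parameters m, sigma_u, V_0 and the stress/volume data are assumed
    # for illustration only.

    import math

    def weibull_stress(max_principal_stresses, volumes, m=22.0, v0=1.0e-4):
        s = sum((sig ** m) * v / v0 for sig, v in zip(max_principal_stresses, volumes))
        return s ** (1.0 / m)

    def failure_probability(sigma_w, sigma_u=2600.0, m=22.0):
        return 1.0 - math.exp(-((sigma_w / sigma_u) ** m))

    sw = weibull_stress([1800.0, 2000.0, 2100.0], [2e-5, 1e-5, 5e-6])  # MPa, volumes
    print(f"sigma_w = {sw:.0f} MPa, P_f = {failure_probability(sw):.2e}")
    ```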

  2. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  3. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  4. Verification of the code DYN3D/R with the help of international benchmarks

    International Nuclear Information System (INIS)

    Grundmann, U.; Rohde, U.

    1997-10-01

    Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. To assess the accuracy of the DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.)

  5. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2005-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts as well as for current applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for coupling core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present report is the second in a series of four and summarises the results of the first benchmark exercise, which identifies the key parameters and important issues concerning the thermalhydraulic system modelling of the transient, with specified core average axial power distribution and fission power time transient history. The transient addressed is a turbine trip in a boiling water reactor, involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the Peach Bottom 2 reactor (a GE-designed BWR/4) make the present benchmark particularly valuable. (author)

  6. 3-D extension C5G7 MOX benchmark calculation using the THREEDANT code

    International Nuclear Information System (INIS)

    Kim, H.Ch.; Han, Ch.Y.; Kim, J.K.; Na, B.Ch.

    2005-01-01

    This work pursued the benchmark on deterministic 3-D MOX fuel assembly transport calculations without spatial homogenization (the C5G7 MOX benchmark extension). The goal of this benchmark is to provide a more thorough test of the ability of currently available 3-D methods to handle the spatial heterogeneities of a reactor core. The benchmark requires solutions in the form of normalized pin powers as well as the eigenvalue for each of the control rod configurations: without rods, with A rods, and with B rods. In this work, the DANTSYS code package was applied to analyze the 3-D extension C5G7 MOX benchmark problems. The THREEDANT code within the DANTSYS code package, which solves the 3-D transport equation in x-y-z and r-z-theta geometries, was employed to perform the benchmark calculations. To analyze the benchmark with the THREEDANT code, proper spatial and angular approximations were made. Several calculations were performed to investigate the effects of the different spatial approximations on the accuracy. The results from these sensitivity studies were analyzed and discussed. From the results, it is found that a 4x4 grid per pin cell is sufficiently refined, so that very little benefit is obtained by further refining the mesh. (authors)
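
    The normalized pin powers requested by the benchmark are conventionally the pin powers divided by the average pin power, so that they average to unity; the sketch below shows that normalization with placeholder values rather than C5G7 results.

    ```python
    # Conventional pin-power normalization used when reporting such benchmarks:
    # each pin power is divided by the average pin power, so the normalized
    # values average to 1.0.  The powers below are placeholders, not C5G7 data.

    def normalize_pin_powers(powers):
        avg = sum(powers) / len(powers)
        return [p / avg for p in powers]

    pins = [1.12, 0.98, 1.05, 0.85]
    print([round(p, 3) for p in normalize_pin_powers(pins)])
    ```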

  7. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report; Analysen und Ergebnisse zum OECD/NEA WPNCS EGUNF Benchmark Phase II. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed and the results obtained in the frame of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks have been realized to the full extent. The scope of the benchmarks was the comparison of a generic BWR fuel element with gadolinium-containing fuel rods across several computer codes and cross-section libraries of different international working groups and institutions. The computational model used allows the accuracy of fuel rod inventory calculations to be evaluated, together with their influence on BWR burnup credit calculations.

  8. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel computing platforms, including by using the standard parallelization library MPI. The platforms used for benchmark calculations are a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). As is generally the case, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty for assembly powers was less than 0.1% in a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)
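
    The speedup and parallelization-efficiency figures discussed above follow the usual definitions S(N) = T(1)/T(N) and E(N) = S(N)/N; the sketch below applies them to illustrative timings, not to the measured VPP500, Paragon, SR2201 or SP2 data.

    ```python
    # Usual definitions behind the quoted figures: speedup S(N) = T(1) / T(N),
    # parallel efficiency E(N) = S(N) / N.  Timings below are illustrative only.

    def speedup_and_efficiency(t_serial, t_parallel, n_pe):
        s = t_serial / t_parallel
        return s, s / n_pe

    for n_pe, t in [(16, 410.0), (64, 115.0), (256, 36.0)]:
        s, e = speedup_and_efficiency(t_serial=6000.0, t_parallel=t, n_pe=n_pe)
        print(f"{n_pe:4d} PEs: speedup {s:6.1f}, efficiency {e:5.2f}")
    ```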

  9. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications

  10. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed a strongly decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculational methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  11. Mapsembler, targeted and micro assembly of large NGS datasets on a desktop computer

    Directory of Open Access Journals (Sweden)

    Peterlongo Pierre

    2012-03-01

    Background: The analysis of next-generation sequencing data from large genomes is a timely research topic. Sequencers are producing billions of short sequence fragments from newly sequenced organisms. Computational methods for reconstructing whole genomes/transcriptomes (de novo assemblers) are typically employed to process such data. However, these methods require large memory resources and computation time. Many basic biological questions could be answered by targeting specific information in the reads, thus avoiding complete assembly. Results: We present Mapsembler, an iterative micro and targeted assembler which processes large datasets of reads on commodity hardware. Mapsembler checks for the presence of given regions of interest that can be constructed from reads and builds a short assembly around each of them, either as a plain sequence or as a graph showing contextual structure. We introduce new algorithms to retrieve approximate occurrences of a sequence from reads and construct an extension graph. Among other results presented in this paper, Mapsembler retrieved previously described human breast cancer candidate fusion genes and detected new ones not previously known. Conclusions: Mapsembler is the first software that enables de novo discovery around a region of interest of repeats, SNPs, exon skipping, gene fusion, as well as other structural events, directly from raw sequencing reads. As indexing is localized, the memory footprint of Mapsembler is negligible. Mapsembler is released under the CeCILL license and can be freely downloaded from http://alcovna.genouest.org/mapsembler/.
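
    The first step described above, retrieving approximate occurrences of a region of interest directly from reads, can be illustrated with a simple Hamming-distance scan. Mapsembler's actual indexing and extension-graph construction are more sophisticated, so the sketch below is only a toy illustration.

    ```python
    # Toy sketch of the targeted-assembly first step described above: find reads
    # that contain an approximate occurrence of a seed sequence of interest.
    # Mapsembler uses a dedicated index and an extension graph; this simple
    # Hamming-distance scan is only an illustration.

    def approx_occurrences(seed, read, max_mismatches=1):
        hits = []
        for i in range(len(read) - len(seed) + 1):
            window = read[i:i + len(seed)]
            if sum(a != b for a, b in zip(seed, window)) <= max_mismatches:
                hits.append(i)
        return hits

    reads = ["ACGTTGCAAGT", "TTACGATGCAA", "GGGTTTACGTT"]
    seed = "ACGTTG"
    for r in reads:
        print(r, approx_occurrences(seed, r))
    ```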

  12. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.

  13. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  14. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  15. Application of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the OECD/NRC BWR turbine trip benchmark and its performance on multi-processor computers

    International Nuclear Information System (INIS)

    Langenbuch, S.; Schmidt, K.D.; Velkov, K.

    2003-01-01

    The OECD/NRC BWR Turbine Trip (TT) Benchmark is investigated to perform code-to-code comparison of coupled codes, including a comparison to measured data which are available from turbine trip experiments at Peach Bottom 2. This benchmark problem for a BWR over-pressure transient represents a challenging application of coupled codes which integrate 3-dimensional neutron kinetics into thermal-hydraulic system codes for best-estimate simulation of plant transients. This transient represents a typical application of coupled codes; such analyses are usually performed on powerful workstations using a single CPU. Nowadays, multi-CPU systems are much more readily available: powerful workstations already provide 4 to 8 CPUs, and computer centers give access to multi-processor systems with CPU counts on the order of 16 up to several hundred. Therefore, the performance of the coupled code Athlet-Quabox/Cubbox on multi-processor systems is studied. Different applications place different requirements on code efficiency, because the amount of computer time spent in different parts of the code varies. This paper presents the main results of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the BWR TT Benchmark, together with evaluations of the code performance on multi-processor computers. (authors)

  16. Summary Report of Consultants' Meeting on Accuracy of Experimental and Theoretical Nuclear Cross-Section Data for Ion Beam Analysis and Benchmarking

    International Nuclear Information System (INIS)

    Abriola, Daniel; Dimitriou, Paraskevi; Gurbich, Alexander F.

    2013-11-01

    A summary is given of a Consultants' Meeting assembled to assess the accuracy of experimental and theoretical nuclear cross-section data for Ion Beam Analysis and the role of benchmarking experiments. The participants discussed the different approaches to assigning uncertainties to evaluated data, and presented results of benchmark experiments performed in their laboratories. They concluded that priority should be given to the validation of cross-section data by benchmark experiments, and recommended that an experts meeting be held to prepare the guidelines, methodology and work program of a future coordinated project on benchmarking.

  17. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  19. Benchmark experiment on molybdenum with graphite by using DT neutrons at JAEA/FNS

    Energy Technology Data Exchange (ETDEWEB)

    Ohta, Masayuki, E-mail: ohta.masayuki@qst.go.jp [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan); Kwon, Saerom; Sato, Satoshi [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan); Konno, Chikara [Japan Atomic Energy Agency, 2-4 Shirakata-Shirane, Tokai-mura, Naka-gun, Ibaraki (Japan); Ochiai, Kentaro [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan)

    2017-01-15

    Highlights: • A new benchmark experiment on molybdenum was conducted with DT neutrons at JAEA/FNS. • Dosimetry reaction and fission rates were measured in the molybdenum assembly. • Calculated results with the MCNP5 code were compared with the measured ones. • A problem with the capture cross-section data of molybdenum was pointed out. - Abstract: In our previous benchmark experiment on Mo at JAEA/FNS, we found problems with the (n,2n) and (n,γ) reaction cross sections of Mo in JENDL-4.0 above a few hundred eV. We perform a new benchmark experiment on Mo with a Mo assembly covered with graphite and Li2O blocks in order to validate the nuclear data of Mo in a lower energy region than in the previous experiment. Several dosimetry reaction and fission rates are measured and compared with values calculated with the MCNP5-1.40 code and the recent nuclear data libraries ENDF/B-VII.1, JEFF-3.2, and JENDL-4.0. It is suggested that the (n,γ) reaction cross section of 95Mo should be larger in the tail region below the large resonance at 45 eV in these nuclear data libraries.

  20. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4® neutron gamma coupled calculations

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yi-Kang, E-mail: yi-kang.lee@cea.fr

    2016-11-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries

  1. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  2. Model-Based Engineering and Manufacturing CAD/CAM Benchmark (Final)

    International Nuclear Information System (INIS)

    Domm, T.C.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to either transport information more easily throughout the corporation or as a conduit for

  3. A rod-airfoil experiment as a benchmark for broadband noise modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, M.C. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Universite Claude Bernard/Lyon I, Villeurbanne Cedex (France); Boudet, J.; Michard, M. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Casalino, D. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Fluorem SAS, Ecully Cedex (France)

    2005-07-01

    A low Mach number rod-airfoil experiment is shown to be a good benchmark for numerical and theoretical broadband noise modeling. The benchmarking approach is applied to a sound computation from a 2D unsteady-Reynolds-averaged Navier-Stokes (U-RANS) flow field, where 3D effects are partially compensated for by a spanwise statistical model and by a 3D large eddy simulation. The experiment was conducted in the large anechoic wind tunnel of the Ecole Centrale de Lyon. Measurements taken included particle image velocimetry (PIV) around the airfoil, single hot wire, wall pressure coherence, and far field pressure. These measurements highlight the strong 3D effects responsible for spectral broadening around the rod vortex shedding frequency in the subcritical regime, and the dominance of the noise generated around the airfoil leading edge. The benchmarking approach is illustrated by two examples: the validation of a stochastical noise generation model applied to a 2D U-RANS computation; the assessment of a 3D LES computation using a new subgrid scale (SGS) model coupled to an advanced-time Ffowcs-Williams and Hawkings sound computation. (orig.)

  4. Stationary PWR-calculations by means of LWRSIM at the NEACRP 3D-LWRCT benchmark

    International Nuclear Information System (INIS)

    Van de Wetering, T.F.H.

    1993-01-01

    Within the framework of participation in an international benchmark, calculations were carried out with an adapted version of the computer code Light Water Reactor SIMulation (LWRSIM) for three-dimensional core calculations of pressurized water reactors. The 3-D LWR Core Transient Benchmark was set up to compare 3-D computer codes for transient calculations in LWRs. Participation in the benchmark provided more insight into the accuracy of the code when applied to pressurized water reactors other than the Borssele nuclear power plant in the Netherlands, for which the code was originally developed and used.

  5. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
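
    A test case of the kind described pairs a reference image with a copy carrying a known, computer-induced setup error plus blur and noise. The sketch below builds such a pair for a small synthetic 2D image; NumPy and SciPy are assumed, and this is an illustration rather than the authors' DRR software.

    ```python
    # Hedged sketch of generating a test image with a *known* setup error:
    # shift a synthetic reference image by a chosen translation, then add
    # Gaussian blur (geometric unsharpness) and random noise.  NumPy/SciPy
    # are assumed; this is an illustration, not the authors' tool.

    import numpy as np
    from scipy.ndimage import shift, gaussian_filter

    rng = np.random.default_rng(0)
    reference = np.zeros((64, 64))
    reference[20:44, 24:40] = 1.0                      # simple "anatomy" block

    known_error_px = (3.0, -2.0)                       # induced setup error (rows, cols)
    test = shift(reference, known_error_px, order=1)   # apply the known translation
    test = gaussian_filter(test, sigma=1.5)            # geometric unsharpness
    test += rng.normal(0.0, 0.02, test.shape)          # random noise

    print("Induced setup error (pixels):", known_error_px)
    ```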

  6. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6 Li, 7 Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  7. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience 'Benchmark beton' pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed a strongly decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculational methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  8. Diversity of bilateral synaptic assemblies for binaural computation in midbrain single neurons.

    Science.gov (United States)

    He, Na; Kong, Lingzhi; Lin, Tao; Wang, Shaohui; Liu, Xiuping; Qi, Jiyao; Yan, Jun

    2017-11-01

    Binaural hearing confers many beneficial functions but our understanding of its underlying neural substrates is limited. This study examines the bilateral synaptic assemblies and binaural computation (or integration) in the central nucleus of the inferior colliculus (ICc) of the auditory midbrain, a key convergent center. Using in-vivo whole-cell patch-clamp, the excitatory and inhibitory postsynaptic potentials (EPSPs/IPSPs) of single ICc neurons to contralateral, ipsilateral and bilateral stimulation were recorded. According to the contralateral and ipsilateral EPSP/IPSP, 7 types of bilateral synaptic assemblies were identified. These include EPSP-EPSP (EE), E-IPSP (EI), E-no response (EO), II, IE, IO and complex-mode (CM) neurons. The CM neurons showed frequency- and/or amplitude-dependent EPSPs/IPSPs to contralateral or ipsilateral stimulation. Bilateral stimulation induced EPSPs/IPSPs that could be larger than (facilitation), similar to (ineffectiveness) or smaller than (suppression) those induced by contralateral stimulation. Our findings have allowed our group to characterize novel neural circuitry for binaural computation in the midbrain. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Burn-up Credit Criticality Safety Benchmark Phase III-C. Nuclide Composition and Neutron Multiplication Factor of a Boiling Water Reactor Spent Fuel Assembly for Burn-up Credit and Criticality Control of Damaged Nuclear Fuel

    International Nuclear Information System (INIS)

    Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.

    2016-01-01

    Criticality control of damaged nuclear fuel is one of the key issues in the decommissioning operation of the Fukushima Daiichi Nuclear Power Station accident. The average isotopic composition of spent nuclear fuel as a function of burn-up is required in order to evaluate criticality parameters of the mixture of damaged nuclear fuel with other materials. The NEA Expert Group on Burn-up Credit Criticality (EGBUC) has organised several international benchmarks to assess the accuracy of burn-up calculation methodologies. For BWR fuel, the Phase III-B benchmark, published in 2002, was a remarkable landmark that provided general information on the burn-up properties of BWR spent fuel based on the 8x8 type fuel assembly. Since the publication of the Phase III-B benchmark, all major nuclear data libraries have been revised; in Japan from JENDL-3.2 to JENDL-4, in Europe from JEF-2.2 to JEFF-3.1 and in the US from ENDF/B-VI to ENDF/B-VII.1. Burn-up calculation methodologies have been improved by adopting continuous-energy Monte Carlo codes and modern neutronics calculation methods. Considering the importance of the criticality control of damaged fuel in the Fukushima Daiichi Nuclear Power Station accident, a new international burn-up calculation benchmark for the 9 x 9 STEP-3 BWR fuel assemblies was organised to carry out the inter-comparison of the averaged isotopic composition in the interest of the burnup credit criticality safety community. Benchmark specifications were proposed and approved at the EGBUC meeting in September 2012 and distributed in October 2012. The deadline for submitting results was set at the end of February 2013. The basic model for the benchmark problem is an infinite two-dimensional array of BWR fuel assemblies consisting of a 9 x 9 fuel rod array with a water channel in the centre. The initial uranium enrichment of fuel rods without gadolinium is 4.9, 4.4, 3.9, 3.4 and 2.1 wt% and 3.4 wt% for the rods using gadolinium. The burn-up conditions are

  10. QUAST: quality assessment tool for genome assemblies.

    Science.gov (United States)

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-04-15

    Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST, a quality assessment tool for evaluating and comparing genome assemblies. This tool improves on leading assembly comparison software with new ideas and quality metrics. QUAST can evaluate assemblies both with a reference genome and without a reference. QUAST produces many reports, summary tables and plots to help scientists in their research and in their publications. In this study, we used QUAST to compare several genome assemblers on three datasets. QUAST tables and plots for all of them are available in the Supplementary Material, and interactive versions of these reports are on the QUAST website: http://bioinf.spbau.ru/quast. Supplementary data are available at Bioinformatics online.
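
    QUAST is driven from the command line; a minimal sketch of invoking it from a Python script is shown below. The file names and the output directory are placeholders, and the exact options should be checked against the QUAST documentation.

        import subprocess

        # Hypothetical inputs: two assemblies to compare and an optional reference genome.
        contigs = ["assembler_A_contigs.fasta", "assembler_B_contigs.fasta"]
        cmd = [
            "quast.py", *contigs,
            "-r", "reference.fasta",    # optional reference; omit for reference-free evaluation
            "-o", "quast_results",      # output directory for reports, tables and plots
        ]
        subprocess.run(cmd, check=True)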

  11. Solution of a benchmark set of problems for BWR and PWR reactors with UO2 and MOX fuels using CASMO-4

    International Nuclear Information System (INIS)

    Martinez F, M.A.; Valle G, E. del; Alonso V, G.

    2007-01-01

    In this work, some results are presented for a group of light water reactor benchmark problems that allow the fuel physics of these reactors to be studied. These benchmark problems were proposed by Akio Yamamoto and collaborators in 2002 and include two fuel types: uranium dioxide (UO 2 ) and mixed oxide (MOX). The problems cover three different configurations: a unit cell for a single fuel rod, a PWR fuel assembly and a BWR fuel assembly, which allows an analysis of issues related to the performance of new-generation, high-burnup fuel in light water reactors. These benchmark problems also help in understanding in-core fuel management in both BWRs and PWRs. The calculations were carried out with CMS (Core Management Software), in particular with CASMO-4, a code designed to perform burnup analysis of fuel rod cells as well as fuel assemblies for both PWRs and BWRs, and which is part of the CMS package. (Author)

  12. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
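
    As an illustration of the indirect standardization mentioned above, the sketch below computes a standardized infection ratio (observed over expected infections); the device-day count and benchmark rate are placeholder values, not GCC data.

        def standardized_infection_ratio(observed, device_days, benchmark_rate_per_1000):
            """Indirect standardization: SIR = observed / expected infections, where the
            expected count comes from the benchmark rate applied to local device-days."""
            expected = benchmark_rate_per_1000 * device_days / 1000.0
            return observed / expected

        # Placeholder local surveillance data and benchmark rate (illustrative only)
        sir = standardized_infection_ratio(observed=8, device_days=3200,
                                           benchmark_rate_per_1000=2.5)
        print(f"SIR = {sir:.2f}")  # values above 1 indicate more infections than the benchmark predicts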

  13. HELIOS2: Benchmarking Against Experiments for Hexagonal and Square Lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general two-dimensional geometry bounded by a polygon of straight lines. The transport solver may be chosen between the method of collision probabilities and the method of characteristics. The former is well known for its successful application to the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, the method of characteristics, helps in areas where the computational requirements of collision probabilities become too large for practical application. The application of HELIOS2 and the method of characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons with measured data from the Hungarian ZR-6 reactor and from JAERI's tank-type critical assembly facility to verify and validate HELIOS2 and the method of characteristics for WWER assembly imitators; configurations with different absorber types (ZrB2, B4C, Eu2O3 and Gd2O3); and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. With the uncertainties taken into account, the results are generally excellent. Special attention is given to the effect of an iron radial reflector: comparisons with measurements from the Temporary International Collective and the tank-type critical assembly for stainless steel and iron reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (Authors)

  14. Computer Tomography Analysis of Fastrac Composite Thrust Chamber Assemblies

    Science.gov (United States)

    Beshears, Ronald D.

    2000-01-01

    Computed tomography (CT) inspection has been integrated into the production process for NASA's Fastrac composite thrust chamber assemblies (TCAs). CT has been proven to be uniquely qualified to detect the known critical flaw for these nozzles, liner cracks that are adjacent to debonds between the liner and overwrap. CT is also being used as a process monitoring tool through analysis of low density indications in the nozzle overwraps. 3D reconstruction of CT images to produce models of flawed areas is being used to give program engineers better insight into the location and nature of nozzle flaws.

  15. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    Royer, E.; Raimond, E.; Caruge, D.

    2001-01-01

    The OECD/NEA Main Steam Line Break (MSLB) Benchmark allows the comparison of state-of-the-art and best-estimate models used to compute reactivity accidents. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in a return to power (RTP) after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. The objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed

  16. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly

    International Nuclear Information System (INIS)

    Andrey Ioilev; Maskhud Samigulin; Vasily Ustinenko; Simon Lo; Adrian Tentner

    2005-01-01

    Full text of publication follows: The goal of this project is to develop an advanced Computational Fluid Dynamics (CFD) computer code (CFD-BWR) that allows the detailed analysis of the two-phase flow and heat transfer phenomena in a Boiling Water Reactor (BWR) fuel bundle under various operating conditions. This code will include more fundamental physical models than the current generation of sub-channel codes and advanced numerical algorithms for improved computational accuracy, robustness, and speed. It is highly desirable to understand the detailed two-phase flow phenomena inside a BWR fuel bundle. These phenomena include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with fuel assembly and, ultimately, the reactor performance. Traditionally, the best analysis tools for the analysis of two-phase flow phenomena inside the BWR fuel assembly have been the sub-channel codes. However, the resolution of these codes is still too coarse for analyzing the detailed intra-assembly flow patterns, such as flow around a spacer element. Recent progress in Computational Fluid Dynamics (CFD), coupled with the rapidly increasing computational power of massively parallel computers, shows promising potential for the fine-mesh, detailed simulation of fuel assembly two-phase flow phenomena. However, the phenomenological models available in the commercial CFD programs are not as advanced as those currently being used in the sub-channel codes used in the nuclear industry. In particular, there are no models currently available which are able to reliably predict the nature of the flow regimes, and use the appropriate sub-models for those flow regimes. The CFD-BWR code is being developed as a customized module built on the foundation of the commercial CFD Code STAR-CD which provides general two-phase flow modeling capabilities. The paper describes the model development strategy which has been adopted by the development team for the

  17. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    John, T.M.

    2000-01-01

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed with computer codes developed in-house or with codes obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes in the DSN approximation. All these codes use the finite difference approximation to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc., performed with codes developed in-house); transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); the cross-section data library used is CV2M (25 group)

  18. M4D: a powerful tool for structured programming at assembly level for MODCOMP computers

    International Nuclear Information System (INIS)

    Shah, R.R.; Basso, R.A.J.

    1984-04-01

    Structured programming techniques offer numerous benefits for software designers and form the basis of the current high-level languages. However, these techniques are generally not available to assembly programmers. The M4D package was therefore developed for a large project to enable the use of structured programming constructs such as DO.WHILE-ENDDO and IF-ORIF-ORIF...-ELSE-ENDIF in the assembly code for MODCOMP computers. Programs can thus be produced that have clear semantics and are considerably easier to read than normal assembly code, resulting in reduced program development and testing effort, and in improved long-term maintainability of the code. This paper describes the M4D structured programming tool as implemented for MODCOMP's MAX III and MAX IV assemblers, and illustrates the use of the facility with a number of examples

  19. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is discussed. One of the unique features of the benchmark exercise is the possibility to upload results from participants at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)
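
    The quoted history counts follow from the usual 1/sqrt(N) behaviour of Monte Carlo statistical uncertainty; a minimal sketch of that extrapolation is given below, with placeholder baseline values rather than actual benchmark results.

        def histories_for_target(n_ref, sigma_ref, sigma_target):
            """Extrapolate the number of histories needed for a target relative standard
            deviation, assuming the uncertainty scales as 1/sqrt(N)."""
            return n_ref * (sigma_ref / sigma_target) ** 2

        # Placeholder baseline: 1e9 histories giving a 10% relative standard deviation
        # in a small fuel zone; a 1% target then needs ~1e11 (100 billion) histories.
        n_needed = histories_for_target(n_ref=1e9, sigma_ref=0.10, sigma_target=0.01)

        # Rough wall-clock estimate at an assumed tracking rate (placeholder numbers).
        cores, histories_per_second_per_core = 10_000, 5_000
        hours = n_needed / (cores * histories_per_second_per_core) / 3600
        print(f"{n_needed:.1e} histories, ~{hours:.1f} h on {cores} cores")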

  20. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Martin, William R.; Petrovic, Bojan

    2011-01-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is discussed. One of the unique features of the benchmark exercise is the possibility to upload results from participants at a web site of the NEA Data Bank, which enables online analysis of results and graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)

  1. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinski, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
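
    Pynamic itself generates and links large numbers of C extension libraries; the pure-Python sketch below only mimics the underlying idea, generating many trivial modules and timing their import, to illustrate the kind of dynamic-loading stress the benchmark emulates.

        import importlib
        import os
        import sys
        import tempfile
        import time

        # Generate many trivial modules in a temporary directory (pure-Python analogue;
        # the real Pynamic benchmark builds and links shared C extension libraries).
        workdir = tempfile.mkdtemp()
        sys.path.insert(0, workdir)
        n_modules = 500
        for i in range(n_modules):
            with open(os.path.join(workdir, f"mod_{i}.py"), "w") as f:
                f.write(f"def f():\n    return {i}\n")

        start = time.perf_counter()
        modules = [importlib.import_module(f"mod_{i}") for i in range(n_modules)]
        print(f"imported {len(modules)} modules in {time.perf_counter() - start:.2f} s")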

  2. A computational technique to identify the optimal stiffness matrix for a discrete nuclear fuel assembly model

    International Nuclear Information System (INIS)

    Park, Nam-Gyu; Kim, Kyoung-Joo; Kim, Kyoung-Hong; Suh, Jung-Min

    2013-01-01

    Highlights: ► An identification method of the optimal stiffness matrix for a fuel assembly structure is discussed. ► The least squares optimization method is introduced, and a closed form solution of the problem is derived. ► The method can be expanded to the system with the limited number of modes. ► Identification error due to the perturbed mode shape matrix is analyzed. ► Verification examples show that the proposed procedure leads to a reliable solution. -- Abstract: A reactor core structural model which is used to evaluate the structural integrity of the core contains nuclear fuel assembly models. Since the reactor core consists of many nuclear fuel assemblies, the use of a refined fuel assembly model leads to a considerable amount of computing time for performing nonlinear analyses such as the prediction of seismic induced vibration behaviors. The computational time could be reduced by replacing the detailed fuel assembly model with a simplified model that has fewer degrees of freedom, but the dynamic characteristics of the detailed model must be maintained in the simplified model. Such a model based on an optimal design method is proposed in this paper. That is, when a mass matrix and a mode shape matrix are given, the optimal stiffness matrix of a discrete fuel assembly model can be estimated by applying the least squares minimization method. The verification of the method is completed by comparing test results and simulation results. This paper shows that the simplified model's dynamic behaviors are quite similar to experimental results and that the suggested method is suitable for identifying reliable mathematical model for fuel assemblies
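
    The abstract describes fitting a stiffness matrix, by least squares, to a given mass matrix and mode shape matrix. A minimal NumPy sketch of one such fit is shown below; it uses the eigenvalue relation K·Phi = M·Phi·Lambda with Lambda = diag(omega^2) and a pseudo-inverse, which is a generic formulation rather than the paper's particular closed-form solution.

        import numpy as np

        def identify_stiffness(M, Phi, omega):
            """Least-squares estimate of a stiffness matrix K from the mass matrix M,
            the mode shape matrix Phi (one column per retained mode) and the natural
            frequencies omega [rad/s], via K @ Phi = M @ Phi @ diag(omega**2)."""
            Lam = np.diag(np.asarray(omega, dtype=float) ** 2)
            K = M @ Phi @ Lam @ np.linalg.pinv(Phi)   # minimum-norm least-squares fit
            return 0.5 * (K + K.T)                    # symmetrize (modelling assumption)

        # Toy 3-DOF example with two retained modes (illustrative numbers only)
        M = np.diag([2.0, 2.0, 1.0])
        Phi = np.array([[0.3, 0.7],
                        [0.6, 0.1],
                        [0.9, -0.6]])
        omega = [12.0, 35.0]
        print(identify_stiffness(M, Phi, omega))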

  3. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT, however empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
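
    The reported near-linear relation suggests a simple planning rule of the form "projections ≈ c × gradient sparsity". The sketch below computes the gradient sparsity of a 2-D image and applies such a rule; the constant c is purely illustrative and is not a value from the paper.

        import numpy as np

        def gradient_sparsity(image, threshold=1e-6):
            """Number of pixels with non-negligible gradient magnitude (forward differences)."""
            gx = np.diff(image, axis=0, append=image[-1:, :])
            gy = np.diff(image, axis=1, append=image[:, -1:])
            return int(np.count_nonzero(np.hypot(gx, gy) > threshold))

        def projections_needed(image, c=0.1):
            """Hypothetical linear planning rule: projections ~ c * gradient sparsity."""
            return int(np.ceil(c * gradient_sparsity(image)))

        # A piecewise-constant phantom (a disc) only has gradient-active pixels on its edge.
        x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
        phantom = (x**2 + y**2 < 0.4).astype(float)
        print(gradient_sparsity(phantom), projections_needed(phantom))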

  4. Benchmark calculations for VENUS-2 MOX -fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As a part of a Nuclear Energy Agency (NEA) Project, it was pursued the benchmark for dosimetry calculation of the VENUS-2 MOX-fueled reactor. In this benchmark, the goal is to test the current state-of-the-art computational methods of calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fuelled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes which are the reaction rates divided by the U 235 fission spectrum averaged cross-section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core-mid plane at specific positions outside the core of the VENUS-2 MOX-fuelled reactor. This is a follow-up exercise to the previously completed UO 2 -fuelled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics and this is the main interest of the current benchmark compared to the previous ones
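
    The equivalent fission flux defined above is simply the dosimeter reaction rate divided by its 235U-fission-spectrum-averaged cross section; the sketch below evaluates that ratio and a C/E value, with placeholder numbers rather than VENUS-2 data.

        def equivalent_fission_flux(reaction_rate, sigma_fission_spectrum_avg):
            """Equivalent fission flux = reaction rate per target atom [1/s] divided by the
            U-235 fission-spectrum-averaged dosimeter cross section [cm^2]."""
            return reaction_rate / sigma_fission_spectrum_avg

        # Placeholder values (illustrative only, not VENUS-2 measurements)
        measured_rate = 3.2e-17          # reactions per atom per second
        sigma_avg = 7.0e-26              # cm^2 (70 mb)
        phi_eq_measured = equivalent_fission_flux(measured_rate, sigma_avg)
        phi_eq_calculated = 4.3e8        # from a hypothetical transport calculation
        print(f"C/E = {phi_eq_calculated / phi_eq_measured:.2f}")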

  5. DNA Self-Assembly and Computation Studied with a Coarse-grained Dynamic Bonded Model

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Fellermann, Harold; Rasmussen, Steen

    2012-01-01

    We utilize a coarse-grained directional dynamic bonding DNA model [C. Svaneborg, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005)] to study DNA self-assembly and DNA computation. In our DNA model, a single nucleotide is represented by a single interaction site, and complementary sites can...

  6. Energy Efficiency Evaluation and Benchmarking of AFRL’s Condor High Performance Computer

    Science.gov (United States)

    2011-08-01

    PlayStation 3 nodes executing the HPL benchmark. When idle, the two PS3s consume 188.49 W on average. At peak HPL performance, the nodes draw an average of ... the High Performance LINPACK (HPL) benchmark while also measuring the energy consumed to achieve such performance. Supercomputers are ranked by ...

  7. Effects of uncertainties of experimental data in the benchmarking of a computer code

    International Nuclear Information System (INIS)

    Meulemeester, E. de; Bouffioux, P.; Demeester, J.

    1980-01-01

    Fuel rod performance modelling is sometimes approached in an academic way. The experience of the COMETHE code development since 1967 has clearly shown that benchmarking was the most important part of modelling development. Unfortunately, it requires well characterized data. Although the two examples presented here were not intended for benchmarking, as the COMETHE calculations were only performed for an interpretation of the results, they illustrate the effects of a lack of fuel characterization and of power history uncertainties

  8. International Handbook of Evaluated Criticality Safety Benchmark Experiments - ICSBEP (DVD), Version 2013

    International Nuclear Information System (INIS)

    2013-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each and 200 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2)- Metal Sphere experiments referred to as ORSphere that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970's. A photograph of this assembly is shown on the front cover

  9. A Solar Powered Wireless Computer Mouse: Design, Assembly and Preliminary Testing of 15 Prototypes

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.; Reich, N.H.; Alsema, E.A.; Netten, M.P.; Veefkind, M.; Silvester, S.; Elzen, B.; Verwaal, M.

    2007-01-01

    The concept and design of a solar powered wireless computer mouse has been completed, and 15 prototypes have been successfully assembled. After necessary cutting, the crystalline silicon cells show satisfactory efficiency: up to 14% when implemented into the mouse device. The implemented voltage

  10. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods, which require long execution times and large amounts of memory on a single workstation, limit their use on these massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for the Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence of edge merging, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs of high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset using only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high-quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.

  11. A thermo mechanical benchmark calculation of a hexagonal can in the BTI accident with INCA code

    International Nuclear Information System (INIS)

    Zucchini, A.

    1988-01-01

    The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code and the results are systematically compared with those of ADINA

  12. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average 235U enrichment of 12 atom % and a depleted uranium reflector

    International Nuclear Information System (INIS)

    Lell, R.M.; McKnight, R.D.; Tsiboulia, A.; Rozhikhin, Y.

    2010-01-01

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235 U or 239 Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235 U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark

  13. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  14. Uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. Final report, February 16, 1990--December 31, 1994

    International Nuclear Information System (INIS)

    Busch, R.D.

    1995-01-01

    Dr. Robert Busch of the Department of Chemical and Nuclear Engineering was the principal investigator on this project, with technical direction provided by the staff of the Nuclear Criticality Safety Group at Los Alamos. During the period of the contract, he had a number of graduate and undergraduate students working on subtasks. The objective of this work was to develop information on uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. During the first year of this project, most of the work was focused on setting up the SUN SPARC-1 Workstation and acquiring the literature describing the critical experiments. By August 1990, the Workstation was operational with the current version of TWODANT loaded on the system. The MCNP version 4 tape was made available from Los Alamos late in 1990. Various documents were acquired that provide the initial descriptions of the critical experiments under consideration as benchmarks. The next four years were spent working on various benchmark projects. A number of publications and presentations were made on this material. These are briefly discussed in this report

  15. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.

  16. Present status and extensions of the Monte Carlo performance benchmark

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.

    2013-01-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed. (authors)

  17. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question, in a total of 1728 benchmarking experiments, we rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
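
    For illustration, forward selection wrapped around a random forest can be expressed with scikit-learn as sketched below; the synthetic dataset and parameter choices are placeholders and do not reproduce the benchmarked pipeline.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.feature_selection import SequentialFeatureSelector

        # Synthetic stand-in for a QSAR dataset: 200 compounds described by 50 descriptors.
        X, y = make_regression(n_samples=200, n_features=50, n_informative=10, random_state=0)

        rf = RandomForestRegressor(n_estimators=100, random_state=0)
        selector = SequentialFeatureSelector(rf, n_features_to_select=20,
                                             direction="forward", cv=3, n_jobs=-1)
        selector.fit(X, y)
        print("retained descriptors:", np.flatnonzero(selector.get_support()))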

  18. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  19. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  20. AutoAssemblyD: a graphical user interface system for several genome assemblers.

    Science.gov (United States)

    Veras, Adonney Allan de Oliveira; de Sá, Pablo Henrique Caracciolo Gomes; Azevedo, Vasco; Silva, Artur; Ramos, Rommel Thiago Jucá

    2013-01-01

    Next-generation sequencing technologies have increased the amount of biological data generated. Thus, bioinformatics has become important because new methods and algorithms are necessary to manipulate and process such data. However, certain challenges have emerged, such as genome assembly using short reads and high-throughput platforms. In this context, several algorithms have been developed, such as Velvet, Abyss, Euler-SR, Mira, Edna, Maq, SHRiMP, Newbler, ALLPATHS, Bowtie and BWA. However, most such assemblers do not have a graphical interface, which makes their use difficult for users without computing experience given the complexity of the assembler syntax. Thus, to make the operation of such assemblers accessible to users without a computing background, we developed AutoAssemblyD, which is a graphical tool for genome assembly submission and remote management by multiple assemblers through XML templates. AutoAssemblyD is freely available at https://sourceforge.net/projects/autoassemblyd. It requires Sun JDK 6 or higher.

  1. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average 235U enrichment of 12 atom % and a depleted uranium reflector.

    Energy Technology Data Exchange (ETDEWEB)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.; Rozhikhin, Y.; National Security; Inst. of Physics and Power Engineering

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation

  2. Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    Gallmeier, F.X.; Glasgow, D.C.; Jerde, E.A.; Johnson, J.O.; Yugo, J.J.

    1999-01-01

    The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements

  3. Verification of the depletion capabilities of the MCNPX code on a LWR MOX fuel assembly

    International Nuclear Information System (INIS)

    Cerba, S.; Hrncir, M.; Necas, V.

    2012-01-01

    The study deals with the verification of the depletion capabilities of the MCNPX code, which is a linked Monte Carlo depletion code. For this purpose, the IV-B phase of the OECD NEA Burnup Credit benchmark has been chosen. The mentioned benchmark is a code-to-code comparison of the multiplication coefficient keff and the isotopic composition of a LWR MOX fuel assembly at three given burnup levels and after five years of cooling. The benchmark consists of 6 cases, 2 different Pu vectors and 3 geometry models; however, in this study only the fuel assembly calculations with two Pu vectors were performed. The aim of this study was to compare the obtained results with data from the participants of the OECD NEA Burnup Credit project and confirm the burnup capability of the MCNPX code. (Authors)

  4. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using least squares adjustment technique. The network computations were tied to two fix primary reference pillars situated outside the campus. The two-tail Chi-square ...

  5. SN computational benchmark solutions for slab geometry models of a gas-cooled fast reactor (GCFR) lattice cell

    International Nuclear Information System (INIS)

    McCoy, D.R.

    1981-01-01

    SN computational benchmark solutions are generated for a one-group and multigroup fuel-void slab lattice cell which is a rough model of a gas-cooled fast reactor (GCFR) lattice cell. The reactivity induced by the extrusion of the fuel material into the voided region is determined for a series of partially extruded lattice cell configurations. A special modified Gauss SN ordinate array design is developed in order to obtain eigenvalues with errors less than 0.03% in all of the configurations that are considered. The modified Gauss SN ordinate array design has a substantially improved eigenvalue angular convergence behavior when compared to existing SN ordinate array designs used in neutron streaming applications. The angular refinement computations are performed in some cases by using a perturbation theory method which enables one to obtain high order SN eigenvalue estimates for greatly reduced computational costs

  6. Benchmarking of the computer code and the thirty foot side drop analysis for the Shippingport (RPV/NST package)

    International Nuclear Information System (INIS)

    Bumpus, S.E.; Gerhard, M.A.; Hovingh, J.; Trummer, D.J.; Witte, M.C.

    1989-01-01

    This paper presents the benchmarking of a finite element computer code and the subsequent results from the code simulating the 30 foot side drop impact of the RPV/NST transport package from the decommissioned Shippingport Nuclear Power Station. The activated reactor pressure vessel (RPV), thermal shield, and other reactor external components were encased in concrete contained by the neutron shield tank (NST) and a lifting skirt. The Shippingport RPV/NST package, a Type B Category II package, weighs approximately 900 tons and has a diameter of 17.5 ft and a length of 40.7 ft. For transport of the activated components from Shippingport to the burial site, the Safety Analysis Report for Packaging (SARP) demonstrated that the package can withstand the hypothetical accidents of DOE Order 5480.3 including 10 CFR 71. Mathematical simulations of these accidents can substitute for actual tests if the simulated results satisfy the acceptance criteria. Any such mathematical simulation, including the modeling of the materials, must be benchmarked to experiments that duplicate the loading conditions of the tests. Additional confidence in the simulations is justified if the test specimens are configured similarly to the package

  7. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes

  8. Proposal of a benchmark for core burnup calculations for a VVER-1000 reactor core

    International Nuclear Information System (INIS)

    Loetsch, T.; Khalimonchuk, V.; Kuchin, A.

    2009-01-01

    In the framework of a project supported by the German BMU the code DYN3D should be further validated and verified. During the work a lack of a benchmark on core burnup calculations for VVER-1000 reactors was noticed. Such a benchmark is useful for validating and verifying the whole package of codes and data libraries for reactor physics calculations including fuel assembly modelling, fuel assembly data preparation, few group data parametrisation and reactor core modelling. The benchmark proposed specifies the core loading patterns of burnup cycles for a VVER-1000 reactor core as well as a set of operational data such as load follow, boron concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions. The reactor core characteristics chosen for comparison and the first results obtained during the work with the reactor physics code DYN3D are presented. This work presents the continuation of efforts of the projects mentioned to estimate the accuracy of calculated characteristics of VVER-1000 reactor cores. In addition, the codes used for reactor physics calculations of safety related reactor core characteristics should be validated and verified for the cases in which they are to be used. This is significant for safety related evaluations and assessments carried out in the framework of licensing and supervision procedures in the field of reactor physics. (authors)

  9. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux are included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations with different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results from the IDT and SNT solvers agreed well with the reference results from a Monte Carlo code. In addition, model effects such as the quadrature set effect, SN order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
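
    The abstract notes that CUDM is an object-oriented data model implemented in Python that holds 3-D XYZ geometry together with macroscopic cross sections and converts them into solver-specific input. The actual CUDM classes are not reproduced here; the following is only a minimal sketch of that idea, with hypothetical class and writer names.

      from dataclasses import dataclass, field
      import numpy as np

      @dataclass
      class CoreModelXYZ:
          """Hypothetical common data model for a 3-D XYZ core benchmark:
          mesh edges per axis, an integer material map and per-material macroscopic XS."""
          x_edges: np.ndarray
          y_edges: np.ndarray
          z_edges: np.ndarray
          material_map: np.ndarray                      # shape (nx, ny, nz), material IDs
          macro_xs: dict = field(default_factory=dict)  # material ID -> {reaction: group-wise XS}

          def to_solver_input(self, writer):
              """Delegate formatting to a solver-specific writer (one writer per code)."""
              return writer.write(self)

      class SimpleTextWriter:
          """Stand-in for a solver-specific converter (one such writer per transport solver)."""
          def write(self, model):
              nx, ny, nz = model.material_map.shape
              lines = [f"MESH {nx} {ny} {nz}"]
              for mat_id, xs in sorted(model.macro_xs.items()):
                  lines.append(f"MAT {mat_id} " + " ".join(f"{k}={v}" for k, v in xs.items()))
              return "\n".join(lines)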

  10. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion code NODAL3 for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of the simulation (5 s). The sensitivity analysis results showed that the radial node size and maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For productive PWR transient analysis work, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  11. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  12. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
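
    The STREAM results referred to above report sustained memory bandwidth measured with simple vector kernels such as the triad. The snippet below is only a rough NumPy analogue of the triad kernel, not the official C/Fortran benchmark; the array size and repetition count are illustrative assumptions.

      import time
      import numpy as np

      N = 20_000_000                     # ~480 MB working set for three float64 arrays (illustrative)
      REPS = 10
      a = np.zeros(N)
      b = np.random.rand(N)
      c = np.random.rand(N)
      scalar = 3.0

      best = float("inf")
      for _ in range(REPS):
          t0 = time.perf_counter()
          a[:] = b + scalar * c          # STREAM "triad" kernel: a = b + s*c
          best = min(best, time.perf_counter() - t0)

      # STREAM convention counts three arrays; NumPy temporaries mean the hardware
      # actually moves somewhat more data, so this figure is a conservative estimate.
      bytes_moved = 3 * N * 8            # read b, read c, write a (float64)
      print(f"best triad bandwidth: {bytes_moved / best / 1e9:.2f} GB/s")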

  13. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  14. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  15. A Modeling of BWR-MOX assemblies based on the characteristics method combined with advanced self-shielding models

    International Nuclear Information System (INIS)

    Le Tellier, R.; Hebert, A.; Le Tellier, R.; Santamarina, A.; Litaize, O.

    2008-01-01

    Calculations based on the characteristics method and different self-shielding models are presented for 9 x 9 boiling water reactor (BWR) assemblies fully loaded with mixed-oxide (MOX) fuel. The geometry of these assemblies was recovered from the BASALA experimental program. We have focused our study on three configurations simulating the different voiding conditions that an assembly can undergo in a BWR pressure vessel. A parametric study was carried out with respect to the spatial discretization, the tracking parameters, and the anisotropy order. Comparisons with Monte Carlo calculations in terms of k-eff, radiative capture, and fission rates were performed to validate the computational tools. The results show good agreement between the stochastic and deterministic approaches. The mutual self-shielding model recently introduced within the framework of the Ribon extended self-shielding method appears to be useful for this type of assembly. Indeed, in the calculation of these MOX benchmarks, the overlapping of resonances, especially between 238U and 240Pu, plays an important role due to the spectral hardening of the flux as the voiding percentage is increased. The method of characteristics is shown to be adequate to perform accurate calculations handling a fine spatial discretization. (authors)

  16. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    International Nuclear Information System (INIS)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-01-01

    An NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program, will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  17. MetaQUAST: evaluation of metagenome assemblies.

    Science.gov (United States)

    Mikheenko, Alla; Saveliev, Vladislav; Gurevich, Alexey

    2016-04-01

    During the past years we have witnessed the rapid development of new metagenome assembly methods. Although there are many benchmarking utilities designed for single-genome assemblies, there is no well-recognized evaluation and comparison tool for their metagenomic analogues. In this article, we present MetaQUAST, a modification of QUAST, the state-of-the-art tool for genome assembly evaluation based on alignment of contigs to a reference. MetaQUAST addresses such metagenome dataset features as (i) unknown species content, by detecting and downloading reference sequences, (ii) huge diversity, by giving comprehensive reports for multiple genomes, and (iii) the presence of closely related species, by detecting chimeric contigs. We demonstrate MetaQUAST performance by comparing several leading assemblers on one simulated and two real datasets. Availability: http://bioinf.spbau.ru/metaquast. Contact: aleksey.gurevich@spbu.ru. Supplementary data are available at Bioinformatics online.
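
    Tools such as MetaQUAST report standard contiguity statistics, of which N50 is the most common. The function below is a minimal, tool-independent sketch of how N50 can be computed from a list of contig lengths; it is not MetaQUAST code.

      def n50(contig_lengths):
          """Largest length L such that contigs of length >= L cover at least
          half of the total assembly length."""
          lengths = sorted(contig_lengths, reverse=True)
          total = sum(lengths)
          covered = 0
          for length in lengths:
              covered += length
              if 2 * covered >= total:
                  return length
          return 0

      print(n50([100, 400, 250, 50, 200]))   # -> 250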

  18. Installation and testing of the ERANOS computer code for fast reactor calculations

    International Nuclear Information System (INIS)

    Gren, Milan

    2010-12-01

    The French ERANOS computer code was acquired and tested by solving benchmark problems. Five problems were calculated: a 1D XZ model, a 1D RZ model, the 3D HEX SNR 300 reactor, a 2S HEX model, and the 3D HEX VVER 440 reactor. The multi-group diffusion approximation was used. The multiplication factors were compared for the first problem, the neutron flux density at the calculation points was obtained for the second problem, and the powers in the various reactor regions and in the assemblies were calculated for the remaining problems. (P.A.)

  19. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of the tool itself or of the specific conceptual model can increase rapidly. Therefore, numerical verification of this type of model is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3, Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues, excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different sub-problems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of these types of models for different

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of the published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  1. Uranium-fuel thermal reactor benchmark testing of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2001-01-01

    CENDL-3, the new version of the China Evaluated Nuclear Data Library, has recently been processed and distributed for thermal reactor benchmark analysis. The processing was carried out using the NJOY nuclear data processing system. The calculations and analyses of the uranium-fuel thermal assemblies TRX-1,2, BAPL-1,2,3 and ZEEP-1,2,3 were done with the lattice code WIMSD5A. The results were compared with the experimental results, the results of the '1986' WIMS library and the results based on ENDF/B-VI. (author)

  2. Cross-section sensitivity and uncertainty analysis of the FNG copper benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I., E-mail: ivan.kodeli@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kondo, K. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany); Japan Atomic Energy Agency, Rokkasho-mura (Japan); Perel, R.L. [Racah Institute of Physics, Hebrew University of Jerusalem, IL-91904 Jerusalem (Israel); Fischer, U. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany)

    2016-11-01

    A neutronics benchmark experiment on a copper assembly was performed from late 2014 to early 2015 at the 14-MeV Frascati neutron generator (FNG) of ENEA Frascati, with the objective of providing the experimental database required for the validation of the copper nuclear data relevant for ITER design calculations, including the related uncertainties. The paper presents the pre- and post-analysis of the experiment performed using cross-section sensitivity and uncertainty codes, both deterministic (SUSD3D) and Monte Carlo (MCSEN5). Cumulative reaction rates and neutron flux spectra, their sensitivity to the cross sections, as well as the corresponding uncertainties were estimated for different selected detector positions up to ∼58 cm in the copper assembly. This permitted, in the pre-analysis phase, to optimize the geometry, the detector positions and the choice of activation reactions, and, in the post-analysis phase, to interpret the results of the measurements and the calculations, to conclude on the quality of the relevant nuclear cross-section data, and to estimate the uncertainties in the calculated nuclear responses and fluxes. Large uncertainties in the calculated reaction rates and neutron spectra of up to 50%, rarely observed at this level in benchmark analyses using today's nuclear data, were predicted, particularly for fast reactions. Observed C/E (dis)agreements with values as low as 0.5 partly confirm these predictions. Benchmark results are therefore expected to contribute to the improvement of both cross-section and covariance data evaluations.
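
    The sensitivity and uncertainty propagation performed by codes such as SUSD3D is based on the standard first-order "sandwich" rule. In generic notation (not taken from the paper), for a response R, a relative sensitivity profile S and a relative cross-section covariance matrix C:

      \left(\frac{\Delta R}{R}\right)^{2} \simeq S^{\mathsf{T}} C\, S,
      \qquad
      S_{i} = \frac{\sigma_{i}}{R}\,\frac{\partial R}{\partial \sigma_{i}}

    Large predicted uncertainties of the kind quoted above arise when the sensitivity profile overlaps energy regions where the covariance data are large.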

  3. Angular interpolations and splice options for three-dimensional transport computations

    International Nuclear Information System (INIS)

    Abu-Shumays, I.K.; Yehnert, C.E.

    1996-01-01

    New, accurate and mathematically rigorous angular interpolation strategies are presented. These strategies preserve flow and directionality separately over each octant of the unit sphere, and are based on a combination of spherical harmonics expansions and least squares algorithms. Details of a three-dimensional to three-dimensional (3-D to 3-D) splice method which utilizes the new angular interpolations are summarized. The method has been implemented in a multidimensional discrete ordinates transport computer program. Various features of the splice option are illustrated by several applications to a benchmark Dog-Legged Void Neutron (DLVN) streaming and transport experimental assembly.

  4. OECD/NRC Benchmark Based on NUPEC PWR Sub-channel and Bundle Test (PSBT). Volume I: Experimental Database and Final Problem Specifications

    International Nuclear Information System (INIS)

    Rubin, A.; Schoedel, A.; Avramova, M.; Utsuno, H.; Bajorek, S.; Velazquez-Lozada, A.

    2012-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for the thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan, which includes sub-channel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurised Water Reactor (PWR) fuel assembly. Part of this database has been made available for this international benchmark activity entitled 'NUPEC PWR Sub-channel and Bundle Tests (PSBT) benchmark'. This international project has been officially approved by the Japanese Ministry of Economy, Trade, and Industry (METI), the US Nuclear Regulatory Commission (NRC) and endorsed by the OECD/NEA. The benchmark team has been organised based on the collaboration between Japan and the USA. A large number of international experts have agreed to participate in this programme. The fine-mesh high-quality sub-channel void fraction and departure from nucleate boiling data encourages advancement in understanding and modelling complex flow behaviour in real bundles. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' analytical models on the prediction of detailed void distributions and DNB. The development of truly mechanistic models for DNB prediction is currently underway. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data and the digitised computer graphic images are the

  5. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), termed in Indonesian "holistic quality management", because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  6. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  7. Dynamics of nuclear fuel assemblies in vertical flow channels: computer modelling and associated studies

    International Nuclear Information System (INIS)

    Mason, V.A.; Pettigrew, M.J.; Lelli, G.; Kates, L.; Reimer, E.

    1978-10-01

    A computer model, designed to predict the dynamic behaviour of nuclear fuel assemblies in axial flow, is described in this report. The numerical methods used to construct and solve the matrix equations of motion in the model are discussed together with an outline of the method used to interpret the fuel assembly stability data. The mathematics developed for forced response calculations are described in detail. Certain structural and hydrodynamic modelling parameters must be determined by experiment. These parameters are identified and the methods used for their evaluation are briefly described. Examples of typical applications of the dynamic model are presented towards the end of the report. (author)

  8. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  9. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  10. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous model and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation used a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEC code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison

  11. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    Science.gov (United States)

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies, a method of assessing statistical methods using real-world datasets, might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  12. A computational investigation on the connection between dynamics properties of ribosomal proteins and ribosome assembly.

    Directory of Open Access Journals (Sweden)

    Brittany Burton

    Assembly of the ribosome from its protein and RNA constituents has been studied extensively over the past 50 years, and experimental evidence suggests that prokaryotic ribosomal proteins undergo conformational changes during assembly. However, to date, no studies have attempted to elucidate these conformational changes. The present work utilizes computational methods to analyze protein dynamics and to investigate the linkage between dynamics and binding of these proteins during the assembly of the ribosome. Ribosomal proteins are known to be positively charged and we find the percentage of positive residues in r-proteins to be about twice that of the average protein: Lys+Arg is 18.7% for E. coli and 21.2% for T. thermophilus. Also, positive residues constitute a large proportion of RNA-contacting residues: 39% for E. coli and 46% for T. thermophilus. This affirms the known importance of charge-charge interactions in the assembly of the ribosome. We studied the dynamics of three primary proteins from the E. coli and T. thermophilus 30S subunits that bind early in the assembly (S15, S17, and S20) with atomistic molecular dynamics simulations, followed by a study of all r-proteins using elastic network models. Molecular dynamics simulations show that solvent-exposed proteins (S15 and S17) tend to adopt more stable solution conformations than an RNA-embedded protein (S20). We also find that protein residues that contact the 16S rRNA are generally more mobile than the other residues. This is because a larger proportion of the contacting residues are located in flexible loop regions. By the use of elastic network models, which are computationally more efficient, we show that this trend holds for most of the 30S r-proteins.
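
    The charge statistics quoted above (fraction of Lys+Arg residues) are straightforward to reproduce for any sequence. A minimal sketch follows, using a made-up sequence rather than an actual r-protein.

      def positive_fraction(sequence):
          """Fraction of lysine (K) and arginine (R) residues in a protein sequence."""
          sequence = sequence.upper()
          return (sequence.count("K") + sequence.count("R")) / len(sequence)

      # Illustrative only -- not an actual ribosomal protein sequence.
      print(f"{positive_fraction('MKRLSSAKRIVKAR') * 100:.1f}% Lys+Arg")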

  13. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages, R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  14. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k-eff for 68 fast critical assemblies of 233,235U and 239Pu with reflectors of various materials and thicknesses
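
    Integral benchmark comparisons of this kind are usually summarized as calculated-to-experimental (C/E) ratios of k-eff per assembly and per library. The snippet below is only a generic sketch of that bookkeeping; the assembly and library names and all numbers are placeholders, not results from the paper.

      import numpy as np

      # Placeholder values: measured k-eff and calculated k-eff per data library.
      experiment = {"assembly-1": 1.0000, "assembly-2": 1.0000}
      calculated = {
          "library-A": {"assembly-1": 0.9982, "assembly-2": 1.0031},
          "library-B": {"assembly-1": 1.0008, "assembly-2": 0.9975},
      }

      for library, values in calculated.items():
          ce = np.array([values[name] / experiment[name] for name in experiment])
          print(f"{library}: mean C/E = {ce.mean():.4f}, spread = {ce.std(ddof=1):.4f}")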

  15. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  16. Single molecule sequencing-guided scaffolding and correction of draft assemblies.

    Science.gov (United States)

    Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J

    2017-12-06

    Although single molecule sequencing is still improving, the lengths of the generated sequences are inevitably an advantage in genome assembly. Prior work that utilizes long reads to conduct genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it by a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.

  17. Benchmarking comparison and validation of MCNP photon interaction data

    Directory of Open Access Journals (Sweden)

    Colling Bethany

    2017-01-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  18. Benchmarking comparison and validation of MCNP photon interaction data

    Science.gov (United States)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating was compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
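
    Agreement statements such as "within 1σ" above amount to checking whether the calculated-minus-experimental difference is covered by the quoted uncertainty. A minimal sketch follows; combining the calculational and experimental uncertainties in quadrature is one common convention assumed here, not necessarily the papers' exact criterion.

      def within_one_sigma(calc, calc_sigma, exp, exp_sigma):
          """True if |C - E| lies within the combined one-sigma uncertainty."""
          combined = (calc_sigma**2 + exp_sigma**2) ** 0.5
          return abs(calc - exp) <= combined

      # Placeholder numbers, not values from the benchmark.
      print(within_one_sigma(calc=1.02e-5, calc_sigma=3.0e-7, exp=1.00e-5, exp_sigma=2.5e-7))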

  19. A BENCHMARK PROGRAM FOR EVALUATION OF METHODS FOR COMPUTING SEISMIC RESPONSE OF COUPLED BUILDING-PIPING/EQUIPMENT WITH NON-CLASSICAL DAMPING

    International Nuclear Information System (INIS)

    Xu, J.; Degrassi, G.; Chokshi, N.

    2001-01-01

    Under the auspices of the US Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were analyzed for a suite of earthquakes by program participants applying their uniquely developed methods and computer programs. This paper presents the results of their analyses and their comparison to the benchmark solutions generated by BNL using time-domain direct integration methods. The participants' analysis results established using complex modal time history methods showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method, in general, produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems
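
    The BNL reference solutions mentioned above were generated with time-domain direct integration, which handles non-classical (non-proportional) damping without modal approximations. The sketch below shows a constant-average-acceleration Newmark scheme applied to a toy two-degree-of-freedom building/equipment model; the matrices, excitation and parameter values are illustrative assumptions, not the benchmark models.

      import numpy as np

      def newmark(M, C, K, f, dt, beta=0.25, gamma=0.5):
          """Constant-average-acceleration Newmark integration of M u'' + C u' + K u = f(t).
          f is an (nsteps, ndof) array of external force samples at interval dt."""
          n, ndof = f.shape
          u = np.zeros((n, ndof)); v = np.zeros((n, ndof)); a = np.zeros((n, ndof))
          a[0] = np.linalg.solve(M, f[0] - C @ v[0] - K @ u[0])
          K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
          for i in range(n - 1):
              rhs = (f[i + 1]
                     + M @ (u[i] / (beta * dt**2) + v[i] / (beta * dt) + (0.5 / beta - 1.0) * a[i])
                     + C @ (gamma / (beta * dt) * u[i] + (gamma / beta - 1.0) * v[i]
                            + dt * (gamma / (2 * beta) - 1.0) * a[i]))
              u[i + 1] = np.linalg.solve(K_eff, rhs)
              a[i + 1] = (u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i]
              v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
          return u, v, a

      # Toy 2-DOF system with non-proportional damping (C is not a combination of M and K).
      M = np.diag([1000.0, 10.0])
      K = np.array([[1.2e6, -2.0e5], [-2.0e5, 2.0e5]])
      C = np.array([[800.0, -20.0], [-20.0, 60.0]])
      dt, nsteps = 0.002, 5000
      t = np.arange(nsteps) * dt
      f = np.zeros((nsteps, 2))
      f[:, 0] = 1.0e4 * np.sin(2 * np.pi * 2.0 * t)   # illustrative excitation on the building DOF
      u, v, a = newmark(M, C, K, f, dt)
      print("peak equipment displacement:", np.abs(u[:, 1]).max())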

  20. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  1. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    International Nuclear Information System (INIS)

    Zwermann, Winfried; Pautz, Andreas; Timm, Wolf

    2010-01-01

    For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but also additional quantities like reactivity differences or pin-wise fission rate distributions have been assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution with up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code which is available from the current SCALE-6 version. (orig.)

  2. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles, an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  3. Piping benchmark problems for the Westinghouse AP600 Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1997-01-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the Westinghouse AP600 Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the AP600 standard design. It will be required that the combined license licensees demonstrate that their solutions to these problems are in agreement with the benchmark problem set

  4. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors; Analisis comparativo de resultados entre CASMO, MCNP y SERPENT para una suite de problemas Benchmark en reactores BWR

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    In this paper, the suite of benchmark problems for BWR-type reactors is analyzed and the results obtained with CASMO-4, MCNP6 and the Serpent code are compared. The benchmark problem consists of two different geometries: a BWR fuel pin cell and a BWR fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence, the reactivity of selected nuclides, etc. With respect to the fuel assembly, the presented results concern the infinite multiplication factor for different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended high burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in modelling different operating conditions, as would also apply to other BWR assemblies. The results will lie within a range with some uncertainty, regardless of the code that is used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, due to the potential of this code over other commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate the continuation of these studies towards the generation of the cross sections (XS) of a core, so that in a next step a corresponding nuclear data library can be constructed and used by the codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  5. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative method of validation is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.
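
    Depletion solvers of the ORIGEN type integrate the Bateman equations, dN/dt = A N, where the matrix A collects flux-weighted reaction rates and decay constants. The sketch below is only a toy two-nuclide illustration of that idea using a matrix exponential; the cross section, flux and decay constant are made-up values, not MPACT or ORIGEN data.

      import numpy as np
      from scipy.linalg import expm

      phi = 3.0e14          # neutron flux, n/cm^2/s (illustrative)
      sigma_c = 2.7e-24     # capture cross section of the parent, cm^2 (illustrative)
      lam = 1.0e-9          # decay constant of the daughter, 1/s (illustrative)

      # Burnup matrix for a parent -> daughter chain: capture depletes the parent and
      # feeds the daughter, which is removed by its own decay.
      A = np.array([[-phi * sigma_c, 0.0],
                    [ phi * sigma_c, -lam]])

      N0 = np.array([1.0e22, 0.0])          # initial nuclide densities, atoms/cm^3
      N = expm(A * 30 * 86400.0) @ N0       # densities after 30 days of irradiation
      print(N)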

  6. Experimental Benchmarking of Fire Modeling Simulations. Final Report

    International Nuclear Information System (INIS)

    Greiner, Miles; Lopez, Carlos

    2003-01-01

    A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses

  7. Criticality benchmark comparisons leading to cross-section upgrades

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.

    1993-01-01

    For several years, criticality benchmark calculations have been performed with COG. COG is a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-sectional data, since no physics computational approximations were used. The objective of this paper is to report on COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated

  8. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

    LWR core transient (LWRCT) benchmarks, based on well defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal - hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though there remain some unresolved issues. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  9. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, these correlations can arise when different cases of a benchmark experiment share the same components, like fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k-eff). The analysis presented here is based on this proposal. (orig.)
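
    The mechanism described above, in which a tolerance shared between two benchmark cases induces a correlation between their calculated multiplication factors, can be illustrated with Monte Carlo sampling of a toy surrogate. The linear "k-eff" models and their coefficients below are purely illustrative stand-ins for what a transport code and its sensitivities would provide.

      import numpy as np

      rng = np.random.default_rng(42)
      n_samples = 10_000

      # One shared tolerance (e.g. pellet diameter of a common fuel pin batch) ...
      shared = rng.normal(0.0, 1.0, n_samples)
      # ... and one independent tolerance per case (e.g. separate moderator densities).
      indep_a = rng.normal(0.0, 1.0, n_samples)
      indep_b = rng.normal(0.0, 1.0, n_samples)

      # Toy linear surrogates for k-eff of two benchmark cases.
      keff_a = 1.000 + 0.003 * shared + 0.002 * indep_a
      keff_b = 0.998 + 0.003 * shared + 0.002 * indep_b

      corr = np.corrcoef(keff_a, keff_b)[0, 1]
      print(f"sampled correlation between the two cases: {corr:.2f}")   # about 0.69 here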

  10. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Bess, John D.

    2011-01-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. The details of an ICSBEP evaluation are presented.

  11. Benchmark Tests on the New IBM RISC System/6000 590 Workstation

    Directory of Open Access Journals (Sweden)

    Harvey J. Wasserman

    1995-01-01

    The results of benchmark tests on the superscalar IBM RISC System/6000 Model 590 are presented. A set of well-characterized Fortran benchmarks spanning a range of computational characteristics was used for the study. The data from the 590 system are compared with those from a single-processor CRAY C90 system as well as with other microprocessor-based systems, such as the Digital Equipment Corporation AXP 3000/500X and the Hewlett-Packard HP/735.

  12. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should build confidence in computer codes that are expected to provide a realistic prediction of LMFBR component behaviour. The calculations started with static analysis of typical structures made of nonlinear materials stressed by cyclic loads. Fluid-structure interaction analysis is also being considered. The reasons for and details of the different benchmark calculations are described, the results obtained are commented upon, and future computational exercises are indicated

  13. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  14. A priori modeling of chemical reactions on computational grid platforms: Workflows and data models

    International Nuclear Information System (INIS)

    Rampino, S.; Monari, A.; Rossi, E.; Evangelisti, S.; Laganà, A.

    2012-01-01

    Graphical abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS assembled on the European Grid allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Highlights: ► The grid-based GEMS simulator accurately models small chemical systems. ► The Q5Cost and D5Cost file formats provide interoperability in the workflow. ► Benchmark runs on H + H2 highlight the Grid empowering. ► The calculated k(T) values for O + O2 and N + N2 fall within the error bars of the experiment. - Abstract: The quantum framework of the Grid Empowered Molecular Simulator GEMS has been assembled on the segment of the European Grid devoted to the Computational Chemistry Virtual Organization. The related grid-based workflow allows the ab initio evaluation of the dynamics of small systems starting from the calculation of the electronic properties. Interoperability between computational codes across the different stages of the workflow was made possible by the use of the common data formats Q5Cost and D5Cost. Illustrative benchmark runs have been performed on the prototype H + H2, N + N2 and O + O2 gas-phase exchange reactions, and thermal rate coefficients have been calculated for the last two. Results are discussed in terms of the modeling of the interaction, and the advantages of using the Grid are highlighted.

  15. Computational and theoretical modeling of pH and flow effects on the early-stage non-equilibrium self-assembly of optoelectronic peptides

    Science.gov (United States)

    Mansbach, Rachael; Ferguson, Andrew

    Self-assembling π-conjugated peptides are attractive candidates for the fabrication of bioelectronic materials possessing optoelectronic properties due to electron delocalization over the conjugated peptide groups. We present a computational and theoretical study of an experimentally-realized optoelectronic peptide that displays triggerable assembly at low pH to resolve the microscopic effects of flow and pH on the non-equilibrium morphology and kinetics of assembly. Using a combination of molecular dynamics simulations and hydrodynamic modeling, we quantify the time and length scales at which the convective flows employed in directed assembly compete with microscopic diffusion to influence assembly. We also show that there is a critical pH below which aggregation proceeds irreversibly, and quantify the relationship between pH, charge density, and aggregate size. Our work provides new fundamental understanding of pH and flow effects on non-equilibrium π-conjugated peptide assembly, and lays the groundwork for the rational manipulation of environmental conditions and peptide chemistry to control assembly and the attendant emergent optoelectronic properties. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award # DE-SC0011847, and by the Computational Science and Engineering Fellowship from the University of Illinois at Urbana-Champaign.

  16. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented techniques of design and programming. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photoelectric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on the electron history are grouped together using the continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented
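
    As a hedged illustration of one ingredient of such a photon transport scheme (not PTSIM's actual implementation), the sketch below samples the interaction channel of a photon in proportion to hypothetical partial cross sections.

      # Illustrative only: a Monte Carlo photon step typically selects the
      # interaction channel with probability proportional to the partial cross
      # sections at the current energy. All values below are placeholders.
      import random

      def sample_interaction(partial_xs, rng=random):
          """Return one channel name, chosen in proportion to its cross section."""
          total = sum(partial_xs.values())
          u = rng.random() * total
          running = 0.0
          for channel, xs in partial_xs.items():
              running += xs
              if u <= running:
                  return channel
          return channel  # numerical safety net

      # Hypothetical partial cross sections (barns) for one energy and material
      xs = {"compton": 0.35, "photoelectric": 0.05, "pair_production": 0.02, "rayleigh": 0.03}
      counts = {c: 0 for c in xs}
      for _ in range(100000):
          counts[sample_interaction(xs)] += 1
      print(counts)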

  17. Piping benchmark problems for the ABB/CE System 80+ Standardized Plant

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1994-07-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the ABB/Combustion Engineering System 80+ Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods being proposed for analysis of the System 80+ standard design. Combined license licensees will be required to demonstrate that their solutions to these problems are in agreement with the benchmark problem set. The first System 80+ piping benchmark is a uniform support motion response spectrum solution for one section of the feedwater piping subjected to safe shutdown seismic loads. The second System 80+ piping benchmark is a time history solution for the feedwater piping subjected to the transient loading induced by a water hammer. The third System 80+ piping benchmark is a time history solution of the pressurizer surge line subjected to the accelerations induced by a main steam line pipe break. The System 80+ reactor is an advanced PWR type

  18. Benchmarking state-of-the-art optical simulation methods for analyzing large nanophotonic structures

    DEFF Research Database (Denmark)

    Gregersen, Niels; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2018-01-01

    Five computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. Careful convergence studies reveal that some methods are more suitable than others for analyzing these cavities.

  19. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  20. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of detail of the process or plant, i.e., 1) plant level; 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy from the LBNL website. It is expected that the use of BEST-Dairy tool will advance understanding of energy and

  1. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    Shielding benchmark experiments are very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. Therefore, it is important to select and determine effective experimental configurations by precalculation before the experiment. The authors performed three types of pre-evaluation to determine the experimental assembly configurations of the shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows: dose and nuclear heating were calculated both with and without void(s), and the minimum size of the void was determined so that the ratio of these two results is larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb, W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up material, material compositions of the SCM mock-up were surveyed so as to have nuclear heating properties similar to the real SCM composition.

  2. Calculation of Savannah River K Reactor Mark-22 assembly LOCA/ECS power limits

    International Nuclear Information System (INIS)

    Fischer, S.R.; Farman, R.F.; Birdsell, S.A.

    1992-01-01

    This paper summarizes the results of TRAC-PF1/MOD3 calculations of Mark-22 fuel assembly loss-of-coolant accident/emergency cooling system (LOCA/ECS) power limits for the Savannah River Site (SRS) K Reactor. This effort was part of a larger effort undertaken by Los Alamos National Laboratory for the US Department of Energy to perform confirmatory power limits calculations for the SRS K Reactor. A method using a detailed three-dimensional (3D) TRAC model of the Mark-22 fuel assembly was developed to compute LOCA/ECS power limits. Assembly power was limited to ensure that no point on the fuel assembly walls would exceed the local saturation temperature. The detailed TRAC model of the Mark-22 assembly consisted of three concentric 3D vessel components that simulated the two targets, two fuel tubes, and three main flow channels of the fuel assembly. The model included 100% eccentricity between the assembly annuli and a 20% power tilt. Eccentricity in the radial alignment of the assembly annuli arises because axial spacer ribs that run the length of the fuel and targets are used. Wall-shear, interfacial-shear, and wall heat-transfer correlations were developed and implemented in TRAC-PF1/MOD3 specifically for modeling flow and heat transfer in the narrow ribbed annuli encountered in the Mark-22 fuel assembly design. We established the validity of these new constitutive models using separate-effects benchmarks. TRAC system calculations of the K Reactor indicated that the limiting ECS-phase accident is a double-ended guillotine break in a process water line at the pump discharge (i.e., a PDLOCA). The fuel assembly with the minimum cooling potential was identified from this system calculation. Detailed assembly calculations were then performed using appropriate boundary conditions obtained from this limiting system LOCA: coolant flow rates and pressure boundary conditions were taken from the system calculation and applied to the detailed assembly model.

  3. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting the k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented.

  4. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  5. Calculation of the local power peaking near WWER-440 control assemblies with Hf plates

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Maraszy, Cs.; Temesvari, E.

    2003-01-01

    The original coupler design of the WWER-440 assemblies had the following well-known deficiency: the relatively large amount of water in the coupler between the absorber and fuel port of the control assembly can cause undesirably sharp power peaking in the fuel rods next to the coupler. The power peaking can be especially high after control rod withdrawal, when the coupler reaches the low-burnup region of the adjacent assembly. The modernized coupler design overcomes the original problem by applying a thin Hf plate in the critical region. The very complicated structure of the coupler requires verification of the core design methods by high-precision 3D Monte Carlo calculations. The paper presents an MCNP reference calculation for the control rod coupler benchmark with Hf absorber plates. The benchmark solution with the KARATE-440 code system is also presented. The need for treating the Hf burnout in the reflector region is investigated (Authors)

  6. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to covariance (ANCOVA) statistical analysis. Results indicate that the SE-plus-diagram…

  7. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  8. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  9. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archaeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
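
    The 1-NN assignment rule discussed above can be illustrated with a toy sketch; a real pipeline would use BLAST searches against a curated reference database, whereas the identity score, sequences and taxon labels below are invented for illustration only.

      # Toy sketch of the 1-nearest-neighbor idea: assign to a query the taxon of
      # its most similar reference sequence. A naive identity score over short,
      # made-up sequences stands in for the BLAST step of a real pipeline.
      def identity(a, b):
          matches = sum(1 for x, y in zip(a, b) if x == y)
          return matches / max(len(a), len(b))

      def one_nearest_neighbor(query, references):
          """references: list of (sequence, taxon) pairs; returns (best_taxon, score)."""
          best_taxon, best_score = None, -1.0
          for seq, taxon in references:
              score = identity(query, seq)
              if score > best_score:
                  best_taxon, best_score = taxon, score
          return best_taxon, best_score

      refs = [("ACGTACGTAC", "Genus_A"), ("ACGTTCGTAC", "Genus_B"), ("TTTTACGTAC", "Genus_C")]
      print(one_nearest_neighbor("ACGTACGAAC", refs))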

  10. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-09-01

    Background: The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. Main message: In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. Conclusion: We suggest that benchmark studies - a method of assessment of statistical methods using real-world datasets - might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  11. Benchmarking and monitoring framework for interconnected file synchronization and sharing services

    DEFF Research Database (Denmark)

    Mrówczyński, Piotr; Mościcki, Jakub T.; Lamanna, Massimo

    2018-01-01

    computing and storage infrastructure in the research labs. In this work we present a benchmarking and monitoring framework for file synchronization and sharing services. It allows service providers to monitor the operational status of their services, understand the service behavior under different load types and with different network locations of the synchronization clients. The framework is designed as a monitoring and benchmarking tool to provide performance and robustness metrics for interconnected file synchronization and sharing services such as Open Cloud Mesh.

  12. COXPRO-II: a computer program for calculating radiation and conduction heat transfer in irradiated fuel assemblies

    International Nuclear Information System (INIS)

    Rhodes, C.A.

    1984-12-01

    This report describes the computer program COXPRO-II, which was written for performing thermal analyses of irradiated fuel assemblies in a gaseous environment with no forced cooling. The heat transfer modes within the fuel pin bundle are radiation exchange among fuel pin surfaces and conduction by the stagnant gas. The array of parallel cylindrical fuel pins may be enclosed by a metal wrapper or shroud. Heat is dissipated from the outer surface of the fuel pin assembly by radiation and convection. Both equilateral triangle and square fuel pin arrays can be analyzed. Steady-state and unsteady-state conditions are included. Temperatures predicted by the COXPRO-II code have been validated by comparing them with experimental measurements. Temperature predictions compare favorably to temperature measurements in pressurized water reactor (PWR) and liquid-metal fast breeder reactor (LMFBR) simulated, electrically heated fuel assemblies. Also, temperature comparisons are made on an actual irradiated Fast-Flux Test Facility (FFTF) LMFBR fuel assembly

  13. Fission blanket benchmark experiment on spherical assembly of uranium and PE with PE reflector

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Tonghua; Lu, Xinxin; Wang, Mei; Han, Zijie, E-mail: neutron_integral@aliyun.com; Jiang, Li; Wen, Zhongwei; Liu, Rong

    2016-04-15

    Highlights: • The fission rate distribution in two depleted uranium assemblies was measured with plate fission chambers. • Calculations were performed using the MCNP code and the ENDF/B-V.0 library. • The calculations were found to overestimate the measured fission rates. • The observed discrepancies are discussed. - Abstract: A new concept of a fusion-fission hybrid for energy generation has been proposed. To validate the nuclear performance of the fission blanket of the hybrid, as part of a series of validation experiments, two types of fission blanket assemblies were set up in this work, and measurements were made of the reaction rate distribution for uranium fission in the spherical assembly of depleted uranium and polyethylene using Plate Fission Chambers (PFC). Two PFCs were used in the experiment: a depleted uranium chamber and an enriched uranium chamber. The Monte Carlo transport code MCNP5 and the continuous-energy cross-section library ENDF/B-V.0 were used for the analysis of the fission rate distribution in the two types of assemblies. The calculated results were compared with the experimental ones. An overestimation of the fission rate for depleted uranium and enriched uranium was found at the inner boundary of the two assemblies. However, the C/E ratio tends to decrease slightly with distance from the core, and the results for enriched uranium are better than those for depleted uranium.

  14. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
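
    The byte/flop argument can be made concrete with a small roofline-style calculation; the peak rate, bandwidth and kernel intensities below are hypothetical numbers chosen only to reproduce a machine balance of 0.2 byte/flop, not measurements from the study.

      # Back-of-the-envelope sketch of the byte/flop reasoning above: a kernel is
      # compute bound when its operational intensity (flop/byte) exceeds the
      # machine balance, i.e. the inverse of the machine's byte/flop ratio.
      def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flop_per_byte):
          """Simple roofline model: min(peak, bandwidth * intensity)."""
          return min(peak_gflops, bandwidth_gbs * intensity_flop_per_byte)

      # Hypothetical machine with a byte/flop ratio of 0.2 (5 flop/byte balance)
      peak, bw = 1000.0, 200.0           # GFLOP/s, GB/s -> bw/peak = 0.2 byte/flop
      for name, intensity in [("stencil-like kernel", 0.5), ("FMM particle-particle kernel", 100.0)]:
          perf = attainable_gflops(peak, bw, intensity)
          bound = "compute" if perf >= peak else "memory"
          print(f"{name}: ~{perf:.0f} GFLOP/s attainable ({bound} bound)")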

  15. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F_N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to handle cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed. An improved scheme to evaluate certain integrals relevant to the method is presented, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method is given. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  16. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    2006-01-01

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number

  17. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicists with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.
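
    The energy-condensation step referred to above (collapsing a fine-group library to fewer groups by flux weighting) can be sketched as follows; the 8-group cross sections, fluxes and group mapping are invented numbers, not data from the benchmark.

      # Hedged illustration of energy condensation: a fine-group cross-section set
      # is collapsed by flux weighting, sigma_G = sum_g(sigma_g*phi_g)/sum_g(phi_g)
      # over the fine groups g belonging to broad group G. Numbers are made up.
      import numpy as np

      sigma_8 = np.array([1.2, 1.5, 2.0, 2.6, 3.5, 5.0, 8.0, 12.0])   # barns (hypothetical)
      phi_8   = np.array([0.5, 0.9, 1.3, 1.6, 1.2, 0.8, 0.5, 0.2])    # relative flux (hypothetical)
      collapse_map = [range(0, 4), range(4, 8)]                        # 8 fine groups -> 2 broad groups

      sigma_2 = [float(np.sum(sigma_8[list(g)] * phi_8[list(g)]) / np.sum(phi_8[list(g)]))
                 for g in collapse_map]
      print("collapsed 2-group cross sections:", [round(s, 3) for s in sigma_2])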

  18. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III), which was held in Tokyo, Japan, in October 1981. This report summarizes the results of the test as well as the work done in preparation for the test. (author)

  19. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States); Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.

    2017-09-26

    De novo whole-genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and the communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
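
    A toy, single-process sketch of the k-mer counting step that underlies de Bruijn graph assemblers in the Meraculous/HipMer family is given below; the reads are invented, and the real pipeline distributes this hash table across thousands of nodes.

      # Toy sketch of k-mer counting, the first stage of a de Bruijn graph
      # assembler; in HipMer this hash table is a distributed data structure,
      # while here everything runs in a single process on made-up reads.
      from collections import Counter

      def count_kmers(reads, k):
          counts = Counter()
          for read in reads:
              for i in range(len(read) - k + 1):
                  counts[read[i:i + k]] += 1
          return counts

      reads = ["ACGTACGTGG", "CGTACGTGGA", "GTACGTGGAT"]   # hypothetical error-free reads
      kmers = count_kmers(reads, k=5)
      # k-mers seen more than once are the "solid" ones an assembler would keep
      print({kmer: n for kmer, n in kmers.items() if n > 1})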

  20. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

    Callahan, G.D.; DeVries, K.L.

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems

  1. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  2. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  3. A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking

    Science.gov (United States)

    Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes

    We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
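
    One secure multi-party building block of the kind such a platform could plug in is additive secret sharing; the sketch below is an illustration of the principle, not the protocol proposed in the paper, and lets a peer group compute its mean KPI without any party revealing its own value.

      # Illustrative additive secret sharing (not the paper's actual protocol):
      # each participant splits its KPI into random shares modulo a large prime,
      # so only the sum over the whole peer group can be reconstructed.
      import random

      MOD = 2**61 - 1

      def make_shares(value, n_parties):
          shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
          shares.append((value - sum(shares)) % MOD)
          return shares

      kpis = [120, 95, 143]                       # hypothetical KPI values of three organisations
      n = len(kpis)
      all_shares = [make_shares(v, n) for v in kpis]
      # party j aggregates the j-th share of every participant
      partial_sums = [sum(all_shares[i][j] for i in range(n)) % MOD for j in range(n)]
      total = sum(partial_sums) % MOD
      print("benchmark mean KPI:", total / n)     # individual KPIs stay hidden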

  4. Piping benchmark problems for the General Electric Advanced Boiling Water Reactor

    International Nuclear Information System (INIS)

    Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.

    1993-08-01

    To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for an advanced boiling water reactor standard design, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads with solutions developed using the methods being proposed for analysis for the advanced reactor standard design. It will be required that the combined license holders demonstrate that their solutions to these problems are in agreement with the benchmark problem set

  5. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  6. HyspIRI Low Latency Concept and Benchmarks

    Science.gov (United States)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  7. Gas cooled fast reactor benchmarks for JNC and Cea neutronic tools assessment

    International Nuclear Information System (INIS)

    Rimpault, G.; Sugino, K.; Hayashi, H.

    2005-01-01

    In order to verify the adequacy of JNC and Cea computational tools for the definition of GCFR (gas cooled fast reactor) core characteristics, GCFR neutronic benchmarks have been performed. The benchmarks have been carried out on two different cores: 1) a conventional Gas-Cooled fast Reactor (EGCR) core with pin-type fuel, and 2) an innovative He-cooled Coated-Particle Fuel (CPF) core. Core characteristics being studied include: -) Criticality (Effective multiplication factor or K-effective), -) Instantaneous breeding gain (BG), -) Core Doppler effect, and -) Coolant depressurization reactivity. K-effective and coolant depressurization reactivity at EOEC (End Of Equilibrium Cycle) state were calculated since these values are the most critical characteristics in the core design. In order to check the influence due to the difference of depletion calculation systems, a simple depletion calculation benchmark was performed. Values such as: -) burnup reactivity loss, -) mass balance of heavy metals and fission products (FP) were calculated. Results of the core design characteristics calculated by both JNC and Cea sides agree quite satisfactorily in terms of core conceptual design study. Potential features for improving the GCFR computational tools have been discovered during the course of this benchmark such as the way to calculate accurately the breeding gain. Different ways to improve the accuracy of the calculations have also been identified. In particular, investigation on nuclear data for steel is important for EGCR and for lumped fission products in both cores. The outcome of this benchmark is already satisfactory and will help to design more precisely GCFR cores. (authors)

  8. How to benchmark methods for structure-based virtual screening of large compound libraries.

    Science.gov (United States)

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
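
    The enrichment analysis mentioned above is often summarized by an enrichment factor; the sketch below computes it from a ranked list of annotated ligands and decoys, with the ranking invented purely for illustration.

      # Minimal sketch of enrichment-factor analysis: given a docking-score ranking
      # of annotated ligands mixed with decoys, EF at X% is the ligand recovery rate
      # in the top X% of the database divided by the overall ligand fraction.
      def enrichment_factor(ranked_labels, fraction):
          """ranked_labels: list of 1 (ligand) / 0 (decoy), best score first."""
          n_top = max(1, int(len(ranked_labels) * fraction))
          hits_top = sum(ranked_labels[:n_top])
          hits_total = sum(ranked_labels)
          return (hits_top / n_top) / (hits_total / len(ranked_labels))

      # Hypothetical ranking: 5 annotated ligands hidden among 95 decoys
      ranking = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0] + [0] * 89 + [1]
      print("EF@10%:", round(enrichment_factor(ranking, 0.10), 2))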

  9. Development of ORIGEN libraries for mixed oxide (MOX) fuel assembly designs

    International Nuclear Information System (INIS)

    Mertyurek, Ugur; Gauld, Ian C.

    2016-01-01

    Highlights: • ORIGEN MOX library generation process is described. • SCALE burnup calculations are validated against measured MOX fuel samples from the MALIBU program. • ORIGEN MOX libraries are verified using the OECD Phase IV-B benchmark. • There is good agreement for calculated-to-measured isotopic distributions. - Abstract: ORIGEN cross section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict nuclide inventories, radiation sources and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. The nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.

  10. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, representing significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets represent the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of selected benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next

  11. MC21 Monte Carlo analysis of the Hoogenboom-Martin full-core PWR benchmark problem - 301

    International Nuclear Information System (INIS)

    Kelly, D.J.; Sutton, Th.M.; Trumbull, T.H.; Dobreff, P.S.

    2010-01-01

    At the 2009 American Nuclear Society Mathematics and Computation conference, Hoogenboom and Martin proposed a full-core PWR model to monitor the improvement of Monte Carlo codes in computing detailed power density distributions. This paper describes the application of the MC21 Monte Carlo code to the analysis of this benchmark model. With the MC21 code, we obtained detailed power distributions over the entire core. The model consisted of 214 assemblies, each made up of a 17x17 array of pins. Each pin was subdivided into 100 axial nodes, thus resulting in over seven million tally regions. Various cases were run to assess the statistical convergence of the model. These included runs of 10 billion and 40 billion neutron histories, as well as ten independent runs of 4 billion neutron histories each. The 40 billion neutron-history calculation resulted in 43% of all regions having a 95% confidence interval of 2% or less, implying a relative standard deviation of 1%. Furthermore, 99.7% of regions having a relative power density of 1.0 or greater have a similar confidence level. We present timing results that assess the MC21 performance relative to the number of tallies requested. Source convergence was monitored by analyzing plots of the Shannon entropy and eigenvalue versus active cycle. We also obtained an estimate of the dominance ratio. Additionally, we performed an analysis of the error in an attempt to ascertain the validity of the confidence intervals predicted by MC21. Finally, we look forward to the prospect of full-core 3-D Monte Carlo depletion by scoping out the required problem size. This study provides an initial data point for the Hoogenboom-Martin benchmark model using a state-of-the-art Monte Carlo code. (authors)
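
    The Shannon-entropy diagnostic mentioned above can be sketched as follows; the mesh, binning and "source sites" are invented stand-ins for the fission-source coordinates a Monte Carlo code would record each cycle.

      # Sketch of the Shannon-entropy source-convergence diagnostic: fission-source
      # sites are binned on a spatial mesh and H = -sum_i p_i*log2(p_i) is plotted
      # versus cycle; H levelling off indicates a converged source. Data are made up.
      import numpy as np

      def shannon_entropy(counts):
          p = counts[counts > 0] / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(0)
      mesh_bins = 50
      for cycle in (1, 10, 50):
          # hypothetical source distribution that tightens as cycles progress
          sites = rng.normal(loc=0.0, scale=1.0 / cycle**0.25, size=10000)
          counts, _ = np.histogram(sites, bins=mesh_bins, range=(-3, 3))
          print(f"cycle {cycle:3d}: H = {shannon_entropy(counts):.3f}")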

  12. Benchmarks of Global Clean Energy Manufacturing: Summary of Findings

    Energy Technology Data Exchange (ETDEWEB)

    2017-01-01

    The Benchmarks of Global Clean Energy Manufacturing will help policymakers and industry gain deeper understanding of global manufacturing of clean energy technologies. Increased knowledge of the product supply chains can inform decisions related to manufacturing facilities for extracting and processing raw materials, making the array of required subcomponents, and assembling and shipping the final product. This brochure summarizes key findings from the analysis and includes important figures from the report. The report was prepared by Clean Energy Manufacturing Analysis Center (CEMAC) analysts at the U.S. Department of Energy's National Renewable Energy Laboratory.

  13. EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2

    Energy Technology Data Exchange (ETDEWEB)

    Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies

    2006-01-15

    The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive amount of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in {sup 129}I, {sup 237}Np and {sup 243}Am samples and of fission reaction rates in {sup 235}U, {sup 237}Np and {sup 243}Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied for the calculations.

  14. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences [in lieu of American National Standards Institute (ANSI) standards], and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  15. Benchmarking of the FENDL-3 Neutron Cross-Section Data Library for Fusion Applications

    International Nuclear Information System (INIS)

    Fischer, U.; Kondo, K.; Angelone, M.; Batistoni, P.; Villari, R.; Bohm, T.; Sawan, M.; Walker, B.; Konno, C.

    2014-03-01

    This report summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) with the objective to test and qualify the neutron-induced general-purpose FENDL-3.0 data library for fusion applications. The benchmark approach consisted of two major steps: the analysis of a simple ITER-like computational benchmark, and a series of analyses of benchmark experiments conducted previously at the 14 MeV neutron generator facilities at ENEA Frascati, Italy (FNG) and JAEA, Tokai-mura, Japan (FNS). The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses analysed. There is a slight trend, however, for an increase of the fast neutron flux in the shielding experiment and a decrease in the breeder mock-up experiments. The photon flux spectra measured in the bulk shield and the tungsten experiments are significantly better reproduced with FENDL-3.0 data. In general, FENDL-3, as compared to FENDL-2.1, shows an improved performance for fusion neutronics applications. It is thus recommended that ITER replace FENDL-2.1 with FENDL-3.0 as the reference data library for neutronics calculations. (author)

  16. Benchmark calculations in multigroup and multidimensional time-dependent transport

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Musso, E.; Ravetto, P.; Sumini, M.

    1990-01-01

    It is widely recognized that reliable benchmarks are essential in many technical fields in order to assess the response of any approximation to the physics of the problem to be treated and to verify the performance of the numerical methods used. The best possible benchmarks are analytical solutions to paradigmatic problems where no approximations are actually introduced and the only error encountered is connected to the limitations of computational algorithms. Another major advantage of analytical solutions is that they allow a deeper understanding of the physical features of the model, which is essential for the intelligent use of complicated codes. In neutron transport theory, the need for benchmarks is particularly great. In this paper, the authors propose to establish accurate numerical solutions to some problems concerning the migration of neutron pulses. Use will be made of the space asymptotic theory, coupled with a Laplace transformation inverted by a numerical technique directly evaluating the inversion integral
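
    As a rough illustration of the kind of numerical inversion mentioned above, the sketch below evaluates the Bromwich inversion integral with a plain trapezoidal rule for a transform whose inverse is known in closed form. It is a generic example under assumed parameters (contour abscissa, frequency cutoff), not the authors' space-asymptotic formulation.

```python
import numpy as np

def bromwich_invert(F, t, a=0.1, w_max=1000.0, n=200001):
    """Evaluate the Laplace inversion (Bromwich) integral
        f(t) = (e^{a t}/pi) * int_0^inf Re[F(a + i w) exp(i w t)] dw
    with a plain trapezoidal rule.  Illustrative accuracy only."""
    w = np.linspace(0.0, w_max, n)
    g = np.real(F(a + 1j * w) * np.exp(1j * w * t))
    dw = w[1] - w[0]
    integral = dw * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])
    return np.exp(a * t) / np.pi * integral

# Test transform with a known inverse: F(s) = 1/(s+1)^2  <->  f(t) = t*exp(-t)
F = lambda s: 1.0 / (s + 1.0) ** 2
for t in (0.5, 1.0, 2.0):
    print(t, bromwich_invert(F, t), t * np.exp(-t))
```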

  17. Testing New Programming Paradigms with NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed to scale up to thousands of processors on both distributed and shared memory systems. Developing parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage

  18. OECD/NRC BWR Turbine Trip Transient Benchmark as a Basis for Comprehensive Qualification and Studying Best-Estimate Coupled Codes

    International Nuclear Information System (INIS)

    Ivanov, Kostadin; Olson, Andy; Sartori, Enrico

    2004-01-01

    An Organisation for Economic Co-operation and Development (OECD)/U.S. Nuclear Regulatory Commission (NRC)-sponsored coupled-code benchmark has been initiated for a boiling water reactor (BWR) turbine trip (TT) transient. Turbine trip transients in a BWR are pressurization events in which the coupling between core space-dependent neutronic phenomena and system dynamics plays an important role. In addition, the available real plant experimental data make this benchmark problem very valuable. Over the course of defining and coordinating the BWR TT benchmark, a systematic approach has been established to validate best-estimate coupled codes. This approach employs a multilevel methodology that not only allows for a consistent and comprehensive validation process but also contributes to the study of different numerical and computational aspects of coupled best-estimate simulations. This paper provides an overview of the OECD/NRC BWR TT benchmark activities with emphasis on the discussion of the numerical and computational aspects of the benchmark

  19. Self-Assembly of Infinite Structures

    Directory of Open Access Journals (Sweden)

    Scott M. Summers

    2009-06-01

    We review some recent results related to the self-assembly of infinite structures in the Tile Assembly Model. These results include impossibility results, as well as novel tile assembly systems in which shapes and patterns that represent various notions of computation self-assemble. Several open questions are also presented and motivated.

  20. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  1. Helium generation reaction rates for 6Li and 10B in benchmark facilities

    International Nuclear Information System (INIS)

    Farrar, Harry IV; Oliver, B.M.; Lippincott, E.P.

    1980-01-01

    The helium generation rates for 10B and 6Li have been measured in two benchmark reactor facilities having neutron spectra similar to those found in a breeder reactor. The irradiations took place in the Coupled Fast Reactivity Measurements Facility (CFRMF) and in the 10% enriched 235U critical assembly, BIG-10. The helium reaction rates were obtained by precise high-sensitivity gas mass spectrometric analyses of the helium content of numerous small samples. Comparison of these reaction rates with other reaction rates measured in the same facilities, and with rates calculated from published cross sections and from best estimates of the neutron spectral shapes, indicates significant discrepancies in the calculated values. Additional irradiations in other benchmark facilities have been undertaken to better determine the energy ranges where the discrepancies lie
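
    For readers unfamiliar with how a spectrum-averaged reaction rate is formed from group data, the following sketch folds made-up group fluxes with made-up (n,alpha) cross sections. The numbers are purely illustrative and are not the CFRMF or BIG-10 data.

```python
import numpy as np

# Illustrative only: a coarse 3-group spectrum and made-up group-averaged
# (n,alpha) cross sections; real benchmark evaluations use fine-group data.
phi = np.array([1.0e14, 5.0e13, 1.0e13])     # group fluxes, n/cm^2/s
sigma_b = np.array([1.5, 30.0, 400.0])       # sigma(n,alpha), barns
sigma_cm2 = sigma_b * 1.0e-24                # convert barns -> cm^2

# Helium generation (reaction) rate per target atom, reactions/s
rate = np.sum(sigma_cm2 * phi)
print(f"helium generation rate per atom: {rate:.3e} 1/s")
```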

  2. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency....

  3. HELIOS2: Benchmarking against experiments for hexagonal and square lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general, two-dimensional geometry bounded by a polygon of straight lines. The transport solver may be chosen between the method of collision probabilities (CP) and the method of characteristics (MoC). The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, MoC, helps in areas where the computational requirements of CP become too large for practical application. The application of HELIOS2 and the method of characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons to measured data from the Hungarian ZR-6 reactor and the JAERI Tank-type Critical Assembly (TCA) to verify and validate HELIOS2 and MoC for WWER assembly imitators; configurations with different absorber types - ZrB2, B4C, Eu2O3 and Gd2O3; and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. With account taken of the uncertainties, the results are generally excellent. Particular attention is given in this paper to the effect of the iron radial reflector. Comparisons to measurements from TIC and TCA for stainless-steel- and iron-reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (author)

  4. Computational fluid dynamics (CFD) round robin benchmark for a pressurized water reactor (PWR) rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Shin K., E-mail: paengki1@tamu.edu; Hassan, Yassin A.

    2016-05-15

    Highlights: • The capabilities of steady RANS models were directly assessed for full axial scale experiment. • The importance of mesh and conjugate heat transfer was reaffirmed. • The rod inner-surface temperature was directly compared. • The steady RANS calculations showed a limitation in the prediction of circumferential distribution of the rod surface temperature. - Abstract: This study examined the capabilities and limitations of steady Reynolds-Averaged Navier–Stokes (RANS) approach for pressurized water reactor (PWR) rod bundle problems, based on the round robin benchmark of computational fluid dynamics (CFD) codes against the NESTOR experiment for a 5 × 5 rod bundle with typical split-type mixing vane grids (MVGs). The round robin exercise against the high-fidelity, broad-range (covering multi-spans and entire lateral domain) NESTOR experimental data for both the flow field and the rod temperatures enabled us to obtain important insights into CFD prediction and validation for the split-type MVG PWR rod bundle problem. It was found that the steady RANS turbulence models with wall function could reasonably predict two key variables for a rod bundle problem – grid span pressure loss and the rod surface temperature – once mesh (type, resolution, and configuration) was suitable and conjugate heat transfer was properly considered. However, they over-predicted the magnitude of the circumferential variation of the rod surface temperature and could not capture its peak azimuthal locations for a central rod in the wake of the MVG. These discrepancies in the rod surface temperature were probably because the steady RANS approach could not capture unsteady, large-scale cross-flow fluctuations and qualitative cross-flow pattern change due to the laterally confined test section. Based on this benchmarking study, lessons and recommendations about experimental methods as well as CFD methods were also provided for the future research.

  5. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of the flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been implemented in RMC to simulate the particle transport in the dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for HTGR fuel element, and the method of combination of adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.
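
    The idea of combining adjacent burnup regions can be illustrated with a volume-weighted merge of nuclide number densities, as in the sketch below. This is a generic illustration of the bookkeeping only, not the RMC implementation.

```python
import numpy as np

def combine_regions(volumes, number_densities, group_size=2):
    """Merge adjacent burnup regions in groups of `group_size`, preserving
    the total nuclide inventory by volume-weighting the number densities.
    `number_densities` has shape (n_regions, n_nuclides)."""
    volumes = np.asarray(volumes, float)
    nd = np.asarray(number_densities, float)
    merged_v, merged_nd = [], []
    for i in range(0, len(volumes), group_size):
        v = volumes[i:i + group_size]
        n = nd[i:i + group_size]
        merged_v.append(v.sum())
        merged_nd.append((n * v[:, None]).sum(axis=0) / v.sum())
    return np.array(merged_v), np.array(merged_nd)

# Four small regions with two nuclides each, merged pairwise
v, n = combine_regions([1.0, 1.0, 2.0, 2.0],
                       [[0.10, 0.01], [0.12, 0.02],
                        [0.08, 0.03], [0.06, 0.04]])
print(v)   # merged volumes: [2. 4.]
print(n)   # volume-weighted densities of the merged regions
```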

  6. CFD validation in OECD/NEA t-junction benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E. (Mathematics and Computer Science); (Cambridge Univ.); (Moscow Institute of Nuclear Energy Safety)

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental

  7. The Benchmark Test Results of QNX RTOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    A real-time operating system (RTOS) is an operating system (OS) intended for real-time applications. A benchmark is a point of reference against which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available personal computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform

  8. The Benchmark Test Results of QNX RTOS

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon

    2010-01-01

    A real-time operating system (RTOS) is an operating system (OS) intended for real-time applications. A benchmark is a point of reference against which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available personal computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform

  9. Summary of ACCSIM and ORBIT Benchmarking Simulations

    CERN Document Server

    AIBA, M

    2009-01-01

    We have performed a benchmarking study of ORBIT and ACCSIM, which are accelerator tracking codes with routines to evaluate space charge effects. The study is motivated by the need to predict and understand beam behaviour in the CERN Proton Synchrotron Booster (PSB), in which direct space charge is expected to be the dominant performance limitation. Historically at CERN, ACCSIM has been employed for space charge simulation studies. A benchmark study using ORBIT has been started to confirm the results from ACCSIM and to profit from the advantages of ORBIT, such as the capability of parallel processing. We observed fair agreement in emittance evolution in the horizontal plane but not in the vertical one. This may be partly because the algorithm used to compute the space charge field differs between the two codes.

  10. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  11. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is still under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification and validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  12. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  13. A thermo-mechanical benchmark calculation of an hexagonal can in the BTI accident with ABAQUS code

    International Nuclear Information System (INIS)

    Zucchini, A.

    1988-07-01

    The thermo-mechanical behaviour of an hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the ABAQUS code: the effects of the geometric nonlinearity are shown and the results are compared with those of a previous analysis performed with the INCA code. (author)

  14. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper, a comparison is made between the CASMO-4, MCNP6 and Serpent codes for a suite of benchmark problems for BWR reactors. The benchmark consists of two different geometries: a BWR fuel pin cell and a BWR fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence and the reactivity of selected nuclides. For the fuel assembly, the results presented concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test cases for the next generation of BWR fuels with high extended burnup. It is important to note that the purpose of this comparison is to validate the modeling methodologies for different operating conditions, including the case of other BWR assemblies. The results lie within a range with some uncertainty, regardless of the code used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, due to the potential of this code compared with other commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing the studies with the generation of the cross sections of a core, so that in a next step a corresponding nuclear data library can be constructed and used by the codes developed as part of the development project of the Mexican nuclear reactor analysis platform AZTLAN. (Author)

  15. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  16. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to Rowland's benchmark and to three assembly production cases, are also presented
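
    The moments-to-probability-table construction is essentially a Gauss-quadrature problem: find band probabilities and band cross sections that reproduce the low-order moments of the cross-section distribution. The sketch below builds a two-band table from the first four moments; it illustrates the classical construction only and is not the Ribon/APOLLO2 algorithm.

```python
import numpy as np

def two_point_probability_table(sigma_samples, weights=None):
    """Build a 2-band (2-subgroup) probability table reproducing the first
    four moments <sigma^k>, k = 0..3, of a cross-section distribution.
    Classical moments -> Gauss-quadrature construction; a sketch of the
    idea only, not the APOLLO2/Ribon implementation."""
    s = np.asarray(sigma_samples, float)
    w = np.full_like(s, 1.0 / len(s)) if weights is None else np.asarray(weights, float)
    m = np.array([np.sum(w * s ** k) for k in range(4)])   # m[0] == 1 if w is normalised

    # Monic orthogonal polynomial sigma^2 + b*sigma + c from the moment conditions
    A = np.array([[m[1], m[0]],
                  [m[2], m[1]]])
    b, c = np.linalg.solve(A, -np.array([m[2], m[3]]))
    sig1, sig2 = np.real_if_close(np.roots([1.0, b, c]))    # band cross sections

    # Band probabilities from matching m0 and m1
    p1 = (m[1] - m[0] * sig2) / (sig1 - sig2)
    p2 = m[0] - p1
    return np.array([p1, p2]), np.array([sig1, sig2])

# Toy resonance-like distribution of pointwise cross sections (barns)
sigma = np.array([5.0, 8.0, 12.0, 300.0, 900.0, 20.0, 7.0, 6.0])
p, sig = two_point_probability_table(sigma)
print("band probabilities :", p)
print("band cross sections:", sig)
print("check <sigma>      :", np.sum(p * sig), "vs", sigma.mean())
```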

  17. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  18. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  19. Computational fluid dynamics modeling of two-phase flow in a BWR fuel assembly. Final CRADA Report

    International Nuclear Information System (INIS)

    Tentner, A.

    2009-01-01

    A direct numerical simulation capability for two-phase flows with heat transfer in complex geometries can considerably reduce the hardware development cycle, facilitate the optimization, and reduce the costs of testing of various industrial facilities, such as nuclear power plants, steam generators, steam condensers, liquid cooling systems, heat exchangers, distillers, and boilers. Specifically, the phenomena occurring in a two-phase coolant flow in a BWR (Boiling Water Reactor) fuel assembly include coolant phase changes and multiple flow regimes which directly influence the coolant interaction with the fuel assembly and, ultimately, the reactor performance. Traditionally, the best analysis tools for two-phase flow phenomena inside the BWR fuel assembly have been sub-channel codes. However, the resolution of these codes is too coarse for analyzing the detailed intra-assembly flow patterns, such as the flow around a spacer element. Advanced CFD (Computational Fluid Dynamics) codes provide a potential for detailed 3D simulations of coolant flow inside a fuel assembly, including the flow around a spacer element, using more fundamental physical models of flow regimes and phase interactions than sub-channel codes. Such models can extend the code applicability to a wider range of situations, which is highly important for increasing efficiency and preventing accidents.

  20. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  1. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  2. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  3. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
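
    For readers unfamiliar with DEA, the sketch below solves the standard input-oriented, constant-returns-to-scale envelopment linear program for a toy set of operators. The data and model choices are illustrative only and do not correspond to any particular regulator's specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiencies.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). A sketch of the
    standard envelopment LP, not any regulator's production model."""
    n, m = X.shape
    _, s = Y.shape
    eff = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # minimise theta
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n)
        eff.append(res.x[0])
    return np.array(eff)

# Five units, one input (e.g. OPEX) and one output (e.g. energy delivered)
X = np.array([[2.0], [4.0], [3.0], [5.0], [6.0]])
Y = np.array([[1.0], [3.0], [2.0], [4.0], [4.0]])
print(dea_ccr_input_efficiency(X, Y))
```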

  4. Reevaluation of the case, de Hoffman, and Placzek one-group neutron transport benchmark solution in plane geometry

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1986-01-01

    In a course on neutron transport theory and also in the analytical neutron transport theory literature, the pioneering work of Case et al. (CdHP) is often referenced. This work was truly a monumental effort in that it treated the fundamental mathematical properties of the one-group neutron Boltzmann equation in detail as well as the numerical evaluation of most of the resulting solutions. Many mathematically and numerically oriented dissertations were based on this classic monograph. In light of the considerable advances made both in numerical methods and computer technology since 1953, when the historic CdHP monograph first appeared, it seems appropriate to reevaluate the numerical benchmark solutions found therein with present-day computational technology. In most transport theory courses, the subject of proper benchmarking of numerical algorithms and transport codes is seldom addressed at any great length. This may be the reason that the benchmarking procedure is so rarely practiced in the nuclear community and when practiced is improperly applied. In this presentation, the development of a new benchmark for the one-group neutron flux in an infinite medium will be detailed with emphasis placed on the educational aspects of the benchmarking activity

  5. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  6. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    International Nuclear Information System (INIS)

    Primm III, RT

    2002-01-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors

  7. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  8. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  9. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  10. DRAGON solutions to the 3D transport benchmark over a range in parameter space

    International Nuclear Information System (INIS)

    Martin, Nicolas; Hebert, Alain; Marleau, Guy

    2010-01-01

    DRAGON solutions to the 'NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space' are discussed in this paper. A description of the benchmark is first provided, followed by a detailed review of the different computational models used in the lattice code DRAGON. Two numerical methods were selected for generating the required quantities for the 729 configurations of this benchmark. First, SN calculations were performed using fully symmetric angular quadratures and high-order diamond differencing for spatial discretization. To compare SN results with those of another deterministic method, the method of characteristics (MoC) was also considered for this benchmark. Comparisons between reference solutions, SN and MoC results illustrate the advantages and drawbacks of each method for this 3-D transport problem.
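
    The diamond-differencing idea is easiest to see in one dimension. The sketch below implements a one-group, 1-D slab SN sweep with source iteration; it is a teaching example under simplifying assumptions (isotropic scattering, vacuum boundaries) and is far removed from the 3-D DRAGON models used in the benchmark.

```python
import numpy as np

def sn_diamond_difference_1d(sig_t, sig_s, q, width, n_cells,
                             n_angles=8, tol=1e-8, max_iter=500):
    """One-group, 1-D slab SN solver: diamond differencing in space,
    Gauss-Legendre angles, source iteration, vacuum boundaries."""
    dx = width / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_angles)
    phi = np.zeros(n_cells)
    for _ in range(max_iter):
        src = 0.5 * (sig_s * phi + q)              # isotropic emission density
        phi_new = np.zeros(n_cells)
        for m in range(n_angles):
            coef = 2.0 * abs(mu[m]) / dx
            psi_in = 0.0                           # vacuum incoming angular flux
            cells = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            for i in cells:
                psi_c = (src[i] + coef * psi_in) / (sig_t + coef)
                psi_in = 2.0 * psi_c - psi_in      # diamond-difference closure
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:
            phi = phi_new
            break
        phi = phi_new
    return phi

# Homogeneous 10 cm slab: sigma_t = 1.0 /cm, sigma_s = 0.5 /cm, unit source
flux = sn_diamond_difference_1d(1.0, 0.5, 1.0, width=10.0, n_cells=50)
print(flux.max())   # approaches Q/(sigma_t - sigma_s) = 2 deep inside the slab
```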

  11. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  12. HEATING6 analysis of international thermal benchmark problem sets 1 and 2

    International Nuclear Information System (INIS)

    Childs, K.W.; Bryan, C.B.

    1986-10-01

    In order to assess the heat transfer computer codes used in the analysis of nuclear fuel shipping casks, the Nuclear Energy Agency Committee on Reactor Physics has defined seven problems for benchmarking thermal codes. All seven of these problems have been solved using the HEATING6 heat transfer code. This report presents the results of five of the problems. The remaining two problems were used in a previous benchmarking of thermal codes used in the United States, and their solutions have been previously published

  13. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [ORNL; Isbell, Kimberly McMahan [ORNL; Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Reynolds, Kevin H. [Y-12 National Security Complex

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  14. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    Okuno, Hiroshi

    2003-01-01

    A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When two benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor behind the classification, further investigation is required to identify other factors. (author)
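
    The classification criterion can be illustrated in a few lines of code: compute the correlation coefficient between result vectors and place two cases in the same group when it meets the chosen threshold. The data below are invented, and the grouping logic is a simplified reading of the idea, not the author's exact procedure.

```python
import numpy as np

def classify_by_correlation(results, threshold):
    """Group benchmark cases whose result vectors are 'similar', i.e. whose
    correlation coefficient with a group representative is >= threshold.
    `results` maps a case name to a vector of results (e.g. k_eff values
    reported by the participating codes). Illustrative grouping only."""
    groups = []
    for name in results:
        placed = False
        for g in groups:
            r = np.corrcoef(results[name], results[g[0]])[0, 1]
            if r >= threshold:
                g.append(name)
                placed = True
                break
        if not placed:
            groups.append([name])
    return groups

# Hypothetical k_eff results from four codes for three benchmark cases
results = {
    "case-1": np.array([0.940, 0.945, 0.938, 0.942]),
    "case-2": np.array([0.921, 0.926, 0.918, 0.923]),   # tracks case-1 closely
    "case-3": np.array([0.955, 0.931, 0.960, 0.929]),   # different pattern
}
print(classify_by_correlation(results, threshold=0.15))
```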

  15. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation......, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi...

  16. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used or that could be used in the future into high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reason for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  17. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg{sup -1} [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)
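
    As a side note on how a reactivity coefficient is typically extracted from two lattice calculations, the sketch below converts two k-infinity values at different coolant temperatures into a coefficient in pcm/K. The numbers are invented and are not SCWR benchmark results.

```python
# Hypothetical k-infinity values from a lattice calculation at two coolant
# temperatures, illustrating how a reactivity coefficient is derived from
# the benchmark responses (made-up values, not SCWR results).
k_cold, k_hot = 1.32250, 1.31980          # k-infinity at T1 and T2
t_cold, t_hot = 300.0, 625.0              # coolant temperatures, K

rho_cold = (k_cold - 1.0) / k_cold        # reactivity, dk/k
rho_hot = (k_hot - 1.0) / k_hot
coeff = (rho_hot - rho_cold) / (t_hot - t_cold) * 1.0e5   # pcm per K
print(f"coolant temperature reactivity coefficient: {coeff:.2f} pcm/K")
```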

  18. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    International Nuclear Information System (INIS)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A.; Nowak, M.; Pencer, J.; Novog, D.; Buijs, A.

    2015-01-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)

  19. Benchmark experiments to test plutonium and stainless steel cross sections. Topical report

    International Nuclear Information System (INIS)

    Jenquin, U.P.; Bierman, S.R.

    1978-06-01

    The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel and determine if improvements can be made in application to criticality safety analysis. NRC's particular area of interest is in the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the first task are given in this report. A set of integral experiments most pertinent to testing the cross sections has been identified and the code input data for calculating each experiment has been developed

  20. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  1. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are ¹⁴⁹Sm, ¹⁵¹Sm, and ¹⁵⁵Gd.
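
    A minimal sketch of the kind of intercomparison reported here, expressing each submitted concentration as a percent deviation from the participant average; the values below are hypothetical, not Phase I-B results.

    ```python
    import statistics

    def percent_deviation_from_mean(values):
        """Percent deviation of each submitted concentration from the participant average."""
        mean = statistics.mean(values)
        return [100.0 * (v - mean) / mean for v in values]

    # Hypothetical Pu-239 concentrations (g per kg initial U) from four submissions
    submissions = [5.61, 5.48, 5.70, 5.55]
    print([round(d, 1) for d in percent_deviation_from_mean(submissions)])
    ```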

  2. CRAB-II: a computer program to predict hydraulics and scram dynamics of LMFBR control assemblies and its validation

    International Nuclear Information System (INIS)

    Carelli, M.D.; Baker, L.A.; Willis, J.M.; Engel, F.C.; Nee, D.Y.

    1982-01-01

    This paper presents an analytical method, the computer code CRAB-II, which calculates the hydraulics and scram dynamics of LMFBR control assemblies of the rod bundle type, and its validation against prototypic data obtained for the Clinch River Breeder Reactor (CRBR) primary control assemblies. The physical-mathematical model of the code is presented, followed by a description of the testing of prototypic CRBR control assemblies in water and sodium to characterize, respectively, their hydraulic and scram dynamics behavior. Comparisons of code predictions against the experimental data are presented in detail; excellent agreement was found. Also reported are experimental data and empirical correlations for the friction factor of the absorber bundle in the entire flow range (laminar to turbulent), which represent an extension of the state of the art, since only fuel and blanket assembly friction factor correlations were previously reported in the open literature.

  3. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to recognize the need for benchmarking models for landslide generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model including non-hydrostatic effects has been used to perform all the benchmarking problems dealing with laboratory experiments proposed in the workshop that was organized at Texas A&M University - Galveston, on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  4. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...

  5. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    International Nuclear Information System (INIS)

    Takano, Hideki

    1995-01-01

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the keff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the keff, the Doppler, sodium void and control rod reactivity worths, and the reaction rate distributions were in very good agreement with the experiments. (author)

  6. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  7. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO₂ and mixed-oxide (MOX) powder systems. The study examined three PuO₂ powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO₂ powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. This sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO₂ and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected. The TSUNAMI analysis
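
    The correlation coefficient used to judge similarity between an application and an experiment can be sketched as a covariance-weighted inner product of their sensitivity profiles. The snippet below is a simplified illustration with a made-up three-group covariance matrix and sensitivity vectors; it is not the TSUNAMI-IP implementation, which works with full energy-, nuclide- and reaction-dependent profiles.

    ```python
    import numpy as np

    def ck(S_app, S_exp, C):
        """Correlation coefficient from shared, cross-section-induced uncertainty:
        c_k = (S_a C S_e) / (sigma_a * sigma_e)."""
        num = S_app @ C @ S_exp
        sigma_a = np.sqrt(S_app @ C @ S_app)
        sigma_e = np.sqrt(S_exp @ C @ S_exp)
        return num / (sigma_a * sigma_e)

    # Hypothetical 3-group sensitivity profiles and covariance matrix (illustrative only)
    S_a = np.array([0.10, 0.25, 0.05])
    S_e = np.array([0.12, 0.20, 0.02])
    C = np.diag([0.02**2, 0.01**2, 0.03**2])
    print(ck(S_a, S_e, C))   # values close to 1 indicate similar systems
    ```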

  8. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist which have benchmarked the performance of the deep learning models with respect to the state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, ensemble of machine learning models (Super Learner algorithm), SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.
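
    A minimal sketch of benchmarking a conventional baseline with AUROC, the kind of comparison the study makes against deep models; synthetic data stand in here for MIMIC-III, which requires credentialed access, and the feature construction is purely hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))                    # hypothetical patient features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # stand-in mortality label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```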

  9. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is the Green's function approach. In slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is "mined" to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.

  10. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  11. Communication: energy benchmarking with quantum Monte Carlo for water nano-droplets and bulk liquid water.

    Science.gov (United States)

    Alfè, D; Bartók, A P; Csányi, G; Gillan, M J

    2013-06-14

    We show the feasibility of using quantum Monte Carlo (QMC) to compute benchmark energies for configuration samples of thermal-equilibrium water clusters and the bulk liquid containing up to 64 molecules. Evidence that the accuracy of these benchmarks approaches that of basis-set converged coupled-cluster calculations is noted. We illustrate the usefulness of the benchmarks by using them to analyze the errors of the popular BLYP approximation of density functional theory (DFT). The results indicate the possibility of using QMC as a routine tool for analyzing DFT errors for non-covalent bonding in many types of condensed-phase molecular system.

  12. Quantitative computational models of molecular self-assembly in systems biology.

    Science.gov (United States)

    Thomas, Marcus; Schwartz, Russell

    2017-05-23

    Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally.
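
    As a minimal illustration of the kind of quantitative model discussed in this review, the sketch below integrates deterministic mass-action kinetics for a two-step self-assembly pathway (monomer to dimer to tetramer); the rate constants and pathway are hypothetical and greatly simplified relative to realistic assembly networks.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 0.5          # association rate constants (arbitrary units, assumed)

    def rates(t, y):
        m, d, tet = y          # monomer, dimer, tetramer concentrations
        return [-2 * k1 * m**2,
                k1 * m**2 - 2 * k2 * d**2,
                k2 * d**2]

    sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0], dense_output=True)
    print(sol.y[:, -1])        # final monomer, dimer, tetramer concentrations
    ```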

  13. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  14. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is as a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Calculations of IAEA-CRP-6 Benchmark Case 1 through 7 for a TRISO-Coated Fuel Particle

    International Nuclear Information System (INIS)

    Kim, Young Min; Lee, Y. W.; Chang, J. H.

    2005-01-01

    IAEA-CRP-6 is a coordinated research program of IAEA on Advances in HTGR fuel technology. The CRP examines aspects of HTGR fuel technology, ranging from design and fabrication to characterization, irradiation testing, performance modeling, as well as licensing and quality control issues. The benchmark section of the program treats simple analytical cases, pyrocarbon layer behavior, single TRISO-coated fuel particle behavior, and benchmark calculations of some irradiation experiments performed and planned. There are totally seventeen benchmark cases in the program. Member countries are participating in the benchmark calculations of the CRP with their own developed fuel performance analysis computer codes. Korea is also taking part in the benchmark calculations using a fuel performance analysis code, COPA (COated PArticle), which is being developed in Korea Atomic Energy Research Institute. The study shows the calculational results of IAEACRP- 6 benchmark cases 1 through 7 which describe the structural behaviors for a single fuel particle

  16. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations based on good-quality experimental data has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. This fine-mesh high-quality data encourages advancement in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' numerical models on the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently underway. These innovative models should include elementary processes such as void distributions, droplet deposit, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images are the microscopic data. The proposed benchmark consists of two parts (phases), each part consisting of different exercises: Phase 1 - Void distribution benchmark: Exercise 1 - Steady-state sub-channel grade benchmark. Exercise 2 - Steady-state microscopic grade benchmark. Exercise 3 - Transient macroscopic grade benchmark. Phase 2 - Critical power benchmark: Exercise 1 - Steady-state benchmark. Exercise 2 - Transient benchmark. (author)

  17. Statistical benchmark for BosonSampling

    International Nuclear Information System (INIS)

    Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher

    2016-01-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go well beyond mere bunching or anti-bunching effects. (fast track communication)

  18. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes is benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculational effort and presents new findings on the thermal hydraulics of research reactors that have not been reported before. • For LOFA, discrepancies of 7% to 20% in coolant and peak clad temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes being widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, to validate the applied codes, and to apply the validated codes to comprehensive safety analysis of RRs. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and a loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  19. An IBM 370 assembly language program verifier

    Science.gov (United States)

    Maurer, W. D.

    1977-01-01

    The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.

  20. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with one-dimensional model. At the second stage, applicability of JENDL-1 was further tested to more sophisticated problems for MOZART and ZPPR-3 assemblies with two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists discrepancy of 0.9% in the keff values between the Pu- and U-cores. 2) The fission rate ratio of ²³⁹Pu to ²³⁵U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of ²³⁵U and ²³⁹Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated, when the sodium is removed from the outer core. As a whole, most of problems of JENDL-1 seem to be related with the neutron leakage and the neutron spectrum. It was found through the further study that most of these problems came from too small diffusion coefficients and too large elastic removal cross sections above 100 keV, which might be probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  1. TerraFERMA: Harnessing Advanced Computational Libraries in Earth Science

    Science.gov (United States)

    Wilson, C. R.; Spiegelman, M.; van Keken, P.

    2012-12-01

    Many important problems in Earth sciences can be described by non-linear coupled systems of partial differential equations. These "multi-physics" problems include thermo-chemical convection in Earth and planetary interiors, interactions of fluids and magmas with the Earth's mantle and crust and coupled flow of water and ice. These problems are of interest to a large community of researchers but are complicated to model and understand. Much of this complexity stems from the nature of multi-physics where small changes in the coupling between variables or constitutive relations can lead to radical changes in behavior, which in turn affect critical computational choices such as discretizations, solvers and preconditioners. To make progress in understanding such coupled systems requires a computational framework where multi-physics problems can be described at a high-level while maintaining the flexibility to easily modify the solution algorithm. Fortunately, recent advances in computational science provide a basis for implementing such a framework. Here we present the Transparent Finite Element Rapid Model Assembler (TerraFERMA), which leverages several advanced open-source libraries for core functionality. FEniCS (fenicsproject.org) provides a high level language for describing the weak forms of coupled systems of equations, and an automatic code generator that produces finite element assembly code. PETSc (www.mcs.anl.gov/petsc) provides a wide range of scalable linear and non-linear solvers that can be composed into effective multi-physics preconditioners. SPuD (amcg.ese.ic.ac.uk/Spud) is an application neutral options system that provides both human and machine-readable interfaces based on a single xml schema. Our software integrates these libraries and provides the user with a framework for exploring multi-physics problems. A single options file fully describes the problem, including all equations, coefficients and solver options. Custom compiled applications are
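
    To illustrate the high-level weak-form description that the FEniCS layer provides, a minimal Poisson example in the legacy dolfin Python interface is sketched below. TerraFERMA itself is driven by an options file rather than a Python script, so this is only indicative of the underlying library, not of a TerraFERMA input.

    ```python
    # Minimal Poisson problem in the legacy FEniCS/dolfin Python interface.
    from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                        DirichletBC, Constant, Function, dot, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "P", 1)
    u, v = TrialFunction(V), TestFunction(V)
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    a = dot(grad(u), grad(v)) * dx          # bilinear form
    L = Constant(1.0) * v * dx              # linear form (unit source)

    u_h = Function(V)
    solve(a == L, u_h, bc)                  # assemble and solve the linear system
    ```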

  2. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and predictions of that data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2; the paper discusses these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  3. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Bess, John D.; Fujimoto, Nozomu

    2014-01-01

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulation of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  4. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  5. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark has been executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which helps improve the accuracy of the neutron transport calculations and may assist in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of neutron transport calculation results, which in turn depend on the accuracy of nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which provides a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The discrepancies in keff have been observed and show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections in experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.

  6. Modification of the ANC Nodal Code for analysis of PWR assembly bow

    International Nuclear Information System (INIS)

    Franceschini, Fausto; Fetterman, Robert J.; Little, David C.

    2008-01-01

    Refueling operations at certain PWR cores have revealed fuel assemblies with assembly bow that was higher than expected. As the fuel assemblies bow, the gaps between assemblies change from the uniform nominal configuration. This causes a change in the water volume which affects neutron moderation and thereby power distribution, fuel depletion history, rod internal pressure, etc., with non-trivial impacts on the safety analysis. Westinghouse has developed a new methodology for incorporation of assembly bow in its reload safety analysis package. As part of the new process, the standard Westinghouse reactor physics tool for core analysis, the Advanced Nodal Code ANC, has been modified. The modified ANC, ANCGAP, enables explicit treatment of three-dimensional gap distributions in its neutronic calculations; its accuracy is similar to that of the standard ANC, as demonstrated through an extensive benchmark campaign conducted over a variety of fuel compositions and challenging gap configurations. These features make ANCGAP a crucial tool in the Westinghouse assembly bow package. (authors)

  7. Modification of the ANC Nodal Code for analysis of PWR assembly bow

    Energy Technology Data Exchange (ETDEWEB)

    Franceschini, Fausto; Fetterman, Robert J.; Little, David C. [Westinghouse Electric Company LLC, Pittsburgh PA (United States)

    2008-07-01

    Refueling operations at certain PWR cores have revealed fuel assemblies with assembly bow that was higher than expected. As the fuel assemblies bow, the gaps between assemblies change from the uniform nominal configuration. This causes a change in the water volume which affects neutron moderation and thereby power distribution, fuel depletion history, rod internal pressure, etc., with non-trivial impacts on the safety analysis. Westinghouse has developed a new methodology for incorporation of assembly bow in its reload safety analysis package. As part of the new process, the standard Westinghouse reactor physics tool for core analysis, the Advanced Nodal Code ANC, has been modified. The modified ANC, ANCGAP, enables explicit treatment of three-dimensional gap distributions in its neutronic calculations; its accuracy is similar to that of the standard ANC, as demonstrated through an extensive benchmark campaign conducted over a variety of fuel compositions and challenging gap configurations. These features make ANCGAP a crucial tool in the Westinghouse assembly bow package. (authors)

  8. Benchmark thermal-hydraulic analysis with the Agathe Hex 37-rod bundle

    International Nuclear Information System (INIS)

    Barroyer, P.; Hudina, M.; Huggenberger, M.

    1981-09-01

    The prediction performance of different computer codes is compared on the basis of the AGATHE HEX 37-rod bundle experimental results. The compilation of all available calculation results allows a critical assessment of the codes. For the time being, it is concluded which codes are best suited for gas cooled fuel element design purposes. Based on the positive aspects of these cooperative benchmark exercises, an attempt is made to define a computer code verification procedure. (Auth.)

  9. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
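
    A minimal sketch of the percentile-style comparison such a benchmarking tool performs, ranking one building's energy use intensity (EUI) against a peer group; the peer values and consumption figures below are made up for illustration and are not Cal-Arch data.

    ```python
    def eui(annual_kwh, floor_area_ft2):
        """Energy use intensity in kWh per square foot per year."""
        return annual_kwh / floor_area_ft2

    peer_euis = [12.5, 14.1, 15.8, 17.2, 18.0, 19.6, 21.3, 24.7]   # hypothetical peer buildings
    my_eui = eui(1_650_000, 100_000)                                # 16.5 kWh/ft2-yr

    # Fraction of peers with lower EUI than this building
    rank = sum(1 for e in peer_euis if e < my_eui) / len(peer_euis)
    print(f"EUI = {my_eui:.1f} kWh/ft2-yr, lower than {100 * (1 - rank):.1f}% of peers")
    ```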

  10. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: It executes 20 times faster than the Gilbreaths' widely-published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
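
    The report's benchmarks are written in FORTH for the Z8; the Python sketch below shows the same sieve algorithm only to make the benchmarked computation concrete, and is not one of the report's benchmark refinements.

    ```python
    def sieve(limit):
        """Return all primes up to and including limit (sieve of Eratosthenes)."""
        flags = [True] * (limit + 1)
        flags[0:2] = [False, False]
        for n in range(2, int(limit ** 0.5) + 1):
            if flags[n]:
                for multiple in range(n * n, limit + 1, n):
                    flags[multiple] = False
        return [n for n, is_prime in enumerate(flags) if is_prime]

    print(len(sieve(10_000)))   # 1229 primes below 10,000
    ```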

  11. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  12. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  13. Multilaboratory particle image velocimetry analysis of the FDA benchmark nozzle model to support validation of computational fluid dynamics simulations.

    Science.gov (United States)

    Hariharan, Prasanna; Giarra, Matthew; Reddy, Varun; Day, Steven W; Manning, Keefe B; Deutsch, Steven; Stewart, Sandy F C; Myers, Matthew R; Berman, Michael R; Burgreen, Greg W; Paterson, Eric G; Malinauskas, Richard A

    2011-04-01

    This study is part of an FDA-sponsored project to evaluate the use and limitations of computational fluid dynamics (CFD) in assessing blood flow parameters related to medical device safety. In an interlaboratory study, fluid velocities and pressures were measured in a nozzle model to provide experimental validation for a companion round-robin CFD study. The simple benchmark nozzle model, which mimicked the flow fields in several medical devices, consisted of a gradual flow constriction, a narrow throat region, and a sudden expansion region where a fluid jet exited the center of the nozzle with recirculation zones near the model walls. Measurements of mean velocity and turbulent flow quantities were made in the benchmark device at three independent laboratories using particle image velocimetry (PIV). Flow measurements were performed over a range of nozzle throat Reynolds numbers (Re(throat)) from 500 to 6500, covering the laminar, transitional, and turbulent flow regimes. A standard operating procedure was developed for performing experiments under controlled temperature and flow conditions and for minimizing systematic errors during PIV image acquisition and processing. For laminar (Re(throat)=500) and turbulent flow conditions (Re(throat)≥3500), the velocities measured by the three laboratories were similar with an interlaboratory uncertainty of ∼10% at most of the locations. However, for the transitional flow case (Re(throat)=2000), the uncertainty in the size and the velocity of the jet at the nozzle exit increased to ∼60% and was very sensitive to the flow conditions. An error analysis showed that by minimizing the variability in the experimental parameters such as flow rate and fluid viscosity to less than 5% and by matching the inlet turbulence level between the laboratories, the uncertainties in the velocities of the transitional flow case could be reduced to ∼15%. The experimental procedure and flow results from this interlaboratory study (available
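
    The flow regimes quoted above are set by the throat Reynolds number, Re = ρVD/μ. The sketch below uses illustrative water-like fluid properties and a hypothetical throat diameter, not the blood-analog values used by the laboratories, to show how throat velocities map onto the reported Re range.

    ```python
    def reynolds(rho, velocity, diameter, mu):
        """Reynolds number for pipe flow: Re = rho * V * D / mu."""
        return rho * velocity * diameter / mu

    rho = 1000.0      # kg/m^3 (water-like fluid, assumed)
    mu = 1.0e-3       # Pa*s (assumed viscosity)
    d_throat = 0.004  # m, hypothetical throat diameter
    for v in (0.125, 0.5, 1.625):
        print(f"V = {v:5.3f} m/s -> Re = {reynolds(rho, v, d_throat, mu):7.0f}")
    ```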

  14. Synthetic graph generation for data-intensive HPC benchmarking: Scalability, analysis and real-world application

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-12-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allows the emulation of a broad spectrum of application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report describes the in-depth analysis of the generated synthetic graphs' properties at a variety of scales using different generator implementations and examines their applicability to replicating real world datasets.
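
    The report's generator implementations are not specified in this abstract, so the sketch below uses networkx's Barabási-Albert preferential-attachment model purely to illustrate generating a synthetic graph and inspecting the kinds of properties such an analysis would compare against a target dataset.

    ```python
    import networkx as nx

    # Scale-free synthetic graph: 10,000 nodes, each new node attaching to 4 existing ones
    G = nx.barabasi_albert_graph(n=10_000, m=4, seed=1)

    degrees = [d for _, d in G.degree()]
    print("nodes:", G.number_of_nodes(),
          "edges:", G.number_of_edges(),
          "max degree:", max(degrees),
          "avg clustering:", round(nx.average_clustering(G), 4))
    ```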

  15. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are ¹⁴⁹Sm, ¹⁵¹Sm, and ¹⁵⁵Gd.

  16. Critical assembly of uranium enriched to 10% in uranium-235

    International Nuclear Information System (INIS)

    Hansen, G.E.; Paxton, H.E.

    1979-01-01

    Big Ten is described in the detail appropriate for a benchmark critical assembly. Characteristics provided are spectral indexes and a detailed neutron flux spectrum, Rossi-α on a reactivity scale established by positive periods, and reactivity coefficients of a variety of isotopes, including the fissionable materials. The observed characteristics are compared with values calculated with ENDF/B-IV cross sections

  17. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for SFR systems, including the fuel area, in preparation for the audit calculations of the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; additionally, a sensitivity analysis with respect to the coolant mass flux is performed. In the case of LWR fuel performance modeling, various advanced models have been proposed and validated on the basis of sufficient in-reactor test results. However, due to the lack of experience with SFR operation and data, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. In order to prepare for the licensing of PGSFR, regulatory audit technologies for SFRs must be secured; therefore, in this study, the benchmark analysis using X447 EBR-II experiment data and the sensitivity analysis with coolant mass flux change are performed to verify the new audit fuel performance analysis code. In terms of verification, the results of the benchmark and sensitivity analysis are considered reasonable.

  18. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    International Nuclear Information System (INIS)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk

    2016-01-01

    In KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for SFR systems, including the fuel area, in preparation for the audit calculations of the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; additionally, a sensitivity analysis with respect to the coolant mass flux is performed. In the case of LWR fuel performance modeling, various advanced models have been proposed and validated on the basis of sufficient in-reactor test results. However, due to the lack of experience with SFR operation and data, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. In order to prepare for the licensing of PGSFR, regulatory audit technologies for SFRs must be secured; therefore, in this study, the benchmark analysis using X447 EBR-II experiment data and the sensitivity analysis with coolant mass flux change are performed to verify the new audit fuel performance analysis code. In terms of verification, the results of the benchmark and sensitivity analysis are considered reasonable.

  19. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies is then addressed, after which...

  20. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  1. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    International Nuclear Information System (INIS)

    Pescarini, M.; Orsi, R.; Martinelli, T.

    2003-01-01

    In many practical radiation transport applications today, the cost of solving refined, large-size, complex multi-dimensional problems lies not so much in computing as in the cumbersome effort required by an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is particularly relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general-purpose radiation transport codes, which require a very large number of meshes and high-performance computers. The need has clearly emerged for tools that make the task easier for the physicist or engineer by reducing the time required, by facilitating the verification of correctness through effective graphical display and, finally, by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and now released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system for the DORT (2D) and TORT (3D) SN codes, permits flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry through combinatorial geometry algorithms, following a simplified, user-friendly approach. This system also demonstrated its validity in core criticality analyses, for example the Lewis MOX fuel benchmark, permitting to easily

  2. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to identify the major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross-section Monte Carlo calculations, and E-C fine-group cross sections are benchmarked against point-cross-section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections.

  3. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  4. Theory comparison and numerical benchmarking on neoclassical toroidal viscosity torque

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhirui; Park, Jong-Kyu; Logan, Nikolas; Kim, Kimin; Menard, Jonathan E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Liu, Yueqiang [Euratom/CCFE Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2014-04-15

    Systematic comparison and numerical benchmarking have been successfully carried out among three different approaches of neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications [Park et al., Phys. Rev. Lett. 102, 065002 (2009)]; MARS-Q includes smoothly connected NTV formula [Shaing et al., Nucl. Fusion 50, 025022 (2010)] based on Shaing's analytic formulation in various collisionality regimes; MARS-K, originally computing the drift kinetic energy, is upgraded to compute the NTV torque based on the equivalence between drift kinetic energy and NTV torque [J.-K. Park, Phys. Plasma 18, 110702 (2011)]. The derivation and numerical results both indicate that the imaginary part of drift kinetic energy computed by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory in different collisionality regimes are shown for the first time. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce harmonic resonance which can greatly enhance the NTV torque when E×B drift frequency reaches the bounce resonance condition.

  5. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    Domm, T.D.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  6. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  7. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
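
    The XMarq queries themselves are not reproduced in this record; purely as an illustration of timing the kind of basic operations the abstract lists (scan, aggregation, index access), a minimal sketch using Python's built-in sqlite3 module is shown below. The table name, column names and row counts are invented toy data, not part of XMarq.

```python
import sqlite3, time, random

def timed(label, cur, sql):
    """Run a query once and report coarse wall-clock time."""
    t0 = time.perf_counter()
    cur.execute(sql).fetchall()
    print(f"{label:<14s} {time.perf_counter() - t0:8.4f} s")

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE lineitem (id INTEGER, qty INTEGER, price REAL)")
cur.executemany("INSERT INTO lineitem VALUES (?, ?, ?)",
                [(i, random.randint(1, 50), random.random() * 100)
                 for i in range(200_000)])
con.commit()

timed("full scan", cur, "SELECT COUNT(*) FROM lineitem WHERE qty > 25")
timed("aggregation", cur, "SELECT qty, SUM(price) FROM lineitem GROUP BY qty")
cur.execute("CREATE INDEX idx_qty ON lineitem(qty)")
timed("index access", cur, "SELECT COUNT(*) FROM lineitem WHERE qty = 7")
```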

  8. Study of LBS for characterization and analysis of big data benchmarks

    International Nuclear Information System (INIS)

    Chandio, A.A.; Zhang, F.; Memon, T.D.

    2014-01-01

    In the past few years, most organizations have gradually been diverting their applications and services to the Cloud. This is because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users on the Internet anywhere in the world. The rapid growth of urbanization in developed and developing countries leads to a new emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in the concept of Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics. Their data representation is widely available, including GPS traces of vehicles. However, their applications are data-processing and storage hungry, because their data grow in large volumes, from a few dozen TB (terabytes) to thousands of PB (petabytes), i.e. Big Data. To advance the development and the assessment of applications such as LBS (Location Based Services), a benchmark of Big Data is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, this paper also describes the current status of Big Data benchmarks and our future direction. (author)
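
    The map-matching algorithm itself is not specified in this abstract; as a purely illustrative sketch of the simplest variant (snapping each GPS point to the nearest road segment), one might write something like the following. The road segments, the GPS trace and the planar distance are invented toy assumptions, not the authors' method.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (planar approximation)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def naive_map_match(trace, segments):
    """Snap every GPS point to the id of the closest road segment."""
    return [min(segments, key=lambda s: point_segment_distance(p, s[1], s[2]))[0]
            for p in trace]

# toy road network: (segment_id, start_xy, end_xy)
roads = [("A", (0, 0), (10, 0)), ("B", (10, 0), (10, 10))]
trace = [(1.0, 0.3), (9.8, 2.1), (10.2, 7.5)]
print(naive_map_match(trace, roads))   # ['A', 'B', 'B']
```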

  9. Study on LBS for Characterization and Analysis of Big Data Benchmarks

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Chandio

    2014-10-01

    Full Text Available In the past few years, most organizations have gradually been diverting their applications and services to the Cloud. This is because the Cloud paradigm enables (a) on-demand access and (b) large-scale data processing for their applications and users on the Internet anywhere in the world. The rapid growth of urbanization in developed and developing countries leads to a new emerging concept called Urban Computing, one of the application domains that is rapidly being deployed to the Cloud. More precisely, in the concept of Urban Computing, sensors, vehicles, devices, buildings, and roads are used as components to probe city dynamics. Their data representation is widely available, including GPS traces of vehicles. However, their applications are data-processing and storage hungry, because their data grow in large volumes, from a few dozen TB (terabytes) to thousands of PB (petabytes), i.e. Big Data. To advance the development and the assessment of applications such as LBS (Location Based Services), a benchmark of Big Data is urgently needed. This research is a novel study of LBS to characterize and analyze Big Data benchmarks. We focus on map-matching, which is used as a pre-processing step in many LBS applications. In this preliminary work, this paper also describes the current status of Big Data benchmarks and our future direction.

  10. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

    COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code is a little different in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as other codes. The objective of this paper is to present calculational results for a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments which generally involve both simple and complex physical problems. The COG results, which are reported in this paper, have been excellent.
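
    The record does not reproduce the numerical results; as a minimal sketch of how a code bias is often summarized from a set of criticality benchmarks (mean and spread of calculated minus experimental k-eff), the snippet below uses invented placeholder values, not COG output.

```python
from statistics import mean, stdev

# (calculated k_eff, experimental benchmark k_eff) -- placeholder values only
results = [(0.9982, 1.0000), (1.0011, 1.0000), (0.9975, 1.0000), (1.0003, 1.0000)]

diffs = [calc - expt for calc, expt in results]   # per-case deviation
bias = mean(diffs)                                # average code bias
spread = stdev(diffs)                             # sample standard deviation

print(f"bias    = {bias:+.5f}")
print(f"1-sigma = {spread:.5f}")
```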

  11. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  12. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
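
    The weak scaling analysis mentioned above is the usual calculation of efficiency from run times at fixed per-core workload; a minimal sketch is given below with made-up timings, not the MFiX measurements.

```python
# (core count, wall-clock seconds) for a fixed per-core workload -- made-up numbers
timings = [(1, 120.0), (8, 126.5), (64, 138.2), (512, 171.4)]

t_ref = timings[0][1]
for cores, t in timings:
    efficiency = t_ref / t          # weak-scaling efficiency, 1.0 is ideal
    print(f"{cores:5d} cores: efficiency = {efficiency:5.2f}")
```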

  13. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
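
    As a crude, purely illustrative analogue of the memory-bandwidth and floating-point metrics mentioned (not the authors' actual benchmark suite), one could time a large array copy and an elementwise multiply-add with NumPy; the array size and the derived rates are assumptions, and absolute numbers depend heavily on whether the code runs on bare metal or in a VM.

```python
import time
import numpy as np

N = 50_000_000                      # ~400 MB per float64 array
a = np.random.rand(N)
b = np.empty_like(a)

t0 = time.perf_counter()
np.copyto(b, a)                     # streaming copy: one read + one write
dt = time.perf_counter() - t0
print(f"memory bandwidth ~ {2 * a.nbytes / dt / 1e9:.1f} GB/s")

t0 = time.perf_counter()
c = a * 1.000001 + 0.5              # one multiply + one add per element
dt = time.perf_counter() - t0
print(f"floating point   ~ {2 * N / dt / 1e9:.2f} GFLOP/s")
```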

  14. The necessity of improvement for the current LWR fuel assembly homogenization method

    International Nuclear Information System (INIS)

    Tang Chuntao; Huang Hao; Zhang Shaohong

    2007-01-01

    When the modern LWR core analysis method is used to do core nuclear design and in-core fuel management calculation, how to accurately obtain the fuel assembly homogenized parameters is a crucial issue. In this paper, taking the NEA C5G7-MOX benchmark problem as a severe test problem, which involves low-enriched uranium assemblies interspersed with MOX assemblies, we have re-examined the applicability of the two major assumptions of the modern equivalence theory for fuel assembly homoge- nization, i.e. the isolated assembly spatial spectrum assumption and the condensed two- group representation assumption. Numerical results have demonstrated that for LWR cores with strong spectrum interaction, both of these two assumptions are no longer applicable and the improvement for the homogenization method is necessary, the current two-group representation should be improved by the multigroup representation and the current reflective assembly boundary condition should be improved by the 'real' assembly boundary condition. This is a research project supported by National Natural Science Foundation of China (10605016). (authors)

  15. Report on the on-going EUREDATA Benchmark Exercise on data analysis

    International Nuclear Information System (INIS)

    Besi, A.; Colombo, A.G.

    1989-01-01

    In April 1987 the JRC was charged by the Assembly of the EuReDatA members with the organization and the coordination of a Benchmark Exercise (BE) on data analysis. The main aim of the BE is a comparison of the methods used by the various organizations to estimate reliability parameters and functions from field data. The reference data set was to be constituted by raw data taken from the Component Event Data Bank (CEDB). The CEDB is a centralized bank, which collects data describing the operational behaviour of components of nuclear power plants operating in various European Countries. (orig./HSCH)

  16. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
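
    Goal (1) above, characterizing kernels by computational intensity and memory-bandwidth pressure, is essentially a roofline-style estimate; the sketch below shows that calculation under the simple roofline model, with the FLOP count, byte count and machine parameters as invented placeholders rather than measured WRF kernel data.

```python
def roofline_gflops(flops, bytes_moved, peak_gflops, peak_bw_gbs):
    """Attainable performance under the simple roofline model."""
    intensity = flops / bytes_moved                  # FLOP per byte
    return intensity, min(peak_gflops, intensity * peak_bw_gbs)

# made-up numbers for a hypothetical advection kernel on a hypothetical node
intensity, attainable = roofline_gflops(flops=4.2e9, bytes_moved=6.0e9,
                                        peak_gflops=80.0, peak_bw_gbs=25.0)
print(f"intensity  = {intensity:.2f} FLOP/byte")
print(f"attainable = {attainable:.1f} GFLOP/s")
```

    A kernel whose intensity puts it left of the roofline knee, as in this example, is bandwidth-bound, which is the "memory bandwidth pressure" the abstract refers to.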

  17. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

    KAUST Repository

    Heilbron, Fabian Caba

    2015-06-02

    In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.

  18. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding

    KAUST Repository

    Heilbron, Fabian Caba; Castillo, Victor; Ghanem, Bernard; Niebles, Juan Carlos

    2015-01-01

    In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.

  19. Analysis of CSNI benchmark test on containment using the code CONTRAN

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Raj, V.V.; Kakodkar, A.

    1994-01-01

    A programme of experimental as well as analytical studies on the behaviour of nuclear reactor containment is being actively pursued. A large number of experiments on pressure and temperature transients have been carried out on a one-tenth scale model vapour suppression pool containment experimental facility, simulating the 220 MWe Indian Pressurised Heavy Water Reactors. A programme of development of computer codes is underway to enable prediction of containment behaviour under accident conditions. This includes codes for pressure and temperature transients, hydrogen behaviour, aerosol behaviour etc. As a part of this ongoing work, the code CONTRAN (CONtainment TRansient ANalysis) has been developed for predicting the thermal hydraulic transients in a multicompartment containment. For the assessment of the hydrogen behaviour, the models for hydrogen transportation in a multicompartment configuration and hydrogen combustion have been incorporated in the code CONTRAN. The code also has models for the heat and mass transfer due to condensation and convection heat transfer. The structural heat transfer is modeled using the one-dimensional transient heat conduction equation. Extensive validation exercises have been carried out with the code CONTRAN. The code CONTRAN has been successfully used for the analysis of the benchmark test devised by the Committee on the Safety of Nuclear Installations (CSNI) of the Organisation for Economic Cooperation and Development (OECD), to test the numerical accuracy and convergence errors in the computation of mass and energy conservation for the fluid and in the computation of heat conduction in structural walls. The salient features of the code CONTRAN, a description of the CSNI benchmark test and a comparison of the CONTRAN predictions with the benchmark test results are presented and discussed in the paper. (author)

  20. Neutron spectra measurement and calculations using data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 in iron benchmark assemblies

    Science.gov (United States)

    Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena

    2017-09-01

    Leakage neutron spectra measurements have been performed on benchmark spherical assemblies - iron spheres with diameters of 20, 30, 50 and 100 cm. The Cf-252 neutron source was placed at the centre of the iron sphere. The proton recoil method was used for the neutron spectra measurements, using spherical hydrogen proportional counters with a diameter of 4 cm and pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is from 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons from the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. The corresponding MCNP neutron spectra calculations based on the data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 were performed. Two calculations were done with the CIELO library: the first used data for all Fe isotopes from CIELO, while the second (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes taken from ENDF/B-VII.1. The energy structure used for the calculations and measurements was 40 gpd (groups per decade) and 200 gpd. The 200 gpd structure corresponds to a lethargy step of about 1%. This relatively fine energy structure makes it possible to analyse the Fe resonance neutron energy structure. The evaluated Fe cross section data were validated by comparisons between the calculated and experimental spectra.

  1. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from an analysis of performance and underlining the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision "from the whole towards the parts" (a fragmented image of the enterprise's value chain) redu...

  2. CRC DEPLETION CALCULATIONS FOR THE NON-RODDED ASSEMBLIES IN BATCHES 8 AND 9 CRYSTAL RIVER UNIT 3

    International Nuclear Information System (INIS)

    Wilson, Michael L.

    2001-01-01

    The purpose of this design analysis is to document the SAS2H depletion calculations of certain non-rodded fuel assemblies from batches 8 and 9 of the Crystal River Unit 3 pressurized water reactor (PWR) that are required for Commercial Reactor Critical (CRC) evaluations to support the development of the disposal criticality methodology. A non-rodded assembly is one which never contains a control rod assembly (CRA) or an axial power shaping rod assembly (APSRA) during its irradiation history. The objective of this analysis is to provide SAS2H generated isotopic compositions for each fuel assembly's depleted fuel and depleted burnable poison materials. These SAS2H generated isotopic compositions are acceptable for use in CRC benchmark reactivity calculations containing the various fuel assemblies

  3. Accuracy assessment of a new Monte Carlo based burnup computer code

    International Nuclear Information System (INIS)

    El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.

    2012-01-01

    Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using the VVER-1000 LEU Benchmark Assembly. ► Differences from the benchmark values were found to be ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at the lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows straightforward and accurate calculations to be performed without having to use the calculated group fluxes for transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.
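
    BUCAL1's internal algorithm is not given in this record; purely as an illustration of the kind of update a tally-driven burnup step performs, the sketch below depletes a single nuclide over one time step from an absorption reaction rate and a decay constant, ignoring production from precursors. All numerical values are made up for the example.

```python
import math

def deplete_one_step(n0, sigma_a_barns, flux, decay_const, dt):
    """Single-nuclide depletion: dN/dt = -(sigma_a * phi + lambda) * N."""
    sigma_a = sigma_a_barns * 1e-24           # barns -> cm^2
    loss_rate = sigma_a * flux + decay_const  # 1/s
    return n0 * math.exp(-loss_rate * dt)

# made-up values: number density (atoms/cm^3), cross section, flux, half-life
n_new = deplete_one_step(n0=1.0e21, sigma_a_barns=50.0, flux=3.0e13,
                         decay_const=math.log(2) / 5.0e7, dt=30 * 86400)
print(f"{n_new:.4e} atoms/cm^3 after 30 days")
```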

  4. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

    The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactor (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III are focused on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in Phase I calculations, the specifications for Phase II and the upcoming challenges in defining the Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular the time-dependent coupled physics models, are the large computational burden and the utilization of non-linear models (expected due to the physics coupling). (authors)

  5. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  6. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of a collaborative benchmarking for Czech economics and management in higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking –, the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  7. Application of a nodal collocation approximation for the multidimensional PL equations to the 3D Takeda benchmark problems

    International Nuclear Information System (INIS)

    Capilla, M.; Talavera, C.F.; Ginestar, D.; Verdú, G.

    2012-01-01

    Highlights: ► The multidimensional PL approximation to the nuclear transport equation is reviewed. ► A nodal collocation method is developed for the spatial discretization of the PL equations. ► Advantages of the method are the lower dimension and good characteristics of the associated algebraic eigenvalue problem. ► The PL nodal collocation method is implemented in the computer code SHNC. ► The SHNC code is verified with 2D and 3D benchmark eigenvalue problems from Takeda and Ikeda, giving satisfactory results. - Abstract: PL equations are classical approximations to the neutron transport equations, which are obtained by expanding the angular neutron flux in terms of spherical harmonics. These approximations are useful to study the behavior of reactor cores with complex fuel assemblies, for the homogenization of nuclear cross-sections, etc., and most of these applications are in three-dimensional (3D) geometries. In this work, we review the multi-dimensional PL equations and describe a nodal collocation method for the spatial discretization of these equations for arbitrary odd order L, which is based on the expansion of the spatial dependence of the fields in terms of orthonormal Legendre polynomials. The performance of the nodal collocation method is studied by means of obtaining the keff and the stationary power distribution of several 3D benchmark problems. The solutions obtained are compared with those of a finite element method and a Monte Carlo method.
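
    The building block of the spatial discretization described above is the expansion of a field in orthonormal Legendre polynomials; the sketch below illustrates only that projection step in 1-D with NumPy (it is not the PL solver, and the sample "flux shape" and quadrature are assumptions of the example).

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(f, order, npts=400):
    """Project f on [-1, 1] onto Legendre polynomials P_0..P_order."""
    x = np.linspace(-1.0, 1.0, npts)
    dx = x[1] - x[0]
    y = f(x)
    coeffs = []
    for n in range(order + 1):
        Pn = L.legval(x, [0] * n + [1])           # P_n evaluated on the grid
        # c_n = (2n+1)/2 * integral of f(x) P_n(x) dx (orthogonality on [-1,1])
        coeffs.append((2 * n + 1) / 2.0 * np.sum(y * Pn) * dx)
    return np.array(coeffs)

f = lambda x: np.cos(np.pi * x / 2)               # sample "flux shape"
c = legendre_coeffs(f, order=4)
xs = np.linspace(-1, 1, 5)
print(np.max(np.abs(L.legval(xs, c) - f(xs))))    # small truncation error
```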

  8. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several… …visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters… …for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different…
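
    The central quality metric in comparisons of this kind is recall against the exact nearest neighbors; a minimal sketch of that computation (brute-force ground truth versus a candidate approximate result) is shown below, independent of any particular ANN library. The random data and the artificially degraded "approximate" answer are placeholders for the example.

```python
import numpy as np

def exact_knn(data, queries, k):
    """Brute-force k nearest neighbors by Euclidean distance."""
    d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

def recall_at_k(approx_ids, exact_ids):
    """Fraction of true neighbors found by the approximate method."""
    hits = sum(len(set(a) & set(e)) for a, e in zip(approx_ids, exact_ids))
    return hits / exact_ids.size

rng = np.random.default_rng(0)
data, queries = rng.normal(size=(1000, 16)), rng.normal(size=(10, 16))
truth = exact_knn(data, queries, k=10)
# here the "approximate" answer is just the truth with one neighbor dropped
approx = np.concatenate([truth[:, :9], truth[:, :1]], axis=1)
print(f"recall@10 = {recall_at_k(approx, truth):.2f}")
```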

  9. Environmental remediation of high-level nuclear waste in geological repository. Modified computer code creates ultimate benchmark in natural systems

    International Nuclear Information System (INIS)

    Peter, Geoffrey J.

    2011-01-01

    Isolation of high-level nuclear waste in permanent geological repositories has been a major concern for over 30 years because of the migration of dissolved radionuclides reaching the water table (10,000-year compliance period) as water moves through the repository and the surrounding area. Repository designs are based on mathematical models that allow for long-term geological phenomena and involve many approximations; however, experimental verification of long-term processes is impossible. Countries must determine whether geological disposal is adequate for permanent storage. Many countries have extensively studied different aspects of safely confining the highly radioactive waste in an underground repository based on the unique geological composition at their selected repository location. This paper discusses two computer codes developed by various countries to study the coupled thermal, mechanical, and chemical processes in these environments, and the migration of radionuclides. Further, this paper presents the results of a case study of the Magma-hydrothermal (MH) computer code, modified by the author, applied to nuclear waste repository analysis. The MH code is verified by simulating natural systems, thus creating the ultimate benchmark. This approach is based on processes, similar to those expected near waste repositories, that currently occur in natural systems. (author)

  10. Assembling large, complex environmental metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Howe, A. C. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Jansson, J. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Malfatti, S. A. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tringe, S. G. [USDOE Joint Genome Institute (JGI), Walnut Creek, CA (United States); Tiedje, J. M. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Plant Soil and Microbial Sciences; Brown, C. T. [Michigan State Univ., East Lansing, MI (United States). Microbiology and Molecular Genetics, Computer Science and Engineering

    2012-12-28

    The large volumes of sequencing data required to sample complex environments deeply pose new challenges to sequence analysis approaches. De novo metagenomic assembly effectively reduces the total amount of data to be analyzed but requires significant computational resources. We apply two pre-assembly filtering approaches, digital normalization and partitioning, to make large metagenome assemblies more computationally tractable. Using a human gut mock community dataset, we demonstrate that these methods result in assemblies nearly identical to assemblies from unprocessed data. We then assemble two large soil metagenomes from matched Iowa corn and native prairie soils. The predicted functional content and phylogenetic origin of the assembled contigs indicate significant taxonomic differences despite similar function. The assembly strategies presented are generic and can be extended to any metagenome; full source code is freely available under a BSD license.
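
    Digital normalization, as usually described, keeps a read only while the median abundance of its k-mers is still below a coverage cutoff; a toy re-implementation of that idea is sketched below, using exact counting in a dictionary rather than the probabilistic counting structures used by production tools, and with invented read data.

```python
from collections import defaultdict
from statistics import median

def digital_normalization(reads, k=5, cutoff=3):
    """Keep a read only while its median k-mer abundance is below the cutoff."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmers:
            continue
        if median(counts[km] for km in kmers) < cutoff:
            kept.append(read)
            for km in kmers:              # only kept reads add to the counts
                counts[km] += 1
    return kept

reads = ["ACGTACGTAC"] * 6 + ["TTGCAATGGC"]   # redundant reads plus one novel read
print(digital_normalization(reads))           # redundancy collapsed, novel read kept
```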

  11. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared with the performance using benchmark software and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  12. ABM11 parton distributions and benchmarks

    International Nuclear Information System (INIS)

    Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf

    2012-08-01

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

  13. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

  14. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  15. Integral measurements on Caliban and Prospero assemblies for nuclear data validation; Mesures integrales sur les assemblages Caliban et Prospero pour la validation des donnees nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Casoli, P.; Authier, N.; Richard, B. [CEA Valduc, 21 - Is-sur-Tille (France); Ducauze-Philippe, M.; Cartier, J. [CEA Bruyeres-le-Chatel, 91 (France)

    2011-07-15

    How can the quality of nuclear data libraries be checked? Performing reference experiments, also called benchmarks, allows evaluated data to be tested. During these experiments, integral values such as reaction rates or effective neutron multiplication coefficients are measured. In this paper, the principles of benchmark construction are explained and illustrated with several works performed on the CALIBAN and PROSPERO critical assemblies operated by the Valduc center: benchmarks for dosimetry, activation reaction studies, and neutron noise measurements. (authors)

  16. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used but only recently published CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of infinite medium hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
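
    For orientation, the "out-scatter" transport correction the paper critiques is the standard approximation in which the transport cross section is the total cross section minus the mean-cosine-weighted scattering cross section; the one-liner below sketches it, together with the resulting diffusion coefficient, using invented one-group numbers rather than anything from the paper.

```python
def outscatter_transport_xs(sigma_total, sigma_scatter, mu_bar):
    """Out-scatter approximation: sigma_tr = sigma_t - mu_bar * sigma_s,
    with diffusion coefficient D = 1 / (3 * sigma_tr)."""
    sigma_tr = sigma_total - mu_bar * sigma_scatter
    return sigma_tr, 1.0 / (3.0 * sigma_tr)

# invented one-group numbers (cm^-1) loosely typical of a moderator
sigma_tr, D = outscatter_transport_xs(sigma_total=1.50, sigma_scatter=1.45, mu_bar=0.40)
print(f"sigma_tr = {sigma_tr:.3f} 1/cm, D = {D:.3f} cm")
```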

  17. Criticality experiments to provide benchmark data on neutron flux traps

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1988-06-01

    The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt% 235U-enriched UO2 fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and the distance between the fuel and the flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulsed neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction rate measurements were made using solid state track recorders to determine absolute fission rates in 235U and 238U. 14 refs., 12 figs., 7 tabs

  18. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    International Nuclear Information System (INIS)

    Batou, A.; Soize, C.; Brie, N.

    2013-01-01

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with the possibility of collisions between grids, and which is subjected to a seismic loading
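
    The reduction step itself, projecting the mass and stiffness matrices onto a basis of global displacement vectors, can be sketched independently of the fuel-assembly application; in the snippet below the matrices and the orthonormal basis are random stand-ins, not the industrial model or the authors' a-priori basis construction.

```python
import numpy as np

def project_rom(M, K, Phi):
    """Galerkin projection of mass and stiffness matrices onto the basis Phi."""
    return Phi.T @ M @ Phi, Phi.T @ K @ Phi

rng = np.random.default_rng(1)
n, m = 200, 8                                  # full DOFs vs. reduced basis size
A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)                    # symmetric positive definite "mass"
B = rng.normal(size=(n, n))
K = B @ B.T + n * np.eye(n)                    # symmetric positive definite "stiffness"
Phi, _ = np.linalg.qr(rng.normal(size=(n, m))) # orthonormal global basis (stand-in)

M_r, K_r = project_rom(M, K, Phi)
print(M_r.shape, K_r.shape)                    # (8, 8) (8, 8): very small ROM
```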

  19. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the classical modal analysis method unsuitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with the possibility of collisions between grids, and which is subjected to a seismic loading.

  20. CRC DEPLETION CALCULATIONS FOR THE NON-RODDED ASSEMBLIES IN BATCHES 4 AND 5 OF CRYSTAL RIVER UNIT 3

    International Nuclear Information System (INIS)

    Wright, Kenneth D.

    1997-01-01

    The purpose of this design analysis is to document the SAS2H depletion calculations of certain non-rodded fuel assemblies from batches 4 and 5 of the Crystal River Unit 3 pressurized water reactor (PWR) that are required for Commercial Reactor Critical (CRC) evaluations to support the development of the disposal criticality methodology. A non-rodded assembly is one which never contains a control rod assembly (CRA) or an axial power shaping rod assembly (APSRA) during its irradiation history. The objective of this analysis is to provide SAS2H generated isotopic compositions for each fuel assembly's depleted fuel and depleted burnable poison materials. These SAS2H generated isotopic compositions are acceptable for use in CRC benchmark reactivity calculations containing the various fuel assemblies