WorldWideScience

Sample records for assembly computational benchmark

  1. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of radiation transport computer codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  2. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  3. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  4. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  5. Results of VVER-440 fuel assembly head benchmark

    International Nuclear Information System (INIS)

    In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics codes and experiments can help to understand these mixing processes better and they can provide information which can support a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D Computational Fluid Dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of Paks NPP Unit 3 are investigated. One of them has a symmetrical pin power profile and the other has an inclined profile. In this benchmark, the same fuel assemblies are investigated by all participants, so the results calculated with different codes and models can be compared with each other. The aims of the benchmark were to compare the participants' results with each other and with in-core measurement data of the Paks NPP, in order to test the different Computational Fluid Dynamics codes and the applied Computational Fluid Dynamics models. This paper contains OKB 'GIDROPRESS' results of the Computational Fluid Dynamics calculations for this benchmark. The results are: in-core thermocouple signals above the selected assemblies; deviations between the in-core thermocouple signals and the outlet average coolant temperatures of the assemblies; axial velocity and temperature profiles along three diameters at the level of the thermocouple; axial velocity and temperature distributions in the cross section at the level of the thermocouple; axial velocity and temperature

  6. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 'core' configurations including different assembly types (17 x 17 pins, 'uranium', 'absorber' or 'MOX' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT and J.P. WEST, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparisons of all the results received

  7. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
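
    The merging of machine and program characterizations described above is essentially a linear model: estimated run time is the sum, over abstract-machine operations, of the program's dynamic operation counts multiplied by the machine's measured per-operation times. A minimal sketch of that idea, with made-up operation names and values (they are not taken from the report):

      # Illustrative linear model: time ~= sum(op_count[op] * op_time[op]).
      machine_times_us = {   # measured per-operation times for one machine (microseconds)
          "fp_add": 0.004, "fp_mul": 0.005, "mem_load": 0.002, "branch": 0.001,
      }
      program_counts = {     # dynamic operation counts measured for one program
          "fp_add": 2.0e9, "fp_mul": 1.5e9, "mem_load": 3.0e9, "branch": 0.8e9,
      }
      estimated_s = sum(program_counts[op] * machine_times_us[op]
                        for op in program_counts) / 1.0e6
      print(f"Estimated execution time: {estimated_s:.1f} s")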

  8. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
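
    The comparison the database is meant to support amounts to error statistics between computed and experimental values for the same molecules. A minimal sketch with placeholder numbers (not actual database entries):

      # Toy comparison of computed vs. experimental enthalpies of formation (kJ/mol).
      # Values are placeholders; real data would come from the benchmark database.
      experimental = {"H2O": -241.8, "CO2": -393.5, "CH4": -74.6}
      computed     = {"H2O": -239.9, "CO2": -395.1, "CH4": -73.2}

      errors = [computed[m] - experimental[m] for m in experimental]
      mae  = sum(abs(e) for e in errors) / len(errors)        # mean absolute error
      rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5

      print(f"MAE  = {mae:.2f} kJ/mol")
      print(f"RMSE = {rmse:.2f} kJ/mol")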

  9. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    International Nuclear Information System (INIS)

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First, a short update of the methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  10. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
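
    The scheme is a fixed-time benchmark: rather than timing a fixed workload, each computer is given a fixed interval and rated by how far it progresses through a scalable set of tasks. A minimal sketch of the idea, with an illustrative scalable task (this is not the patented implementation):

      import time

      def scalable_task(resolution):
          """Stand-in for a task whose answer improves with resolution
          (here: a crude midpoint-rule integral of x**2 on [0, 1])."""
          h = 1.0 / resolution
          return sum(((i + 0.5) * h) ** 2 for i in range(resolution)) * h

      def fixed_time_benchmark(interval_s=1.0):
          """Run ever finer resolutions until the allotted interval expires;
          the rating is the finest resolution completed (simplified sketch:
          a task that overruns the deadline is still counted)."""
          deadline = time.monotonic() + interval_s
          resolution, completed = 1, 0
          while time.monotonic() < deadline:
              scalable_task(resolution)
              completed = resolution
              resolution *= 2          # scale the problem up for the next pass
          return completed

      print("Benchmark rating (finest resolution completed):", fixed_time_benchmark())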

  11. Numerical benchmarks for MTR fuel assemblies with burnable poison

    International Nuclear Information System (INIS)

    This work presents a preliminary version of a set of burn-up dependent numerical benchmarks of MTR fuel assemblies using burnable poisons. The numerical benchmark calculations were carried out using two different types of calculation methodologies: a Monte Carlo methodology using the coupled MCNP-ORIGEN codes and a deterministic methodology using the CONDOR collision probability code. The main purpose of this work is to provide a numerical benchmark for several geometries, for example the number and diameter of the cadmium wires. The numerical benchmark provides the fuel meat and cadmium number density information together with the geometry and material data of the calculated systems. These benchmarks provide information for the validation of MTR FA cell codes. This paper is the preliminary work of a 3-dimensional numerical benchmark for research reactors using MTR fuel assemblies with burnable poisons. A short description of the MCNP and ORIGEN coupling method and of the CONDOR code is given in the present paper. (author)
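
    The number densities referred to (for the fuel meat and the cadmium wires) follow from the standard relation N = ρ·N_A/M. A minimal sketch, with illustrative density and molar-mass values:

      # Atom number density N = rho * N_A / M, in atoms/(barn*cm) as cell codes usually expect.
      AVOGADRO = 6.02214076e23       # atoms/mol
      BARN_CM  = 1.0e-24             # cm^2 per barn

      def number_density(rho_g_cm3, molar_mass_g_mol):
          """Return the atom density in atoms/(barn*cm)."""
          return rho_g_cm3 * AVOGADRO / molar_mass_g_mol * BARN_CM

      # Illustrative values only (natural cadmium wire):
      print(f"Cd: {number_density(8.65, 112.41):.5e} atoms/(barn*cm)")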

  12. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities is benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  13. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel

  14. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...
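
    As a toy illustration of the multiparty-computation idea (not the deployed protocol), additive secret sharing lets several computing parties obtain a sum of confidential inputs, and hence aggregate figures for a benchmarking score, without any single party seeing an individual input:

      import secrets

      PRIME = 2**61 - 1          # arithmetic is done modulo a public prime

      def share(value, n_parties):
          """Split an integer into n additive shares that sum to it mod PRIME."""
          shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
          shares.append((value - sum(shares)) % PRIME)
          return shares

      # Each bank secret-shares its confidential figure among the computing parties.
      inputs = [1200, 875, 990]                       # confidential values (toy data)
      all_shares = [share(v, 3) for v in inputs]

      # Each computing party adds the shares it holds; no single party sees any input.
      partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
      total = sum(partial_sums) % PRIME

      print("Reconstructed sum:", total, "(expected:", sum(inputs), ")")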

  15. Benchmark Calculations For A VVER-1000 Assembly Using SRAC

    International Nuclear Information System (INIS)

    This work presents the neutronic calculation results of a VVER-1000 assembly using SRAC with 107 energy groups in comparison with the benchmark values in the OECD/NEA report. The main neutronic characteristics calculated in this comparison include the infinite multiplication factor (k-inf), nuclide densities as a function of burnup and the pin-wise power distribution. Calculations were conducted for various conditions of fuel, coolant and boron content in the coolant. (author)

  16. Benchmark on deterministic transport calculations without spatial homogenization. A 2-D/3-D MOX fuel assembly benchmark

    International Nuclear Information System (INIS)

    One of the important issues regarding deterministic transport methods for whole core calculations is that homogenization techniques can introduce errors into the results. On the other hand, with modern computational capabilities, direct whole core heterogeneous calculations are becoming increasingly feasible. This report provides an analysis of the results obtained from a challenging benchmark on deterministic MOX fuel assembly transport calculations without spatial homogenization. A majority of the participants obtained solutions that were more than acceptable for typical reactor calculations. The report will be of particular interest to reactor physicists and transport code developers. (author)

  17. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298
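
    Wall-clock comparisons of this kind reduce to timing the same pipeline stage on each provider's instances. A minimal sketch of such a measurement (the command is a placeholder, not the authors' Hadoop pipeline):

      import subprocess, time

      def wall_clock(cmd):
          """Run a shell command and return its wall-clock time in seconds."""
          start = time.monotonic()
          subprocess.run(cmd, shell=True, check=True)
          return time.monotonic() - start

      # Placeholder standing in for one assembly-pipeline stage; run on each provider
      # and compare the resulting times (and instance-hour costs) between services.
      print(f"Wall-clock time: {wall_clock('sleep 2'):.1f} s")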

  18. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  19. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
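
    The scalability quantities mentioned reduce to event throughput and memory use as functions of thread count, compared against the single-thread case. A minimal post-processing sketch with illustrative numbers (real values would come from the profiling runs; "memory gain" is read here as memory saved relative to running the same workload as independent single-threaded jobs):

      # runs: thread count -> (events processed, wall time [s], resident memory [MB])
      runs = {1: (1000, 800.0, 900), 4: (4000, 820.0, 1200), 8: (8000, 840.0, 1600)}

      base_events, base_time, base_mem = runs[1]
      base_throughput = base_events / base_time
      for threads, (events, wall, mem) in sorted(runs.items()):
          throughput = events / wall                    # events per second
          speedup = throughput / base_throughput        # throughput gain vs. 1 thread
          mem_gain = threads * base_mem / mem           # memory saved vs. N separate jobs
          print(f"{threads:2d} threads: {throughput:5.2f} ev/s, "
                f"speedup {speedup:4.2f}, memory gain {mem_gain:4.2f}")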

  20. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  1. APOLLO2 and TRIPOLI4 solutions of the OECD VVER-1000 LEU and MOX assembly benchmark

    International Nuclear Information System (INIS)

    Highlights: ► APOLLO2 MOC calculation schemes were tested for VVER UGd and MOXGd assemblies. ► Depletion and branch calculations were performed. ► The results are close to the Monte Carlo and deterministic reference solutions. ► The Linear Surface MOC gives accurate and computationally efficient solutions. ► The higher-order MOC in APOLLO2 can be recommended for industrial applications. - Abstract: The OECD benchmark for VVER-1000 UGd and MOXGd assemblies was solved with the APOLLO2 and TRIPOLI4 codes. The objective was to verify the TRIPOLI4 Monte-Carlo solution and to assess the APOLLO2 method-of-characteristics-based calculation routes for VVER assemblies. The test problems address important VVER V&V topics such as advanced fuel assemblies, depletion and branch calculations. Solutions with Monte-Carlo and deterministic codes from the OECD benchmark report are available for comparison. The APOLLO2 results obtained with reference and two-level 281/37g MOC calculation schemes are close to the Monte-Carlo reference solutions and the mean of all codes. The higher-order Linear Surface MOC is shown to give accurate and computationally efficient solutions

  2. Computational benchmark for deep penetration in iron

    International Nuclear Information System (INIS)

    A benchmark for calculation of neutron transport through iron is now available based upon a rigorous Monte Carlo treatment of ENDF/B-IV and ENDF/B-V cross sections. The currents, flux, and dose (from monoenergetic 2, 14, and 40 MeV sources) have been tabulated at various distances through the slab using a standard energy group structure. This tabulation is available in a Los Alamos Scientific Laboratory report. The benchmark is simple to model and should be useful for verifying the adequacy of one-dimensional transport codes and multigroup libraries for iron. This benchmark also provides useful insights regarding neutron penetration through iron and displays differences in fluxes calculated with ENDF/B-IV and ENDF/B-V data bases

  3. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Rahul Ravindrudu

    2004-12-19

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm; conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when the I/O strategy is closely tied to the out-of-core algorithm, and that out-of-core algorithms must be designed as such from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
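
    The contrast drawn above between the right-looking and left-looking out-of-core variants is essentially about how many panel-sized transfers each performs. A small counting sketch of the access patterns as described in the report (the factorization arithmetic itself is omitted, and the tallies are deliberately simplified):

      def right_looking_panel_ios(n_panels):
          """At every stage the whole trailing submatrix is read in (and written back)
          panel by panel, so the transfer count grows quadratically with the panel count."""
          ios = 0
          for stage in range(n_panels):
              trailing = n_panels - stage      # panels left in the trailing submatrix
              ios += 2 * trailing              # one read + one write per trailing panel
          return ios

      def left_looking_panel_ios(n_panels):
          """Simplified tally of the report's claim of I/O linear in the number of
          columns: one read and one write of the current panel per stage."""
          return 2 * n_panels

      for n in (8, 32, 128):
          print(n, right_looking_panel_ios(n), left_looking_panel_ios(n))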

  4. SCALE and SERPENT solutions of the OECD VVER-1000 LEU and MOX burnup computational benchmark

    International Nuclear Information System (INIS)

    Highlights: • New solutions for the VVER-1000 LEU and MOX burnup computational benchmark have been obtained using ENDF/B-VII and JEFF3.1 nuclear data libraries. • The SERPENT and SCALE codes have been used for the first time to solve the benchmark exercises. • The comparison of our results with the ones available in the literature shows generally a good agreement over all the reactor states considered in terms of reactivity values, pin-by-pin fission rate distributions and nuclide concentrations. • The SERPENT models for the LEU and MOX assemblies have also been tested with JEF2.2, making this work also a new Monte Carlo reference solution for the benchmark exercise with modern nuclear data libraries. - Abstract: The loading of hybrid cores with Mixed Uranium Plutonium Oxide (MOX) and Low Enriched Uranium (LEU) fuels in commercial nuclear reactors requires well validated computational methods and codes capable of providing reliable predictions of the neutronics characteristics of such fuels in terms of reactivity conditions (kinf), nuclide inventory and pin power generation over the entire fuel cycle length. Within the framework of the Joint United States/Russian Fissile Materials Disposition Program an important task is to verify and validate neutronics codes for the use of MOX fuel in VVER-1000 reactors. Benchmark analyses are being performed for both computational benchmarks and experimental benchmarks. In this paper new solutions for the (UO2 + Gd) and (UO2 + PuO2 + Gd) fuel assemblies proposed within the “OECD VVER-1000 Burnup Computational Benchmark” are presented, these being representative of the designs which are expected to be used in the plutonium disposition mission. The objective is to test the SERPENT and SCALE codes against previously obtained solutions and to provide new reference solutions to the benchmark with modern nuclear data libraries

  5. Computed results on the IAEA benchmark problems at JAERI

    International Nuclear Information System (INIS)

    The outline of the computer code system of JAERI for analysing research reactors is presented, and the results of check calculations to validate the code system are evaluated against the experimental data. Using this computer code system, some of the IAEA benchmark problems are solved and the results are compared with those of ANL. (author)

  6. The new deterministic 3-D radiation transport code Multitrans: C5G7 MOX fuel assembly benchmark

    International Nuclear Information System (INIS)

    The novel deterministic three-dimensional radiation transport code MultiTrans is based on a combination of the advanced tree multigrid technique and the simplified P3 (SP3) radiation transport approximation. In the tree multigrid technique, an automatic mesh refinement is performed on material surfaces. The tree multigrid is generated directly from stereo-lithography (STL) files exported by computer-aided design (CAD) systems, thus allowing an easy interface for construction and upgrading of the geometry. The deterministic MultiTrans code allows fast and detailed solution of complicated three-dimensional transport problems, offering a new tool for nuclear applications in reactor physics. In order to determine the feasibility of a new code, computational benchmarks need to be carried out. In this work, the MultiTrans code is tested for a seven-group three-dimensional MOX fuel assembly transport benchmark without spatial homogenization (NEA C5G7 MOX). (author)

  7. A quantitative CFD benchmark for Sodium Fast Reactor fuel assembly modeling

    International Nuclear Information System (INIS)

    Highlights: • A CFD model is benchmarked against the ORNL 19-pin Sodium Test assembly. • Sensitivity was tested for cell size, turbulence model, and wire contact model. • The CFD model was found to appropriately represent the experiment. • CFD was then used as a predictive tool to help understand experimental uncertainty. • Comparisons to subchannel results were carried out as well. - Abstract: This paper details a CFD model of a 19-pin wire-wrapped sodium fuel assembly experiment conducted at Oak Ridge National Laboratory in the 1970s. Model sensitivities were tested for cell size, turbulence closure, wire-wrap contact, inlet geometry, outlet geometry, and conjugate heat transfer. The final model was compared to the experimental results quantitatively to establish confidence in the approach. The results were also compared to the sub-channel analysis code COBRA-IV-I-MIT. Experiment and CFD computations were consistent inside the bundle. Experimental temperature measurements from thermocouples embedded in the heated length of the bundle are consistently reproduced by the CFD code predictions across a wide range of operating conditions. The demonstrated agreement provides confidence in the predictive capabilities of the approach. However, a significant discrepancy between the CFD code predictions and the experimental data was found at the bundle outlet. Further sensitivity studies are presented to support the conclusion that this discrepancy is caused by significant uncertainty associated with the experimental data reported for the bundle outlet

  8. Computer organization and assembly language programming

    CERN Document Server

    Peterson, James L

    2014-01-01

    Computer Organization and Assembly Language Programming deals with lower-level computer programming (machine or assembly language) and how these are used in the typical computer system. The book explains the operations of the computer at the machine language level. The text reviews basic computer operations and organization, and deals primarily with the MIX computer system. The book describes assembly language programming techniques, such as defining appropriate data structures, determining the information for input or output, and the flow of control within the program. The text explains basic I/O

  9. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    Science.gov (United States)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a

  10. Benchmarking Severe Accident Computer Codes for Heavy Water Reactor Applications

    International Nuclear Information System (INIS)

    Requests for severe accident investigations and assurance of mitigation measures have increased for operating nuclear power plants and the design of advanced nuclear power plants. Severe accident analysis investigations necessitate the analysis of the very complex physical phenomena that occur sequentially during various stages of accident progression. Computer codes are essential tools for understanding how the reactor and its containment might respond under severe accident conditions. The IAEA organizes coordinated research projects (CRPs) to facilitate technology development through international collaboration among Member States. The CRP on Benchmarking Severe Accident Computer Codes for HWR Applications was planned on the advice and with the support of the IAEA Nuclear Energy Department's Technical Working Group on Advanced Technologies for HWRs (the TWG-HWR). This publication summarizes the results from the CRP participants. The CRP promoted international collaboration among Member States to improve the phenomenological understanding of severe core damage accidents and the capability to analyse them. The CRP scope included the identification and selection of a severe accident sequence, selection of appropriate geometrical and boundary conditions, conduct of benchmark analyses, comparison of the results of all code outputs, evaluation of the capabilities of computer codes to predict important severe accident phenomena, and the proposal of necessary code improvements and/or new experiments to reduce uncertainties. Seven institutes from five countries with HWRs participated in this CRP

  11. Inelastic finite element analysis of a pipe-elbow assembly (benchmark problem 2)

    International Nuclear Information System (INIS)

    In the scope of the international benchmark problem effort on piping systems, benchmark problem 2 consisting of a pipe elbow assembly, subjected to a time dependent in-plane bending moment, was analysed using the finite element program MARC. Numerical results are presented and a comparison with experimental results is made. It is concluded that the main reason for the deviation between the calculated and measured values is due to the fact that creep-plasticity interaction is not taken into account in the analysis. (author)

  12. Nuclear fuel assembly identification using computer vision

    International Nuclear Information System (INIS)

    This report describes an improved method of remotely identifying irradiated nuclear fuel assemblies. The method uses existing in-cell TV cameras to input an image of the notch-coded top of the fuel assemblies into a computer vision system, which then produces the identifying number for that assembly. This system replaces systems that use either a mechanical mechanism to feel the notches or use human operators to locate notches visually. The system was developed for identifying fuel assemblies from the Fast Flux Test Facility (FFTF) and the Clinch River Breeder Reactor, but could be used for other reactor assembly identification, as appropriate
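
    Decoding a notch-coded handling socket amounts to mapping the presence or absence of notches at known positions to an identification number. A minimal sketch of the decode step that follows notch detection; the bit ordering and width below are hypothetical, not the actual FFTF notch code:

      def decode_notch_pattern(notches):
          """Interpret a tuple of detected notch flags (True = notch present) as a
          binary-coded assembly ID. The encoding is hypothetical."""
          assembly_id = 0
          for present in notches:
              assembly_id = (assembly_id << 1) | int(present)
          return assembly_id

      # Example: pattern found by the vision system at eight known socket positions.
      pattern = (True, False, True, True, False, False, True, False)
      print("Assembly ID:", decode_notch_pattern(pattern))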

  13. Argonne National Laboratory results for the OECD 3-D Mox fuel assembly benchmark (C5G7 Mox Benchmark Extension)

    International Nuclear Information System (INIS)

    Two prototypic versions of the Argonne National Laboratory (ANL) nodal neutronics transport code VARIANT are used to solve the OECD 3-D Mox fuel assembly benchmark. The first prototypical code is titled VARIANT-SE, and is based upon the classical spherical harmonics expansion of the angular flux within the node. The second prototypical code is titled VARIANT-ISE, and it uses a form of integral transport internal to each node. In both cases, spherical harmonic Lagrange multipliers are maintained to couple the nodes together. Also, spatial finite elements within each node are used to model the fuel pin cell geometry specified in the OECD benchmark. In general the VARIANT-ISE code yielded results that were comparable to the Monte-Carlo reference solution. The remaining errors resulted primarily from the low space-angle approximation applied. The VARIANT-SE code produced results that were less accurate than those of VARIANT-ISE. This is a direct result of the lack of refinement in the space-angle approximation applied in the VARIANT-SE code

  14. Air ingress benchmarking with computational fluid dynamics analysis

    International Nuclear Information System (INIS)

    The air ingress accident is a complicated accident scenario that may limit the deployment of high-temperature gas reactors. The complexity of this accident scenario is compounded by multiple physical phenomena that are involved in the air ingress event. These include diffusion, natural circulation, and complex chemical reactions with graphite and oxygen. In an attempt to better understand the phenomenon, the FLUENT-6 computational fluid dynamics code was used to assess two air ingress experiments. The first was the Japanese series of tests performed in the early 1990s by Takeda and Hishida. These separate-effects tests were conducted to understand and model a multi-component experiment in which all three processes were included with the introduction of air in a heated graphite column. MIT used the FLUENT code to benchmark this series of tests with quite good results. These tests are generically applicable to prismatic reactors and the lower reflector regions of pebble-bed reactors. The second series of tests was performed at the NACOK facility for pebble bed reactors as reported by Kuhlmann [Kuhlmann, M.B., 1999. Experiments to investigate flow transfer and graphite corrosion in case of air ingress accidents in a high-temperature reactor]. These tests were aimed at understanding natural circulation of pebble bed reactors by simulating hot and cold legs of these reactors. The FLUENT code was also successfully used to simulate these tests. The results of these benchmarks and the findings will be presented

  15. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    The fusion neutronics benchmark experiments have been done for vanadium and a vanadium alloy by using the slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations performed using evaluated nuclear data from JENDL-3.2, JENDL-Fusion File and FENDL/E-1.0. (author)

  16. Benchmark calculation with MOSRA-SRAC for burnup of a BWR fuel assembly

    International Nuclear Information System (INIS)

    The Japan Atomic Energy Agency has developed the Modular Reactor Analysis Code System MOSRA to improve the applicability of neutronic characteristics modeling. The cell calculation module MOSRA-SRAC is based on the collision probability method and is one of the core modules of the MOSRA system. To test the module on a real-world problem, it was combined with the benchmark program 'Burnup Credit Criticality Benchmark Phase IIIC.' In this program participants are requested to submit the neutronic characteristics of burnup calculations for a BWR fuel assembly containing fuel rods poisoned with gadolinium (Gd2O3), which is similar to the fuel assembly at TEPCO's Fukushima Daiichi Nuclear Power Station. Because of certain restrictions of the MOSRA-SRAC burnup calculations, part of the geometry model was homogenized. In order to verify the validity of MOSRA-SRAC, including the effects of the homogenization, the calculated burnup-dependent infinite multiplication factor and the nuclide compositions were compared with those obtained with the burnup calculation code MVP-BURN which had already been validated for many benchmark problems. As a result of the comparisons, the applicability of the MOSRA-SRAC module for the BWR assembly has been verified. Furthermore, it can be shown that the effects of the homogenization are smaller than the effects due to the calculation method for both multiplication factor and compositions. (author)

  17. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    International Nuclear Information System (INIS)

    The report describes the final results of Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of the current computer code and data library combinations for the neutron multiplication factor (keff) of a layered array model of irradiated BWR fuel assemblies. In total, 22 benchmark problems are proposed for calculations of keff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and axial burnup profile, and inclusion of an axial profile of void fraction or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of keff values calculated by the participants from the mean value is almost within the band of ±1%Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)
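
    The quoted ±1% band is simply the spread of each participant's keff about the mean of all submitted values, expressed as a relative deviation. A minimal sketch of that reduction with illustrative keff values (not the submitted results):

      # Relative deviation of each participant's k-eff from the mean of all results.
      keff_results = [0.9132, 0.9145, 0.9120, 0.9160, 0.9138]   # illustrative values

      mean_keff = sum(keff_results) / len(keff_results)
      for i, k in enumerate(keff_results, start=1):
          rel_dev_pct = 100.0 * (k - mean_keff) / mean_keff      # % dk/k from the mean
          print(f"participant {i}: k-eff = {k:.4f}, deviation = {rel_dev_pct:+.2f} %")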

  18. OECD/NEA burnup credit criticality benchmarks phase IIIB: Burnup calculations of BWR fuel assemblies for storage and transport

    International Nuclear Information System (INIS)

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results in the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated from the averaged value noticeably for the void fraction of 70%. (author)

  19. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results in the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated from the averaged value noticeably for the void fraction of 70%. (author)

  20. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on the one hand, the global approach based on the concept of the crack driving force J, and on the other hand, a local approach of ductile fracture. In this approach the crack initiation and growth are modelled by the nucleation, growth and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA developed by EDF, the CEA's code CASTEM 2000 and FRAMATOME's code SYSTUS. The paper is split into three parts. At first, the authors present the experimental procedure for high temperature toughness testing of two CT specimens taken from a welded pipe, characteristic of pressurized water reactor primary piping. Secondly, considerations are outlined about the Finite Element analysis and the application procedure. A detailed description is given of the boundary and loading conditions, the mesh characteristics, the numerical scheme involved and the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to the crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load

  1. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. W

  2. Analysis of the previous and preparation of new experiments on fast multiplying assemblies for obtaining benchmark data on criticality

    International Nuclear Information System (INIS)

    The JIPNR-Sosny of the NAS of Belarus created and explored a number of uranium-containing critical assemblies of the BTS series in designing the fast BRIG-300 reactor with N2O4 ↔ 2NO2 ↔ 2NO + O2 coolant and the PVER fast-resonance neutron spectrum reactor with a steam-water coolant. Research into the physics of these reactors was performed on fast-thermal critical assemblies at the critical facility Roza. Structurally, these critical assemblies consisted of fast and thermal reactor cores and buffer zones located between them, intended to convert the leakage neutron spectrum from the thermal zone into the neutron spectrum of the modelled fast reactor. The fast zones are a non-uniform hexagonal lattice of cylindrical fuel rods with fuel compositions based on metallic U (90% U-235), UO2 (36% U-235) and depleted U (0.4% U-235), plus rods with SiO2; the buffer zone is a non-uniform hexagonal lattice of cylindrical fuel rods based on UO2 (36% U-235), natural U and depleted U (0.4% U-235), rods with B4C and made from stainless steel; the thermal zone is a uniform rectangular uranium-polyethylene lattice of cylindrical fuel rods based on the fuel composition UO2+Mg (10% U-235). For obtaining benchmark data on criticality, computational models have been developed and an analysis of the experiments has been carried out to evaluate the experimental results as criticality benchmark data. The JIPNR-Sosny of the NAS of Belarus also prepared experiments on the criticality of multiplying systems simulating some physical features of the cores of fast, low-power, small-size gas-cooled reactors with UZrCN nuclear fuel. For these purposes, the P-20 critical assemblies were developed at the critical facility “Giacint”. These assemblies represent a uniform hexagonal lattice of fuel cassettes: the central area is based on cylindrical fuel rods with UZrCN (19.75% U-235), the peripheral area is based on cylindrical fuel rods with metallic U (90% U-235), UO2 (36% U-235) and natural U; and the reflector on

  3. Parallel Computation Using Active Self-assembly

    OpenAIRE

    Chen, Moya; Xin, Doris; Woods, Damien

    2013-01-01

    We study the computational complexity of the recently proposed nubot model of molecular-scale self-assembly. The model generalises asynchronous cellular automata to have non-local movement where large assemblies of molecules can be pushed and pulled around, analogous to millions of molecular motors in animal muscle effecting the rapid movement of macroscale arms and legs. We show that the nubot model is capable of simulating Boolean circuits of polylogarithmic depth and polynomial size, in on...

  4. Nuclear fuel assembly identification using computer vision

    International Nuclear Information System (INIS)

    A new method of identifying fuel assemblies has been developed. The method uses existing in-cell TV cameras to read the notch-coded handling sockets of Fast Flux Test Facility (FFTF) assemblies. A computer looks at the TV image, locates the notches, decodes the notch pattern, and produces the identification number. A TV camera is the only in-cell equipment required, thus avoiding complex mechanisms in the hot cell. Assemblies can be identified in any location where the handling socket is visible from the camera. Other advantages include low cost, rapid identification, low maintenance, and ease of use
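
    As an illustration of the decoding step this record describes, the short Python sketch below turns a detected notch pattern into an identification number. The binary encoding, the parity convention and the bit order are assumptions made for illustration; the record does not specify the actual FFTF socket code.

```python
# Hypothetical sketch of the decoding step: given the notch pattern detected
# in the TV image (a list of booleans, one per notch position), reconstruct
# an assembly identification number.  The binary encoding and the parity
# check are assumptions for illustration, not the actual FFTF socket code.

def decode_notch_pattern(notches):
    """Interpret detected notches as a binary-coded ID with a parity bit."""
    if len(notches) < 2:
        raise ValueError("need at least one data notch and a parity notch")
    *data_bits, parity_bit = notches
    # Assumed convention: the last notch is an even-parity check over the data bits.
    if sum(data_bits) % 2 != int(parity_bit):
        raise ValueError("parity check failed - image may be misread")
    assembly_id = 0
    for bit in data_bits:               # most significant notch first (assumed)
        assembly_id = (assembly_id << 1) | int(bit)
    return assembly_id

if __name__ == "__main__":
    # Pattern as it might come from the notch-locating stage of the vision code.
    detected = [True, False, True, True, False, True]   # 5 data bits + parity
    print("Assembly ID:", decode_notch_pattern(detected))
```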

  5. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
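
    To make the geometry of this benchmark concrete, the sketch below evaluates only the uncollided (exponential) attenuation of a normally incident monoenergetic beam through the 3-m iron slab. The cross-section value is an assumed round number for illustration; an actual solution of the benchmark requires a transport code, since scattered neutrons dominate the flux at depth.

```python
# Minimal sketch of the uncollided-flux part of this benchmark: exponential
# attenuation of a normally incident monoenergetic beam through a 3 m iron
# slab.  The microscopic cross section is an illustrative round number, not
# an evaluated value.
import math

RHO_FE = 7.87          # g/cm^3
A_FE = 55.85           # g/mol
AVOGADRO = 6.022e23
SIGMA_TOT_BARNS = 3.0  # assumed total cross section near 2 MeV (illustrative)

number_density = RHO_FE * AVOGADRO / A_FE                      # atoms/cm^3
sigma_macroscopic = number_density * SIGMA_TOT_BARNS * 1e-24   # 1/cm

for depth_cm in (0.0, 10.0, 50.0, 100.0, 300.0):
    attenuation = math.exp(-sigma_macroscopic * depth_cm)
    print(f"{depth_cm:6.1f} cm  uncollided fraction = {attenuation:.3e}")
```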

  6. Fusion blanket benchmark experiments on a 60 cm-thick lithium-oxide cylindrical assembly

    International Nuclear Information System (INIS)

    Integral experiments on a Li2O cylindrical assembly have been carried out, using the FNS facility, to provide benchmark data for verification of the methods and data used in fusion neutronics research. The size of the assembly was 63 cm (diameter) by 61 cm (length). Measurements included 6Li and 7Li tritium production rates; 235U, 238U, 237Np and 232Th fission rates; and 27Al(n,α)24Na, 58Ni(n,2n)57Ni, 115In(n,n')115mIn and 115In(n,γ)116In reaction rates. Neutron energy spectra in the assembly, as well as the response rates of TLDs and PIN diodes, were also measured. The measured data are presented in tabular form together with estimated errors. A sample calculation using the DOT3.5 code is provided to help the reader understand the experiments. Although several different measuring techniques are used in the experiment, the data are mutually consistent. This consistency supports the application of the present experimental data to the benchmark verification of methods and data. (author)

  7. Computation of flow through a block assembly

    International Nuclear Information System (INIS)

    A simple procedure is presented for the computation of flow through gaps in the assembly block. This procedure enables estimation of bypass flows through the reflector of a gas-cooled reactor. The method is based on a simplified channel-network representation of the gap configuration. Using a computer program, the procedure was verified against an experimental model; the results of the computation were in good agreement with the experimental data. A typical three-dimensional model of a gas-cooled reflector was also computed. (authors) 2 refs, 3 figs
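
    The channel-network idea can be illustrated with a tiny example: gaps are treated as channels with an assumed linear (laminar) pressure-flow relation Q = C·Δp, and nodal mass balance gives a linear system for the unknown node pressures. The geometry, conductances and boundary pressures below are made-up illustrative numbers, not values from the record.

```python
# Sketch of a simplified channel network for bypass-flow estimation.
# Network: inlet (p fixed) -- node 1 -- node 2 -- outlet (p fixed),
# plus a bypass channel directly from node 1 to the outlet.
import numpy as np

C = {("in", 1): 2.0e-4, (1, 2): 1.5e-4, (2, "out"): 2.0e-4, (1, "out"): 0.5e-4}
p_in, p_out = 5.0e5, 1.0e5        # Pa (assumed boundary pressures)

# Mass balance at nodes 1 and 2: sum of inflows = 0.
A = np.array([
    [-(C[("in", 1)] + C[(1, 2)] + C[(1, "out")]), C[(1, 2)]],
    [C[(1, 2)], -(C[(1, 2)] + C[(2, "out")])],
])
b = np.array([
    -C[("in", 1)] * p_in - C[(1, "out")] * p_out,
    -C[(2, "out")] * p_out,
])
p1, p2 = np.linalg.solve(A, b)

bypass_flow = C[(1, "out")] * (p1 - p_out)
main_flow = C[(2, "out")] * (p2 - p_out)
print(f"p1 = {p1:.0f} Pa, p2 = {p2:.0f} Pa")
print(f"bypass fraction = {bypass_flow / (bypass_flow + main_flow):.2%}")
```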

  8. Benchmark experiment on a copper slab assembly bombarded by D-T neutrons

    International Nuclear Information System (INIS)

    Copper is a very important material for fusion reactors because it is used in superconducting magnets, first walls and other components. To verify the nuclear data of copper, a benchmark experiment was performed using the D-T neutron source of the FNS facility at the Japan Atomic Energy Research Institute. A cylindrical experimental assembly, 629 mm in diameter and 608 mm thick and made of pure copper, was located 200 mm from the D-T neutron source. In the assembly, the following quantities were measured: i) neutron spectra in the MeV and keV energy regions, ii) neutron reaction rates, iii) prompt and decay gamma-ray spectra and iv) gamma-ray heating rates. The experimental data obtained are compiled in this report. (author)

  9. PWR assembly transport calculation: A validation benchmark using DRAGON, PENTRAN, and MCNP

    International Nuclear Information System (INIS)

    This paper presents a 2D PWR fuel assembly benchmark performed with 3 transport codes: DRAGON, which uses the collision probability method; PENTRAN, an Sn transport code; and MCNP, a Monte Carlo code. First, DRAGON was used to produce a 2-group pin-by-pin cross-section library associated with 45 materials that describe the fuel assembly. Using the same library, it was then possible to perform comparisons between DRAGON and MCNP, and between PENTRAN and MCNP. Here, MCNP was considered as the reference multigroup Monte Carlo tool used to validate the deterministic codes. This type of 2-group benchmark can be utilized to evaluate the performance of different solvers using the very same cross-sections. The transport solutions provided here may be used as references for further comparisons with industrial reactor core codes using a diffusion or a SPn solver, and generally relying on 2-group cross-sections. Results show an excellent overall agreement between the 3 codes, with discrepancies that are less than 0.5% on the pin-by-pin flux, and less than 20 pcm on the keff. Therefore, it may be concluded that these deterministic codes are reliable tools to perform criticality transport calculations for PWR lattices. Moreover, the use of multigroup Monte Carlo appears as an efficient independent technique to perform detailed code-to-code comparisons relying on the same cross-section library. The present work may be considered as the first step of a 3D PWR core benchmark using DRAGON-generated cross-sections and comparing PENTRAN and MCNP multigroup calculations. (authors)
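
    The comparison metrics quoted in this record (pin-by-pin flux discrepancy and keff difference in pcm) can be written out explicitly. The sketch below uses placeholder values, not the benchmark results, and assumes the common convention of expressing the keff difference as a difference of reactivities.

```python
# Sketch of the comparison metrics: relative pin-by-pin flux discrepancy
# against the Monte Carlo reference, and the keff difference in pcm.
import numpy as np

flux_deterministic = np.array([1.02, 0.98, 1.05, 0.95])   # normalised pin fluxes (placeholder)
flux_reference = np.array([1.018, 0.983, 1.047, 0.952])   # MCNP reference (placeholder)

rel_discrepancy = (flux_deterministic - flux_reference) / flux_reference
print("max pin flux discrepancy: {:.3%}".format(np.max(np.abs(rel_discrepancy))))

keff_deterministic, keff_reference = 1.18642, 1.18630      # placeholder eigenvalues
# Difference of reactivities, expressed in pcm (1 pcm = 1e-5).
delta_pcm = (1.0 / keff_reference - 1.0 / keff_deterministic) * 1e5
print(f"keff discrepancy: {delta_pcm:.1f} pcm")
```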

  10. Solutions to NEANSC benchmark problems on 'Power Distribution within Assemblies (PDWA)' using the SRAC and GMVP

    International Nuclear Information System (INIS)

    The advancement and diversification of PWR cores through the introduction of MOX fuel, burnable poisons and so on increases the heterogeneity within a core or an assembly. For the evaluation of the pin power distribution, fine-mesh flux reconstruction is required, combining an assembly calculation with a coarse-mesh three-dimensional core calculation, instead of the combination of a fine-mesh two-dimensional X-Y core calculation and a one-dimensional axial core calculation used for conventional PWR cores. The main purpose of the NEANSC benchmark problems entitled 'Power Distribution within Assemblies' is to compare techniques for fine-mesh flux reconstruction based on coarse-mesh core calculations. In this report, we examine the validity of the reconstruction technique based on the coarse-mesh core calculation using a spline function, the assembly calculation, and the heterogeneous fine-mesh core calculation, using programs built into the SRAC code, with a groupwise Monte Carlo calculation with the GMVP code as the reference. (author)
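
    The reconstruction idea described in this record can be sketched in a few lines: coarse-mesh (nodal) powers from the core calculation are interpolated with a spline and modulated by a pre-computed heterogeneous assembly form function to recover pin-wise power. The mesh sizes, nodal powers and form function below are illustrative placeholders, not SRAC/GMVP data.

```python
# Sketch of spline-based pin-power reconstruction from a coarse-mesh core solution.
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse-mesh nodal powers on a 4 x 4 grid of assembly nodes (placeholder values).
x_nodes = np.arange(4) + 0.5
y_nodes = np.arange(4) + 0.5
nodal_power = np.array([
    [0.92, 1.01, 1.01, 0.92],
    [1.01, 1.10, 1.10, 1.01],
    [1.01, 1.10, 1.10, 1.01],
    [0.92, 1.01, 1.01, 0.92],
])

# Cubic spline of the smooth intra-core power shape.
spline = RectBivariateSpline(x_nodes, y_nodes, nodal_power, kx=3, ky=3)

# Fine (pin-level) positions inside the central assembly and an assumed
# assembly form function from the lattice (assembly) calculation.
x_pins = np.linspace(1.6, 2.4, 5)
y_pins = np.linspace(1.6, 2.4, 5)
smooth_shape = spline(x_pins, y_pins)
form_function = 1.0 + 0.05 * np.random.default_rng(0).standard_normal((5, 5))

pin_power = smooth_shape * form_function
print("reconstructed pin powers:\n", np.round(pin_power, 3))
```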

  11. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)

  12. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of this work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of the Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes, and the experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  13. The OECD/NEA Data Bank, its computer program services and benchmarking activities

    International Nuclear Information System (INIS)

    The OECD/NEA Data Bank collects, tests and distributes computer programs and numerical data in the field of nuclear energy applications. This activity is coordinated with several similar centres in the United States (ESTSC, NNDC, RSIC) and, outside the OECD area, through an arrangement with the IAEA. This information is shared worldwide for the benefit of scientists and engineers working on the safe and economic use of nuclear energy. The OECD/NEA Nuclear Science Committee, the supervising body of the Data Bank, has conducted a series of international computer code benchmark exercises with the aim of verifying the correctness of codes, building confidence in the models used for predicting the macroscopic behaviour of nuclear systems, and driving the refinement of models where necessary. Exercises involving nuclear cross-section predictions; in-core reactor physics issues such as pin cells for different types of reactors, plutonium recycling, reconstruction of pin power within assemblies, core transients, reactor shielding and dosimetry; and away-from-reactor issues such as criticality safety for transport and storage of spent fuel, shielding of radioactive material packages and other problems connected with the back end of the fuel cycle are listed and the relevant references provided. (author)

  14. Computational benchmarking of fast neutron transport throughout large water thicknesses

    International Nuclear Information System (INIS)

    Neutron dosimetry experiments seem to point out difficulties in the treatment of large water thicknesses like those encountered between the core baffle and the pressure vessel. This paper describes the theoretical benchmark undertaken by EDF, SCK/CEN and TRACTEBEL ENERGY ENGINEERING, concerning the transport of fast neutrons through a one-metre cube of water located behind a U-235 fission source plate. The results showed no major discrepancies between the calculations up to 50 cm from the source, provided that a P3 expansion of the Legendre polynomials is used for the Sn calculations. The main differences occurred beyond 50 cm, reaching 20% at the end of the water cube. These results led us to consider an experimental benchmark, dedicated to the problem of fast-neutron deep penetration in water, which has been launched at SCK/CEN. (authors)

  15. GAP: A computer program for gene assembly

    Energy Technology Data Exchange (ETDEWEB)

    Eisnstein, J.R.; Uberbacher, E.C.; Guan, X.; Mural, R.J.; Mann, R.C.

    1991-09-01

    A computer program, GAP (Gene Assembly Program), has been written to assemble and score hypothetical genes, given a DNA sequence containing the gene and the outputs of several other programs that analyze the sequence. These programs include the coding-recognition and splice-junction-recognition modules developed in this laboratory. GAP is a prototype of a planned system in which it will be integrated with an expert system and rule base. Initial tests of GAP have been carried out with four sequences whose exons have been determined by biochemical methods. The highest-scoring hypothetical genes for each of the four sequences had percentages of correct splice junctions ranging from 50 to 100% (average 81%) and percentages of correct bases ranging from 92 to 100% (average 96%). 9 refs., 1 tab.

  16. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    The internal representation of numerical data and their speed of manipulation to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links are essential aspects of all high performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation, required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. Cache and register-blocking techniques result in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. Among the Linux clusters ANUP16, HPC22 and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark with a multi-block Euler code that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantages of speed and a high degree of parallelism.

  17. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, some background on the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and give an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  18. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    International Nuclear Information System (INIS)

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses

  19. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, J.C.

    2001-08-02

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses.
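
    The quantity this benchmark targets can be written out as a short worked example: the additional reactivity margin from fission products and minor actinides is the difference between the reactivity computed with major actinides only and the reactivity computed with the full composition. The keff values below are placeholders, not the SCALE reference solutions documented in the report.

```python
# Illustrative sketch of the "additional reactivity margin" concept.
def reactivity_pcm(keff):
    """Reactivity (1 - 1/keff) expressed in pcm (1 pcm = 1e-5)."""
    return (1.0 - 1.0 / keff) * 1e5

keff_major_actinides_only = 0.9402     # assumed cask keff, major actinides only
keff_full_composition = 0.9017         # assumed cask keff, with fission products + minors

margin_pcm = reactivity_pcm(keff_major_actinides_only) - reactivity_pcm(keff_full_composition)
print(f"additional reactivity margin ~ {margin_pcm:.0f} pcm")
```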

  20. Benchmarking and performance analysis of the CM-2. [SIMD computer

    Science.gov (United States)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms, written in LISP and PARIS, was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as to gain insight into what performance criteria are needed when evaluating parallel processing machines.
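
    The statistical treatment of repeated, noisy timer readings mentioned here can be illustrated with a few lines: repeated measurements are reduced to a mean and an approximate confidence half-width. The sample timings are invented for illustration.

```python
# Sketch of reducing noisy repeated timings to a well-characterised result.
import statistics

timings_ms = [12.4, 12.9, 12.5, 13.1, 12.6, 12.7, 12.5, 12.8]   # invented samples

mean = statistics.mean(timings_ms)
stdev = statistics.stdev(timings_ms)
# Approximate 95% half-width, assuming roughly normal noise (t ~ 2 for n = 8).
half_width = 2.0 * stdev / len(timings_ms) ** 0.5

print(f"runtime = {mean:.2f} +/- {half_width:.2f} ms over {len(timings_ms)} runs")
```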

  1. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  2. Using lateral capillary forces to compute by self-assembly

    OpenAIRE

    Rothemund, Paul W. K.

    2000-01-01

    Investigations of DNA computing have highlighted a fundamental connection between self-assembly (SA) and computation: in principle, any computation can be performed by a suitable self-assembling system. In practice, exploration of this connection is limited by our ability to control the geometry and specificity of binding interactions. Recently, a system has been developed that uses surface tension to assemble plastic tiles according to shape complementarity and likeness of wetting [Bowden, N...

  3. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided
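
    The kind of hand calculation the spreadsheet comparison performs can be sketched for the ingestion pathway: committed dose for a unit radionuclide concentration in water is the product of concentration, intake rate and a dose conversion factor. The numerical values below are illustrative placeholders, not parameters from GENII or PATHRAE-EPA.

```python
# Sketch of a unit-concentration ingestion-pathway dose calculation.
unit_concentration = 1.0        # Bq per litre of water (unit concentration)
water_intake = 730.0            # litres per year (assumed adult intake)
dose_conversion_factor = 2.8e-8 # Sv per Bq ingested (illustrative value)

annual_dose_sv = unit_concentration * water_intake * dose_conversion_factor
print(f"ingestion pathway dose: {annual_dose_sv * 1e6:.2f} microSv per year")
```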

  4. Benchmarking neuromorphic vision: lessons learnt from computer vision

    OpenAIRE

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part...

  5. Physics Data Management Tools for Monte Carlo Transport: Computational Evolutions and Benchmarks

    CERN Document Server

    Han, Mincheol; Seo, Hee; Moneta, Lorenzo; Kim, Chan Hyeong

    2010-01-01

    The development of a package for the management of physics data is described: its design, implementation and computational benchmarks. This package improves the data management tools originally developed for Geant4 physics models based on the EADL, EEDL and EPDL97 data libraries. The implementation exploits recent evolutions of the C++ libraries appearing in the C++0x draft, which are intended for inclusion in the next C++ ISO Standard. The new tools improve the computational performance of physics data management.

  6. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  7. Embedded Volttron specification - benchmarking small footprint compute device for Volttron

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Woodworth, Ken [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kuruganti, Teja [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-17

    An embedded system is a small footprint computing unit that typically serves a specific purpose closely associated with measurements and control of hardware devices. These units are designed for reasonable durability and operations in a wide range of operating conditions. Some embedded systems support real-time operations and can demonstrate high levels of reliability. Many have failsafe mechanisms built to handle graceful shutdown of the device in exception conditions. The available memory, processing power, and network connectivity of these devices are limited due to the nature of their specific-purpose design and intended application. Industry practice is to carefully design the software for the available hardware capability to suit desired deployment needs. Volttron is an open source agent development and deployment platform designed to enable researchers to interact with devices and appliances without having to write drivers themselves. Hosting Volttron on small footprint embeddable devices enables its demonstration for embedded use. This report details the steps required and the experience in setting up and running Volttron applications on three small footprint devices: the Intel Next Unit of Computing (NUC), the Raspberry Pi 2, and the BeagleBone Black. In addition, the report also details preliminary investigation of the execution performance of Volttron on these devices.

  8. Benchmark testing and independent verification of the VS2DT computer code

    International Nuclear Information System (INIS)

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation

  9. Frances: A Tool for Understanding Computer Architecture and Assembly Language

    Science.gov (United States)

    Sondag, Tyler; Pokorny, Kian L.; Rajan, Hridesh

    2012-01-01

    Students in all areas of computing require knowledge of the computing device including software implementation at the machine level. Several courses in computer science curricula address these low-level details such as computer architecture and assembly languages. For such courses, there are advantages to studying real architectures instead of…

  10. Benchmarking a fission-product release computer program containing a Gibbs energy minimizer

    International Nuclear Information System (INIS)

    The computer program SOURCE IST 2.0 contains a 1997 model of fission-product vaporization developed by B.J. Corse et al. That model was tractable on the computers of that day. However, the understanding of fuel thermochemistry has advanced since then. A new prototype computer program was developed with: a) a newer Royal Military College of Canada thermodynamic model of uranium dioxide fuel, b) a new model for fission-product vaporization from the fuel surface, c) a user-callable thermodynamics subroutine library, d) an updated nuclear data library, and e) an updated nuclide generation and depletion algorithm. The prototype has been benchmarked against experimental results. (author)

  11. Experimental study of the neutronics of the first gas cooled fast reactor benchmark assembly (GCFR phase I assembly)

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, S.K.

    1976-12-01

    The Gas Cooled Fast Reactor (GCFR) Phase I Assembly is the first in a series of ZPR-9 critical assemblies designed to provide a reference set of reactor physics measurements in support of the 300 MW(e) GCFR Demonstration Plant designed by General Atomic Company. The Phase I Assembly was the first complete mockup of a GCFR core ever built. A set of basic reactor physics measurements were performed in the assembly to characterize the neutronics of the assembly and assess the impact of the neutron streaming on the various integral parameters. The analysis of the experiments was carried out using ENDF/B-IV based data and two-dimensional diffusion theory methods. The Benoist method of using directional diffusion coefficients was used to treat the anisotropic effects of neutron streaming within the framework of diffusion theory. Calculated predictions of most integral parameters in the GCFR showed the same kinds of agreements with experiment as in earlier LMFBR assemblies.

  12. COSA II Further benchmark exercises to compare geomechanical computer codes for salt

    International Nuclear Information System (INIS)

    Project COSA (COmputer COdes COmparison for SAlt) was a benchmarking exercise involving the numerical modelling of the geomechanical behaviour of heated rock salt. Its main objective was to assess the current European capability to predict the geomechanical behaviour of salt, in the context of the disposal of heat-producing radioactive waste in salt formations. Twelve organisations participated in the exercise, in which their solutions to a number of benchmark problems were compared. The project was organised in two distinct phases: the first, from 1984 to 1986, concentrated on the verification of the computer codes; the second, from 1986 to 1988, progressed to validation, using three in-situ experiments at the Asse research facility in West Germany as a basis for comparison. This document reports the activities of the second phase of the project and presents the results, assessments and conclusions

  13. VVER-440 control rod follower induced local power peaking computational benchmark: MCNP and Karate solutions - 082

    International Nuclear Information System (INIS)

    With the original VVER-440 follower design, the relatively large amount of water in the coupler between the absorber and the fuel part of the control assembly can cause sharp power peaking in the fuel rods next to the coupler. The power peaking can be especially high after control rod withdrawal, when the coupler reaches a low-burnup region of the adjacent assembly. Though the modernized coupler has a Hf plate in the critical region to suppress the power peak, the complicated structure needs a reference Monte Carlo calculation as a basis for engineering code validation. The coupler mathematical benchmark was solved by the KARATE code system using the same methods and approximations as in NPP applications, and the results were compared to those of the reference MCNP calculation. The need for treating the Hf burnout in the reflector region was also investigated. (authors)
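
    The figure of merit at stake in this benchmark, the local power peaking factor next to the coupler, is simply the maximum pin power divided by the average pin power. The pin powers below are invented numbers used only to show the calculation.

```python
# Sketch of the local power peaking factor (max pin power / average pin power).
import numpy as np

pin_powers = np.array([
    [0.97, 1.01, 1.08, 1.21],   # row of pins facing the water-filled coupler (invented)
    [0.95, 0.99, 1.03, 1.10],
    [0.94, 0.97, 1.00, 1.04],
])

peaking_factor = pin_powers.max() / pin_powers.mean()
print(f"local power peaking factor = {peaking_factor:.3f}")
```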

  14. Genome assembly reborn: recent computational challenges

    OpenAIRE

    Pop, Mihai

    2009-01-01

    Research into genome assembly algorithms has experienced a resurgence due to new challenges created by the development of next generation sequencing technologies. Several genome assemblers have been published in recent years specifically targeted at the new sequence data; however, the ever-changing technological landscape leads to the need for continued research. In addition, the low cost of next generation sequencing data has led to an increased use of sequencing in new settings. For example...

  15. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    International Nuclear Information System (INIS)

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 418, including replacement of the elemental vanadium and zinc evaluations with isotopic evaluations) and the extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous-energy cross-section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as those of unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U capture. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for

  16. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  17. On computational properties of gene assembly in ciliates

    Directory of Open Access Journals (Sweden)

    Vladimir Rogojin

    2010-11-01

    Gene assembly in stichotrichous ciliates, which takes place during sexual reproduction, is one of the most involved DNA manipulation processes occurring in biology. This biological process is of high interest from the computational and mathematical points of view due to its close analogy with such concepts and notions in theoretical computer science as permutation and linked-list sorting and string rewriting. Studies on the computational properties of gene assembly in ciliates are a good example of interdisciplinary research contributing to both computer science and biology. We review here a number of general results related both to the development of different computational methods enhancing our understanding of the nature of gene assembly, and to the development of new biologically motivated computational and mathematical models and paradigms. Those paradigms contribute in particular to combinatorics, formal languages and computability theories.

  18. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will play a significant role in the success of enterprise informatization, and three-dimensional design promotes three-dimensional process planning. This article analyses the current situation and problems of assembly process planning, presents a three-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera switching algorithm. Experiments were done to validate the feasibility of the technology and algorithms, which establishes the foundation of three-dimensional computer-aided process planning.

  19. Benchmarking FENDL libraries through analysis of bulk shielding experiments on large SS316 assemblies for verification of ITER shielding characteristics

    International Nuclear Information System (INIS)

    The FENDL-1 data base has been developed recently for use in the ITER/EDA phase and other fusion-related design activities. It is now undergoing extensive testing and benchmarking using experimental data on differential and integral measured parameters obtained from fusion-oriented experiments. As part of the cooperation between UCLA (U.S.) and JAERI (Japan) on executing the required neutronics R&D tasks for the ITER shield design, two bulk shielding experiments on large SS316 assemblies were selected for benchmarking the FENDL/MG-1 multigroup data base and the FENDL/MC-1 continuous-energy data base. The analyses with the multigroup data (performed with S8, P5 DORT calculations using shielded and unshielded data) also included a library derived from the ENDF/B-VI data base for comparison purposes. The MCNP Monte Carlo code was used by JAERI with the FENDL/MC-1 data. The results of this benchmarking are reported in this paper, along with the observed deficiencies and discrepancies. 20 refs., 27 figs., 1 tab

  20. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. PMID:27190234
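
    The relative-quantification check described here can be illustrated with a rank correlation between per-gene fold-changes from an RNAseq pipeline and matched Nanostring fold-changes. The values below are invented; the actual resource uses a panel of 150 Nanostring genes.

```python
# Sketch of comparing RNAseq and Nanostring relative quantification.
import numpy as np
from scipy.stats import spearmanr

log2fc_rnaseq = np.array([1.8, -0.6, 0.1, 2.4, -1.3, 0.7])      # invented values
log2fc_nanostring = np.array([1.6, -0.4, 0.0, 2.1, -1.5, 0.9])  # invented values

rho, pvalue = spearmanr(log2fc_rnaseq, log2fc_nanostring)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")
```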

  1. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  2. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL)

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet, very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources, an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria that determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests which will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  3. Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign Recognition

    DEFF Research Database (Denmark)

    Stallkamp, J.; Schlipsing, M.; Salmen, J.;

    2012-01-01

    ... recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning ... sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described ...

  4. Benchmark for a 3D Monte Carlo boiling water reactor fluence computational package - MF3D

    International Nuclear Information System (INIS)

    A detailed three dimensional model of a quadrant of an operating BWR has been developed using MCNP to calculate flux spectrum and fluence levels at various locations in the reactor system. The calculational package, MF3D, was benchmarked against test data obtained over a complete fuel cycle of the host BWR. The test package included activation wires sensitive in both the fast and thermal ranges. Comparisons between the calculational results and test data are good to within ten percent, making the MF3D package an accurate tool for neutron and gamma fluence computation in BWR pressure vessel internals. (orig.)

  5. GABenchToB: A Genome Assembly Benchmark Tuned on Bacteria and Benchtop Sequencers

    OpenAIRE

    Jünemann, Sebastian; Prior, Karola; Albersmeier, Andreas; Albaum, Stefan; Kalinowski, Jörn; Goesmann, Alexander; Stoye, Jens; Harmsen, Dag

    2014-01-01

    De novo genome assembly is the process of reconstructing a complete genomic sequence from countless small sequencing reads. Due to the complexity of this task, numerous genome assemblers have been developed to cope with different requirements and the different kinds of data provided by sequencers within the fast evolving field of next-generation sequencing technologies. In particular, the recently introduced generation of benchtop sequencers, like Illumina's MiSeq and Ion Torrent's Personal G...

  6. In-cylinder diesel spray combustion simulations using parallel computation: A performance benchmarking study

    International Nuclear Information System (INIS)

    Highlights: ► A performance benchmarking exercise is conducted for diesel combustion simulations. ► The reduced chemical mechanism shows its advantages over base and skeletal models. ► High efficiency and great reduction of CPU runtime are achieved through 4-node solver. ► Increasing ISAT memory from 0.1 to 2 GB reduces the CPU runtime by almost 35%. ► Combustion and soot processes are predicted well with minimal computational cost. - Abstract: In the present study, in-cylinder diesel combustion simulation was performed with parallel processing on an Intel Xeon Quad-Core platform to allow both fluid dynamics and chemical kinetics of the surrogate diesel fuel model to be solved simultaneously on multiple processors. Here, Cartesian Z-Coordinate was selected as the most appropriate partitioning algorithm since it computationally bisects the domain such that the dynamic load associated with fuel particle tracking was evenly distributed during parallel computations. Other variables examined included number of compute nodes, chemistry sizes and in situ adaptive tabulation (ISAT) parameters. Based on the performance benchmarking test conducted, parallel configuration of 4-compute node was found to reduce the computational runtime most efficiently whereby a parallel efficiency of up to 75.4% was achieved. The simulation results also indicated that accuracy level was insensitive to the number of partitions or the partitioning algorithms. The effect of reducing the number of species on computational runtime was observed to be more significant than reducing the number of reactions. Besides, the study showed that an increase in the ISAT maximum storage of up to 2 GB reduced the computational runtime by 50%. Also, the ISAT error tolerance of 10−3 was chosen to strike a balance between results accuracy and computational runtime. The optimised parameters in parallel processing and ISAT, as well as the use of the in-house reduced chemistry model allowed accurate
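
    The parallel-efficiency figure quoted in this record follows from the standard definition, efficiency = serial time / (number of nodes × parallel time). The runtimes below are invented placeholders, not the published measurements.

```python
# Sketch of the speedup and parallel-efficiency metrics used in such benchmarking exercises.
runtime_serial_h = 40.0
runtimes_parallel_h = {1: 40.0, 2: 22.5, 4: 13.3, 8: 9.1}   # nodes -> hours (invented)

for nodes, runtime in runtimes_parallel_h.items():
    speedup = runtime_serial_h / runtime
    efficiency = speedup / nodes
    print(f"{nodes} node(s): speedup = {speedup:4.2f}, efficiency = {efficiency:.1%}")
```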

  7. DNA Computing by Self-Assembly

    OpenAIRE

    Winfree, Erik

    2003-01-01

    Information and algorithms appear to be central to biological organization and processes, from the storage and reproduction of genetic information to the control of developmental processes to the sophisticated computations performed by the nervous system. Much as human technology uses electronic microprocessors to control electromechanical devices, biological organisms use biochemical circuits to control molecular and chemical events. The engineering and programming of bioch...

  8. Computational efficiency and accuracy of the fission collision separation method in 3D HTTR benchmark problems

    International Nuclear Information System (INIS)

    A fission collision separation method has recently been developed to significantly improve the computational efficiency of the COMET response coefficient generator. In this work, the accuracy and efficiency of the new response coefficient generation method are tested in 3D HTTR benchmark problems at both the lattice and core levels. In lattice calculations, the surface-to-surface and fission density response coefficients computed by the new method are compared with those directly calculated by the Monte Carlo method. In whole core calculations, the eigenvalues and bundle/pin fission densities predicted by COMET based on the response coefficient libraries generated by the fission collision separation method are compared with those based on the interpolation method as well as with the Monte Carlo reference solutions. These comparisons have shown that the new response coefficient generation method is significantly (about 3 times) faster than the interpolation method, while its accuracy is close to that of the interpolation method. (author)

  9. Benchmark verification of a method for calculating leakage from partial-length shield assembly modified cores

    International Nuclear Information System (INIS)

    Over the past several years, plant-life extension programs have been implemented at many U.S. plants. One method of pressure vessel (PV) fluence rate reduction being used in several of the older reactors involves partial replacement of the oxide fuel with metallic rods in those peripheral assemblies located at critical azimuths. This substitution extends axially over a region that depends on the individual plant design, but covers the most critical PV weld and plate locations, which may be subject to pressurized thermal shock. In order to analyze the resulting PV dosimetry using these partial-length shield assemblies (PLSA), a relatively simple but accurate method needs to be formulated and qualified that treats the axially asymmetric core leakage. Accordingly, an experiment was devised and performed at the VENUS critical facility in Mol, Belgium. The success of the proposed method bodes well for the accuracy of future analyses of on-line plants using PLSAs

  10. An Easily Assembled Laboratory Exercise in Computed Tomography

    Science.gov (United States)

    Mylott, Elliot; Klepetka, Ryan; Dunlap, Justin C.; Widenhorn, Ralf

    2011-01-01

    In this paper, we present a laboratory activity in computed tomography (CT) primarily composed of a photogate and a rotary motion sensor that can be assembled quickly and partially automates data collection and analysis. We use an enclosure made with a light filter that is largely opaque in the visible spectrum but mostly transparent to the near…

  11. Evaluation of the computer code system RADHEAT-V4 by analysing benchmark problems on radiation shielding

    International Nuclear Information System (INIS)

    A computer code system, RADHEAT-V4, has been developed for safety evaluation of radiation shielding in nuclear fuel facilities. To evaluate the performance of the code system, 18 benchmark problems were selected and analysed. The radiations evaluated are neutrons and gamma rays. The benchmark problems consist of penetration, streaming and skyshine configurations. The computed results are more accurate than those obtained with the Sn codes ANISN and DOT3.5 or the Monte Carlo code MORSE. However, RADHEAT-V4 requires a large core memory and frequent I/O. (author)

  12. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented design and programming techniques. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on the electron history are grouped together using the continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, bremsstrahlung emission energy and angular spectra, and dose calculations are presented

  13. Neutronics benchmark for the Quad Cities-1 (Cycle 2) mixed oxide assembly irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, S.E.; Difilippo, F.C.

    1998-04-01

    Reactor physics computer programs are important tools that will be used to estimate mixed-oxide (MOX) fuel physics performance in support of weapons-grade plutonium disposition in US and Russian Federation reactors. Many of the computer programs used today have not undergone calculational comparisons to measured data obtained during reactor operation. Measurements of pin power, the buildup of transuranics, and the depletion of gadolinium were conducted (under Electric Power Research Institute sponsorship) on uranium and MOX pins irradiated in the Quad Cities-1 reactor in the 1970s. These measurements are compared to modern computational models for the HELIOS and SCALE computer codes. Good agreement on pin powers was obtained for both MOX and uranium pins. The agreement between measured and calculated values of transuranic isotopes was mixed, depending on the particular isotope.
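
    Comparisons of this kind are commonly summarised as calculated-to-experiment (C/E) ratios. The sketch below shows the calculation with placeholder values standing in for the Quad Cities-1 measurements and the HELIOS/SCALE predictions.

```python
# Sketch of calculated-to-experiment (C/E) ratios for measured pin powers.
measured = {"MOX pin A": 1.042, "MOX pin B": 0.987, "UO2 pin C": 1.011}     # placeholders
calculated = {"MOX pin A": 1.050, "MOX pin B": 0.975, "UO2 pin C": 1.020}   # placeholders

for pin, exp_value in measured.items():
    c_over_e = calculated[pin] / exp_value
    print(f"{pin}: C/E = {c_over_e:.3f}")
```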

  14. Intrinsic universality and the computational power of self-assembly.

    Science.gov (United States)

    Woods, Damien

    2015-07-28

    Molecular self-assembly, the formation of large structures by small pieces of matter sticking together according to simple local interactions, is a ubiquitous phenomenon. A challenging engineering goal is to design a few molecules so that large numbers of them can self-assemble into desired complicated target objects. Indeed, we would like to understand the ultimate capabilities and limitations of this bottom-up fabrication process. We look to theoretical models of algorithmic self-assembly, where small square tiles stick together according to simple local rules in order to carry out a crystal growth process. In this survey, we focus on the use of simulation between such models to classify and separate their computational and expressive powers. Roughly speaking, one model simulates another if they grow the same structures, via the same dynamical growth processes. Our journey begins with the result that there is a single intrinsically universal tile set that, with appropriate initialization and spatial scaling, simulates any instance of Winfree's abstract Tile Assembly Model. This universal tile set exhibits something stronger than Turing universality: it captures the geometry and dynamics of any simulated system in a very direct way. From there we find that there is no such tile set in the more restrictive non-cooperative model, proving it weaker than the full Tile Assembly Model. In the two-handed model, where large structures can bind together in one step, we encounter an infinite set of infinite hierarchies of strictly increasing simulation power. Towards the end of our trip, we find one tile to rule them all: a single rotatable flipable polygonal tile that simulates any tile assembly system. We find another tile that aperiodically tiles the plane (but with small gaps). These and other recent results show that simulation is giving rise to a kind of computational complexity theory for self-assembly. It seems this could be the beginning of a much longer journey
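
    Since this record surveys Winfree's abstract Tile Assembly Model and its variants, a toy simulation may help make the vocabulary concrete. The sketch below is an illustrative assumption: the tile set, the temperature-2 cooperative growth of a small square, and the data structures are invented for this example and are not taken from the survey. Tiles carry labelled glues of strength 1 or 2 on their four sides and attach wherever the summed strength of matching glues with already-placed neighbours reaches the temperature.

```python
# Toy abstract Tile Assembly Model (aTAM) simulation at temperature 2.
from collections import namedtuple

# Glues are (label, strength); sides are ordered north, east, south, west.
Tile = namedtuple("Tile", "name north east south west")
NULL = ("", 0)

TILES = [
    Tile("row", ("x", 1), ("r", 2), NULL, ("r", 2)),      # bottom row: strength-2 east/west bonds
    Tile("col", ("c", 2), ("y", 1), ("c", 2), NULL),       # left column: strength-2 north/south bonds
    Tile("int", ("x", 1), ("y", 1), ("x", 1), ("y", 1)),   # interior: needs two strength-1 bonds
]
SEED = Tile("seed", ("c", 2), ("r", 2), NULL, NULL)
TEMPERATURE = 2
SIZE = 4                                                   # grow a SIZE x SIZE square

def binding_strength(tile, pos, assembly):
    """Total strength of matching glues between a candidate tile and its placed neighbours."""
    x, y = pos
    neighbours = [((x, y + 1), tile.north, "south"), ((x + 1, y), tile.east, "west"),
                  ((x, y - 1), tile.south, "north"), ((x - 1, y), tile.west, "east")]
    total = 0
    for npos, glue, opposite_side in neighbours:
        other = assembly.get(npos)
        if other is not None and glue[0] and glue == getattr(other, opposite_side):
            total += glue[1]
    return total

assembly = {(0, 0): SEED}
changed = True
while changed:
    changed = False
    frontier = {(x + dx, y + dy) for (x, y) in assembly
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))} - assembly.keys()
    for pos in sorted(frontier):
        if not (0 <= pos[0] < SIZE and 0 <= pos[1] < SIZE):
            continue
        for tile in TILES:
            if binding_strength(tile, pos, assembly) >= TEMPERATURE:
                assembly[pos] = tile
                changed = True
                break

for y in range(SIZE - 1, -1, -1):
    print(" ".join(assembly[(x, y)].name.ljust(4) for x in range(SIZE)))
```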

  15. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MBD). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) that is attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computation-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.

  16. Computational design of co-assembling protein-DNA nanowires

    Science.gov (United States)

    Mou, Yun; Yu, Jiun-Yann; Wannier, Timothy M.; Guo, Chin-Lin; Mayo, Stephen L.

    2015-09-01

    Biomolecular self-assemblies are of great interest to nanotechnologists because of their functional versatility and their biocompatibility. Over the past decade, sophisticated single-component nanostructures composed exclusively of nucleic acids, peptides and proteins have been reported, and these nanostructures have been used in a wide range of applications, from drug delivery to molecular computing. Despite these successes, the development of hybrid co-assemblies of nucleic acids and proteins has remained elusive. Here we use computational protein design to create a protein-DNA co-assembling nanomaterial whose assembly is driven via non-covalent interactions. To achieve this, a homodimerization interface is engineered onto the Drosophila Engrailed homeodomain (ENH), allowing the dimerized protein complex to bind to two double-stranded DNA (dsDNA) molecules. By varying the arrangement of protein-binding sites on the dsDNA, an irregular bulk nanoparticle or a nanowire with single-molecule width can be spontaneously formed by mixing the protein and dsDNA building blocks. We characterize the protein-DNA nanowire using fluorescence microscopy, atomic force microscopy and X-ray crystallography, confirming that the nanowire is formed via the proposed mechanism. This work lays the foundation for the development of new classes of protein-DNA hybrid materials. Further applications can be explored by incorporating DNA origami, DNA aptamers and/or peptide epitopes into the protein-DNA framework presented here.

  17. Selection and benchmarking of computer codes for research reactor core conversions

    International Nuclear Information System (INIS)

    A group of computer codes has been selected and obtained from the Nuclear Energy Agency (NEA) Data Bank in France for the core conversion study of highly enriched research reactors. ANISN, WIMSD-4, MC2, COBRA-3M, FEVER, THERMOS, GAM-2, CINDER and EXTERMINATOR were selected for the study. For the final work THERMOS, GAM-2, CINDER and EXTERMINATOR were selected and used. A one-dimensional thermal hydraulics code has also been used to calculate temperature distributions in the core. THERMOS and CINDER have been modified to serve the purpose. Minor modifications have been made to GAM-2 and EXTERMINATOR to improve their utilization. All of the codes have been debugged on both CDC and IBM computers at the University of Illinois. The IAEA 10 MW benchmark problem has been solved. Results of this work have been compared with the IAEA contributors' results. Agreement is very good for highly enriched fuel (HEU). Deviations from the IAEA contributors' mean value for low enriched fuel (LEU) exist but are generally small. The deviation of keff is about 0.5% for both enrichments at the beginning of life (BOL) and at the end of life (EOL). Flux ratios deviate only about 1.5% from the IAEA contributors' mean value. (author)

  18. Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements.

  19. Some benchmark calculations for VVER-1000 assemblies by WIMS-7B code

    International Nuclear Information System (INIS)

    Our aim in this report is to compare calculation results obtained with the different libraries supplied with this variant of the WIMS-7B code. Three libraries were available: the 1986 library, based on the UKNDL files, and two 1996 libraries based on the JEF-2.2 files, one in a 69-group and the other in a 172-group approximation. We also wanted to gain some experience with the new WIMS-7B option CACTUS. The variant of WIMS-7B was placed at our disposal by the code authors for temporary use for 9 months. It was natural to compare the results with analogous values from the TVS-M, MCU, Apollo-2, Casmo-4, Conkemo, MCNP and HELIOS codes, in which other libraries were used. In accordance with our aims, calculations of unprofiled and profiled assemblies of the VVER-1000 reactor have been carried out with the CACTUS option, which performs calculations by the method of characteristics. The calculation results have been compared with the K∞ values obtained by other codes. The conclusion from this analysis is that the methodical components of the errors of these codes are nearly the same; the spread in Keff values can be explained mainly by differences in the library cross sections. Nevertheless, a more detailed analysis of the results is required. Finally, a depletion calculation of a VVER-1000 cell has been carried out, and the dependence of the multiplication factor on burnup obtained by WIMS-7B with the different libraries has been compared with that from the TVS-M, MCU, HELIOS and WIMS-ABBN codes. (orig.)

  20. On computational and behavioral evidence regarding Hebbian transcortical cell assemblies.

    OpenAIRE

    Spivey, M. J.; Andrews, M. W.; Richardson, D. C.

    1999-01-01

    Pulvermuller restricts himself to an unnecessarily narrow range of evidence to support his claims. Evidence from neural modeling and behavioral experiments provides further support for an account of words encoded as transcortical cell assemblies. A cognitive neuroscience of language must include a range of methodologies (e.g., neural, computational, and behavioral) and will need to focus on the on-line processes of real-time language processing in more natural contexts.

  1. Verification Benchmarks to Assess the Implementation of Computational Fluid Dynamics Based Hemolysis Prediction Models.

    Science.gov (United States)

    Hariharan, Prasanna; D'Souza, Gavin; Horner, Marc; Malinauskas, Richard A; Myers, Matthew R

    2015-09-01

    As part of an ongoing effort to develop verification and validation (V&V) standards for using computational fluid dynamics (CFD) in the evaluation of medical devices, we have developed idealized flow-based verification benchmarks to assess the implementation of commonly cited power-law based hemolysis models in CFD. The verification process ensures that all governing equations are solved correctly and the model is free of user and numerical errors. To perform verification for power-law based hemolysis modeling, analytical solutions for the Eulerian power-law blood damage model (which estimates the hemolysis index (HI) as a function of shear stress and exposure time) were obtained for Couette and inclined Couette flow models, and for Newtonian and non-Newtonian pipe flow models. Subsequently, CFD simulations of fluid flow and HI were performed using Eulerian and three different Lagrangian-based hemolysis models and compared with the analytical solutions. For all the geometries, the blood damage results from the Eulerian-based CFD simulations matched the Eulerian analytical solutions within ∼1%, which indicates successful implementation of the Eulerian hemolysis model. Agreement between the Lagrangian and Eulerian models depended upon the choice of the hemolysis power-law constants. For the commonly used values of the power-law constants (α = 1.9-2.42 and β = 0.65-0.80), in the absence of flow acceleration, most of the Lagrangian models matched the Eulerian results within 5%. In the presence of flow acceleration (inclined Couette flow), moderate differences (∼10%) were observed between the Lagrangian and Eulerian models. This difference increased to greater than 100% as the β exponent decreased. These simplified flow problems can be used as standard benchmarks for verifying the implementation of blood damage predictive models in commercial and open-source CFD codes. The current study only used the power-law model as an illustrative example to emphasize the need
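
    A minimal worked example of the power-law hemolysis index HI = C·τ^α·t^β evaluated for plain Couette flow, where the shear stress is uniform and the analytical value follows directly; the constants C, α and β are illustrative values within the commonly cited ranges, not necessarily those used in the study.

      # Power-law hemolysis index HI = C * tau**alpha * t**beta for simple Couette flow.
      # The shear stress tau is uniform across the gap, so the damage accumulated over
      # the residence time equals the analytical value directly.
      # C, alpha, beta are illustrative assumptions within commonly cited ranges.
      mu = 3.5e-3            # blood viscosity [Pa s]
      gap = 1.0e-4           # Couette gap [m]
      wall_speed = 2.0       # moving-plate speed [m/s]
      residence_time = 0.05  # exposure time [s]

      C, alpha, beta = 3.62e-5, 2.416, 0.785   # assumed power-law constants

      tau = mu * wall_speed / gap              # uniform shear stress [Pa]
      HI = C * tau**alpha * residence_time**beta
      print(f"shear stress = {tau:.1f} Pa, hemolysis index = {HI:.3e}")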

  2. COMPUTER-AIDED BLOCK ASSEMBLY PROCESS PLANNING IN SHIPBUILD-ING BASED ON RULE-REASONING

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhiying; LI Zhen; JIANG Zhibin

    2008-01-01

    Computer-aided block assembly process planning based on rule reasoning is developed in order to improve assembly efficiency and to implement automated generation of block assembly process plans in shipbuilding. First, a weighted directed liaison graph (WDLG) is proposed to represent the model of the block assembly process according to the characteristics of assembly relations, and an edge list (EL) is used to describe assembly sequences. Shapes and assembly attributes of block parts are analyzed to determine the assembly positions and matching parts for frequently used parts. Then, a series of assembly rules is generalized, and assembly sequences for a block are obtained by means of rule reasoning. Finally, a prototype system for computer-aided block assembly process planning is built. The system has been tested on an actual block, and the results were found to be quite efficient. Meanwhile, the foundation is established for automating block assembly process generation and for integration with other systems.
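
    A small sketch of how a weighted directed liaison graph might be represented and turned into an assembly sequence by rule-based ordering along its precedence edges; the parts, weights and selection rule are illustrative assumptions, not taken from the paper.

      # A toy weighted directed liaison graph (WDLG): nodes are block parts, a directed
      # edge (a, b, w) says "a should be assembled before b" with liaison weight w.
      # One simple rule-based ordering: repeatedly pick the ready part (no unassembled
      # predecessors) whose total outgoing liaison weight is largest.  Illustrative only.
      from collections import defaultdict

      edges = [("floor", "web_frame", 3), ("floor", "longitudinal", 2),
               ("web_frame", "side_shell", 2), ("longitudinal", "side_shell", 1),
               ("side_shell", "deck", 3)]

      preds = defaultdict(set)
      out_weight = defaultdict(int)
      parts = set()
      for a, b, w in edges:
          preds[b].add(a)
          out_weight[a] += w
          parts.update((a, b))

      sequence = []
      while parts:
          ready = [p for p in parts if not (preds[p] - set(sequence))]
          nxt = max(ready, key=lambda p: out_weight[p])   # rule: heaviest liaison first
          sequence.append(nxt)
          parts.remove(nxt)

      print("assembly sequence (edge-list order):", sequence)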

  3. Genome Assembly and Computational Analysis Pipelines for Bacterial Pathogens

    KAUST Repository

    Rangkuti, Farania Gama Ardhina

    2011-06-01

    Pathogens lie behind the deadliest pandemics in history. To date, the AIDS pandemic has resulted in more than 25 million fatal cases, while tuberculosis and malaria annually claim more than 2 million lives. Comparative genomic analyses are needed to gain insights into the molecular mechanisms of pathogens, but the abundance of biological data dictates that such studies cannot be performed without the assistance of computational approaches. This explains the significant need for computational pipelines for genome assembly and analyses. The aim of this research is to develop such pipelines. This work utilizes various bioinformatics approaches to analyze the high-throughput genomic sequence data that has been obtained from several strains of bacterial pathogens. A pipeline has been compiled for quality control for sequencing and assembly, and several protocols have been developed to detect contaminations. Visualizations of genomic data have been generated in various formats, in addition to alignment, homology detection and sequence variant detection. We have also implemented a metaheuristic algorithm that significantly improves bacterial genome assemblies compared to other known methods. Experiments on Mycobacterium tuberculosis H37Rv data showed that our method resulted in an improvement of the N50 value of up to 9697% while consistently maintaining high accuracy, covering around 98% of the published reference genome. Other improvement efforts were also implemented, consisting of iterative local assemblies and iterative correction of contiguated bases. Our result expedites the genomic analysis of virulent genes up to single base pair resolution. It is also applicable to virtually every pathogenic microorganism, propelling further research in the control of and protection from pathogen-associated diseases.

  4. AGENT code - neutron transport benchmark examples

    International Nuclear Information System (INIS)

    The paper focuses on description of representative benchmark problems to demonstrate the versatility and accuracy of the AGENT (Arbitrary Geometry Neutron Transport) code. AGENT couples the method of characteristics and R-functions allowing true modeling of complex geometries. AGENT is optimized for robustness, accuracy, and computational efficiency for 2-D assembly configurations. The robustness of R-function based geometry generator is achieved through the hierarchical union of the simple primitives into more complex shapes. The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through true geometries. The computational efficiency is maintained through a set of acceleration techniques introduced in all important calculation levels. The selected assembly benchmark problems discussed in this paper are: the complex hexagonal modular high-temperature gas-cooled reactor, the Purdue University reactor and the well known C5G7 benchmark model. (author)
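
    The R-function geometry treatment mentioned above builds complex shapes as hierarchical combinations of implicit primitives; a minimal sketch of that idea using the common R0 (Rvachev) conjunction, with a hexagon-and-hole example chosen purely for illustration rather than AGENT's actual geometry input.

      # R-function composition of implicit primitives: f(x, y) >= 0 inside a shape.
      import math

      def half_plane(nx, ny, d):           # f >= 0 on the side nx*x + ny*y <= d
          return lambda x, y: d - (nx * x + ny * y)

      def circle(cx, cy, r):               # f >= 0 inside the circle
          return lambda x, y: r**2 - (x - cx)**2 - (y - cy)**2

      def r_and(f, g):                     # R0 conjunction (intersection)
          return lambda x, y: f(x, y) + g(x, y) - math.hypot(f(x, y), g(x, y))

      def r_not(f):                        # complement
          return lambda x, y: -f(x, y)

      # Hexagonal cell of apothem 1.0 built as the intersection of six half-planes,
      # minus a central circular "coolant channel" of radius 0.4 (illustrative only).
      cell = half_plane(math.cos(0.0), math.sin(0.0), 1.0)
      for k in range(1, 6):
          a = k * math.pi / 3.0
          cell = r_and(cell, half_plane(math.cos(a), math.sin(a), 1.0))
      cell = r_and(cell, r_not(circle(0.0, 0.0, 0.4)))

      for point in [(0.0, 0.0), (0.6, 0.0), (2.0, 0.0)]:
          print(point, "inside" if cell(*point) >= 0 else "outside")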

  5. Ab initio Hartree-Fock computation of electronic static structure factor of crystalline insulators: benchmark results on LiF

    OpenAIRE

    Shukla, Alok

    1999-01-01

    In this paper we present a fully ab initio Hartree-Fock approach aimed at calculating the static structure factor of crystalline insulators at arbitrary values of momentum transfer. In particular, we outline the computation of the incoherent scattering function, the component of the structure factor which governs the incoherent x-ray scattering from solids. The presented theory is applied to crystalline LiF to obtain benchmark Hartree-Fock values for its incoherent scattering function. Benchm...

  6. A Computer Model for Analyzing Volatile Removal Assembly

    Science.gov (United States)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and on the composition and flow rate of the influent.
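
    A rough sketch of the kind of gas-to-liquid mass-transfer balance such a model must evaluate, here a plug-flow two-film expression with first-order consumption integrated by explicit Euler steps; the rate constants and inlet conditions are illustrative assumptions, not VRA data.

      # Plug-flow sketch of oxygen transfer from gas bubbles to wastewater with
      # first-order consumption of dissolved oxygen by the oxidation reaction.
      # kLa, Henry constant, reaction rate and inlet values are assumed, not VRA data.
      n_steps, dt = 1000, 0.01          # residence-time discretisation [s]
      kLa = 0.05                        # volumetric mass-transfer coefficient [1/s]
      henry = 32.0                      # dimensionless Henry partition C_gas / C_liq*
      k_rxn = 0.02                      # first-order consumption of dissolved O2 [1/s]

      c_gas, c_liq = 8.0, 0.0           # inlet concentrations [mol/m^3]
      for _ in range(n_steps):
          transfer = kLa * (c_gas / henry - c_liq)   # two-film driving force
          c_gas -= transfer * dt
          c_liq += (transfer - k_rxn * c_liq) * dt

      print(f"outlet gas-phase O2: {c_gas:.3f} mol/m^3, dissolved O2: {c_liq:.3f} mol/m^3")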

  7. Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C

    OpenAIRE

    Blaha, Stephen

    2002-01-01

    We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.

  9. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Flapper, Joris [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated based on the specific detail level of process or plant, i.e., 1) plant level; 2) process-group level; and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy tool from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and

  10. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    Science.gov (United States)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  11. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number

  12. Local approach of cleavage fracture applied to a vessel with subclad flaw. A benchmark on computational simulation

    International Nuclear Information System (INIS)

    A benchmark on the computational simulation of a cladded vessel with a 6.2 mm sub-clad flaw subjected to a thermal transient has been conducted. Two-dimensional elastic and elastic-plastic finite element computations of the vessel have been performed by the different partners with their respective finite element codes ASTER (EDF), CASTEM 2000 (CEA), SYSTUS (Framatome) and ABAQUS (AEA Technology). The main results have been compared: temperature field in the vessel, crack opening, opening stress at the crack tips, stress intensity factor in cladding and base metal, Weibull stress σw and probability of failure in base metal, and void growth rate R/R0 in cladding. This comparison shows excellent agreement on the main results, in particular on those obtained with the local approach. (K.A.)
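
    The local-approach quantities compared in the benchmark (Weibull stress and cleavage failure probability) are commonly related through a Beremin-type two-parameter Weibull law; the sketch below illustrates that relation, with the Weibull modulus, reference stress and element data chosen purely for illustration.

      # Beremin-type local approach to cleavage: the Weibull stress sigma_w aggregates
      # the maximum principal stress over the plastically deformed volume, and the
      # failure probability follows a two-parameter Weibull law.
      # m, sigma_u, V0 and the element data below are illustrative assumptions.
      import math

      m, sigma_u, V0 = 22.0, 2600.0, 1.25e-4   # Weibull modulus, ref. stress [MPa], ref. volume [mm^3]

      # (max principal stress [MPa], element volume [mm^3]) for plastified elements,
      # as would be extracted from a finite-element solution near the crack tip.
      plastic_zone = [(1800.0, 2e-4), (2100.0, 1.5e-4), (2300.0, 1e-4), (2450.0, 5e-5)]

      sigma_w = (sum(s**m * v for s, v in plastic_zone) / V0) ** (1.0 / m)
      p_failure = 1.0 - math.exp(-(sigma_w / sigma_u) ** m)
      print(f"Weibull stress = {sigma_w:.0f} MPa, cleavage probability = {p_failure:.3f}")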

  13. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  14. Integral test of JENDL-3PR1 through benchmark experiments on Li2O slab assemblies

    International Nuclear Information System (INIS)

    Two types of benchmark experiments on Li2O assemblies have been carried out. They were analyzed by using three transport codes with JENDL-3PR1 and ENDF/B-4. The calculation using JENDL-3PR1 predicted the tritium production rates of 6Li and 7Li better than those using ENDF/B-4.

  15. Computational benchmarking of fast neutron transport throughout large water thicknesses; Benchmark theorique du transport de neutrons rapides a travers de larges epaisseurs d`eau

    Energy Technology Data Exchange (ETDEWEB)

    Risch, P.; Dekens, O.; Ait Abderrahim, H. [SCK-CEN, Fuel Research Department, (Belgium); Wouters, R. de [Tractebel, Energy Engineering, (Belgium)

    1997-10-01

    Neutron dosimetry experiments seem to point out difficulties in the treatment of large water thicknesses like those encountered between the core baffle and the pressure vessel. This paper describes the theoretical benchmark undertaken by EDF, SCK/CEN and TRACTEBEL ENERGY ENGINEERING concerning the transport of fast neutrons throughout a one-metre cube of water located after a U-235 fission source plate. The results showed no major discrepancies between the calculations up to 50 cm from the source, accepting that a P3 development of the Legendre polynomials is necessary for the Sn calculations. The main differences occurred after 50 cm, reaching 20% at the end of the water cube. These results led us to consider an experimental benchmark, dedicated to the problem of fast neutron deep penetration in water, which has been launched at SCK/CEN. (authors). 7 refs.

  16. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  17. Quantum computing applied to calculations of molecular energies: CH2 benchmark

    Czech Academy of Sciences Publication Activity Database

    Veis, L.; Pittner, Jiří

    2010-01-01

    Vol. 133, No. 19 (2010), p. 194106. ISSN 0021-9606. R&D Projects: GA ČR GA203/08/0626. Institutional research plan: CEZ:AV0Z40400503. Keywords: computation * algorithm * systems. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.920, year: 2010

  18. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported dynamo benchmark results using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using the spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with local methods than the magnetically insulated boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark; here we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability (e

  19. Benchmarking computations using the Monte Carlo code ritracks with data from a tissue equivalent proportional counter

    Science.gov (United States)

    Brogan, John

    Understanding the dosimetry for high-energy, heavy ions (HZE), especially within living systems, is complex and requires the use of both experimental and computational methods. Tissue-equivalent proportional counters (TEPCs) have been used experimentally to measure energy deposition in volumes similar in dimension to a mammalian cell. As these experiments begin to include a wider range of ions and energies, considerations of cost, time, and radiation protection are necessary and may limit the extent of these studies. Multiple Monte Carlo computational codes have been created to remediate this problem and serve as a mode of verification for previous experimental methods. One such code, Relativistic-Ion Tracks (RITRACKS), is currently being developed at the NASA Johnson Space Center. RITRACKS was designed to describe patterns of ionizations responsible for DNA damage on the molecular scale (nanometers). This study extends RITRACKS version 3.07 into the microdosimetric scale (microns), and compares computational results to previous experimental TEPC data. Energy deposition measurements for 1000 MeV nucleon-1 Fe ions in a 1 micron spherical target were compared. Different settings within RITRACKS were tested to verify their effects on dose to a target and the resulting energy deposition frequency distribution. The results were then compared to the TEPC data.

  20. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  1. Exploring the marketing challenges faced by assembled computer dealers

    OpenAIRE

    Kallimani, Rashmi

    2010-01-01

    There has been great competition in the computer market these days for obtaining higher market share. The computer market, consisting of many branded and non-branded players, has been using various methods for matching supply and demand in the best possible way to attain market dominance. Branded companies are seen to be investing large amounts in aggressive marketing techniques for reaching customers and obtaining higher market share. Due to this, many small companies and non-branded computer...

  2. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...

  3. Experience in programming Assembly language of CDC CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Aiming to optimize the processing time of the BCG computer code on the CDC CYBER 170/750 computer, the FORTRAN-V language of the INTERP subroutine was converted to Assembly language. The BCG code was developed for solving the neutron transport equation by an iterative method, and the INTERP subroutine is the innermost loop of the code, carrying out 5 interpolation types. The central processor unit Assembly language of the CDC CYBER 170/750 computer and its application in implementing the interpolation subroutine of the BCG code are described. (M.C.K.)

  4. metaSPAdes: a new versatile de novo metagenomics assembler

    OpenAIRE

    Nurk, Sergey; Meleshko, Dmitry; Korobeynikov, Anton; Pevzner, Pavel

    2016-01-01

    While metagenomics has emerged as a technology of choice for analyzing bacterial populations, assembly of metagenomic data remains difficult, thus stifling biological discoveries. metaSPAdes is a new assembler that addresses the challenge of metagenome analysis and capitalizes on computational ideas that proved to be useful in assemblies of single cells and highly polymorphic diploid genomes. We benchmark metaSPAdes against other state-of-the-art metagenome assemblers across diverse datasets ...

  5. Hydraulic benchmark data for PWR mixing vane grid

    International Nuclear Information System (INIS)

    The present study presents new hydraulic benchmark data obtained for PWR rod bundles for the purpose of benchmarking Computational Fluid Dynamics (CFD) models of the rod bundle. The flow field in a PWR fuel assembly downstream of structural grids which have mixing vane grids attached is very complex due to the geometry of the subchannel and the high axial component of the velocity field relative to the secondary flows which are used to enhance the heat transfer performance of the rod bundle. Westinghouse has a CFD methodology to model PWR rod bundles that was developed with prior benchmark test data. As improvements in testing techniques have become available, further PWR rod bundle testing is being performed to obtain advanced data which have high spatial and temporal resolution. This paper presents the advanced testing and benchmark data that have been obtained by Westinghouse through collaboration with Texas A&M University. (author)

  6. Further development of the Dynamic Control Assemblies Worth Measurement Method for Advanced Reactivity Computers

    International Nuclear Information System (INIS)

    The dynamic control assemblies worth measurement technique is a quick method for validation of predicted control assemblies worth. The dynamic control assemblies worth measurement utilizes space-time corrections for the measured out-of-core ionization chamber readings calculated by the DYN3D computer code. The space-time correction arising from the prompt neutron density redistribution in the measured ionization chamber reading can be directly applied in the advanced reactivity computer. The second correction, concerning the difference in the spatial distribution of delayed neutrons, can be calculated by simulating the measurement procedure with the dynamic version of the DYN3D code. In the paper some results of dynamic control assemblies worth measurements applied to NPP Mochovce are presented (Authors)

  7. Using the OECD/NRC Pressurized Water Reactor Main Steam Line Break Benchmark to Study Current Numerical and Computational Issues of Coupled Calculations

    International Nuclear Information System (INIS)

    Incorporating full three-dimensional (3-D) models of the reactor core into system transient codes allows for a 'best-estimate' calculation of interactions between the core behavior and plant dynamics. Recent progress in computer technology has made the development of coupled thermal-hydraulic (T-H) and neutron kinetics code systems feasible. Considerable efforts have been made in various countries and organizations in this direction. Appropriate benchmarks need to be developed that will permit testing of two particular aspects. One is to verify the capability of the coupled codes to analyze complex transients with coupled core-plant interactions. The second is to test fully the neutronics/T-H coupling. One such benchmark is the Pressurized Water Reactor Main Steam Line Break (MSLB) Benchmark problem. It was sponsored by the Organization for Economic Cooperation and Development, U.S. Nuclear Regulatory Commission, and The Pennsylvania State University. The benchmark problem uses a 3-D neutronics core model that is based on real plant design and operational data for the Three Mile Island Unit 1 nuclear power plant. The purpose of this benchmark is threefold: to verify the capability of system codes for analyzing complex transients with coupled core-plant interactions; to test fully the 3-D neutronics/T-H coupling; and to evaluate discrepancies among the predictions of coupled codes in best-estimate transient simulations. The purposes of the benchmark are met through the application of three exercises: a point kinetics plant simulation (exercise 1), a coupled 3-D neutronics/core T-H evaluation of core response (exercise 2), and a best-estimate coupled core-plant transient model (exercise 3). In this paper we present the three exercises of the MSLB benchmark, and we summarize the findings of the participants with regard to the current numerical and computational issues of coupled calculations. In addition, this paper reviews in some detail the sensitivity studies on

  8. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation.

    Science.gov (United States)

    Li, Meng; Liu, Jun; Tsien, Joe Z

    2016-01-01

    Richard Semon and Donald Hebb are among the first to put forth the notion of the cell assembly - a group of coherently or sequentially-activated neurons - to represent a percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of the cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblies? How might Nature vs. Nurture interact at the level of a cell assembly? In contrast to the widely assumed randomness within the mature but naïve cell assembly, the Theory of Connectivity postulates that the brain consists of developmentally pre-programmed cell assemblies known as the functional connectivity motif (FCM). Principal cells within such an FCM are organized by the power-of-two-based mathematical principle that guides the construction of specific-to-general combinatorial connectivity patterns in neuronal circuits, giving rise to a full range of specific features, various relational patterns, and generalized knowledge. This pre-configured canonical computation is predicted to be evolutionarily conserved across many circuits, ranging from those encoding memory engrams and imagination to decision-making and motor control. Although the power-of-two-based wiring and computational logic places a mathematical boundary on an individual's cognitive capacity, the fullest intellectual potential can be brought about by optimized nature and nurture. This theory may also open up a new avenue to examining how genetic mutations and various drugs might impair or improve the computational logic of brain circuits. PMID:27199674
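
    The power-of-two wiring logic can be illustrated with a tiny enumeration: for n distinct inputs the theory predicts 2^n - 1 neuronal cliques, one per non-empty input combination, ordered from specific to general. The sketch below simply enumerates those combinations; the input labels are illustrative and this is not a neural simulation.

      # Enumerate the 2**n - 1 input combinations that the Theory of Connectivity
      # predicts as distinct neuronal cliques within a functional connectivity motif,
      # ordered from the most specific (one input) to the most general (all inputs).
      from itertools import combinations

      inputs = ["odor_A", "odor_B", "sound", "touch"]   # illustrative input features

      cliques = [combo
                 for size in range(1, len(inputs) + 1)
                 for combo in combinations(inputs, size)]

      print(f"{len(inputs)} inputs -> {len(cliques)} cliques (2^n - 1 = {2**len(inputs) - 1})")
      for c in cliques:
          print(" & ".join(c))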

  9. Self-assembly of amphiphilic molecules:A review on the recent computer simulation results

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    We provide a short review of recent progress in computer simulations of adsorption and self-assembly of amphiphilic molecules. Owing to the extensive applications of amphiphilic molecules, it is very important to understand thoroughly the effects of the detailed chemistry, solid surfaces and the degree of confinement on the aggregate morphologies and kinetics of self-assembly for amphiphilic systems. In this review we pay special attention to (i) morphologies of adsorbed surfactants on solid surfaces, (ii) self-assembly in confined systems, and (iii) kinetic processes involving amphiphilic molecules.

  10. Theory of Connectivity: Nature and Nurture of Cell Assemblies and Cognitive Computation

    OpenAIRE

    Li, Meng; Liu, Jun; Tsien, Joe Z.

    2016-01-01

    Richard Semon and Donald Hebb are among the firsts to put forth the notion of cell assembly—a group of coherently or sequentially-activated neurons—to represent percept, memory, or concept. Despite the rekindled interest in this century-old idea, the concept of cell assembly still remains ill-defined and its operational principle is poorly understood. What is the size of a cell assembly? How should a cell assembly be organized? What is the computational logic underlying Hebbian cell assemblie...

  11. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  12. Status of computational and experimental correlations for Los Alamos fast-neutron critical assemblies

    International Nuclear Information System (INIS)

    New assemblies and improved measuring techniques call for periodic review of the status of computation vs. experiment. It is appropriate to emphasize neutron-spectral characterizations because of the particularly elusive problems associated with absolute spectral-index measurement and the need for checks of computation beyond simple critical size. The ever-improving spectral-index measurements in conjunction with increasing precision, both of microscopic data for detector and assembly materials and of computational techniques, produce a gradual clarification of the characteristics of a family of fast-neutron critical assemblies. This family now includes unreflected and thick-uranium-reflected U233 in spherical geometry. Direct correlations among the experimental data will be presented to indicate the a priori possibilities for successful correlations with computation. Sensitivity of computed spectra and critical sizes to neutron-transport models (transport and linear approximations ) and arithmetic approximations (finite angular segmentations and multi-group representations) will be presented for several typical assemblies to help establish the necessary computational detail. Comparisons between experiment and prediction will include, in addition to spectral indices and critical sizes, neutron lifetimes and delayed-neutron fractions. (author)

  13. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  14. Accelerator shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, H.; Ban, S.; Nakamura, T. [and others

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  15. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

    This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate financial situation of the company, identify its strengths and weaknesses and to find out how efficient is the performance of this company in comparison with top companies within the same field by using INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristic of financial analysis, which financial benchmarking is based on a...

  16. Detection of missing rods in a spent BWR fuel assembly by computed gamma emission tomography

    International Nuclear Information System (INIS)

    This paper reports on a computed gamma emission tomography system that has been constructed which allows detection of the cross-sectional rod pattern of BWR fuel assemblies. The underwater detection head is remote-controlled by a laptop computer and houses two Si(Li) detectors. By scanning 32 to 48 views, the position of the water-filled inner rod could be clearly detected in each of the three assemblies with cooling times of 2, 4 and 8 years, using gamma rays of Pr-144 or Eu-154.

  17. Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    King, Neil P.; Sheffler, William; Sawaya, Michael R.; Vollmar, Breanna S.; Sumida, John P.; André, Ingemar; Gonen, Tamir; Yeates, Todd O.; Baker, David (UWASH); (UCLA); (HHMI); (Lund)

    2015-09-17

    We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.

  18. M4D: a powerful tool for structured programming at assembly level for MODCOMP computers

    International Nuclear Information System (INIS)

    Structured programming techniques offer numerous benefits for software designers and form the basis of the current high level languages. However, these techniques are generally not available to assembly programmers. The M4D package was therefore developed for a large project to enable the use of structured programming constructs such as DO.WHILE-ENDDO and IF-ORIF-ORIF...-ELSE-ENDIF in the assembly code for MODCOMP computers. Programs can thus be produced that have clear semantics and are considerably easier to read than normal assembly code, resulting in reduced program development and testing effort, and in improved long-term maintainability of the code. This paper describes the M4D structured programming tool as implemented for MODCOMP's MAX III and MAX IV assemblers, and illustrates the use of the facility with a number of examples

  19. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  20. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional ... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  1. Precious benchmarking

    International Nuclear Information System (INIS)

    Recently, there has been a new word added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department at Jaslovske Bohunice NPP to explain this term to us. (author)

  2. PAMELA: An Interactive Assembler System for the IBM System/360 Computer.

    Science.gov (United States)

    Brownlee, Edward H., Jr.

    A description of the Program Assembly and Monitored Execution for Learning Applications (PAMELA) system is presented. It is intended for instructors who propose to use the system and for programmers who wish to modify it. PAMELA is an interactive system designed to teach the operating principles of the IBM System/360 digital computer at the machine…

  3. DNA Self-Assembly and Computation Studied with a Coarse-grained Dynamic Bonded Model

    DEFF Research Database (Denmark)

    Svaneborg, Carsten; Fellermann, Harold; Rasmussen, Steen

    2012-01-01

    We utilize a coarse-grained directional dynamic bonding DNA model [C. Svaneborg, Comp. Phys. Comm. (In Press DOI:10.1016/j.cpc.2012.03.005)] to study DNA self-assembly and DNA computation. In our DNA model, a single nucleotide is represented by a single interaction site, and complementary sites can...

  4. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to covariance (ANCOVA) statistical analysis. Results indicate that the SE-plus-diagram…

  5. Calculations of WWER cells and assemblies by WIMS-7B code

    International Nuclear Information System (INIS)

    A study of the nuclear data libraries of the WIMS-7B code has been performed through calculations of computational benchmark problems. The benchmarks cover a pin cell and a single fuel assembly with several different fuel types and moderator densities. Fuel depletion is performed to a burnup of 60 MWd/kgNM in the WWER-1000 pin cell. The results of the analysis of the benchmark with different code systems have been compared and indicate good agreement among the different methods and data. (Authors)

  6. Selecting benchmarks for reactor calculations

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Helgesson, Petter; Pomp, Stephan; Österlund, Michael; Rochman, Dimitri; Koning, Arjan J.

    2014-01-01

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GEN-IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which is dependent on the expertise and the experience of the user, thereby resulting in a user...

  7. Dynamics of nuclear fuel assemblies in vertical flow channels: computer modelling and associated studies

    International Nuclear Information System (INIS)

    A computer model, designed to predict the dynamic behaviour of nuclear fuel assemblies in axial flow, is described in this report. The numerical methods used to construct and solve the matrix equations of motion in the model are discussed together with an outline of the method used to interpret the fuel assembly stability data. The mathematics developed for forced response calculations are described in detail. Certain structural and hydrodynamic modelling parameters must be determined by experiment. These parameters are identified and the methods used for their evaluation are briefly described. Examples of typical applications of the dynamic model are presented towards the end of the report. (author)

  8. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  9. The solution of the LEU and MOX WWER-1000 calculation benchmark with the CARATE - multicell code

    International Nuclear Information System (INIS)

    Preparations for disposition of weapons grade plutonium in WWER-1000 reactors are in progress. Benchmark: Defined by the Kurchatov Institute (S. Bychkov, M. Kalugin, A. Lazarenko) to assess the applicability of computer codes for weapons grade MOX assembly calculations. Framework: 'Task force on reactor-based plutonium disposition' of OECD Nuclear Energy Agency. (Authors)

  10. Input model of a VVER 440/213 fuel assembly for the CFD computational code FLUENT

    International Nuclear Information System (INIS)

    The preparation of the input data and computation network for FLUENT 6.1 CFD (computational fluid dynamics) calculations by using the GAMBIT preprocessor is described. The input data for the thermal hydraulic calculation and the general issue of network creation - nodalization by using GAMBIT are highlighted. Creation of the particular computation network for the given fuel assembly geometry is described in detail. Attention was paid to the approach to the complex parts of the assembly, the inlet section in particular. The flow simulation in the fuel channel was analyzed. Solutions with lower numbers of channels and various degree of complexity were developed. The effect of the various solutions on the accuracy and time of calculation was investigated. The results were used to create the computation network of the whole assembly. In view of the complexity and volume of the network, the issue was discussed of how to find a suitable approach enabling test analyses to be performed on available hardware using available software

  11. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
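
    As a minimal illustration of a whole-building metric of the kind the guide describes, the sketch below computes Power Usage Effectiveness (PUE); the guide's own metric names and definitions may differ, and the energy figures are made up.

```python
# Minimal sketch (assumed metric): Power Usage Effectiveness, a common
# whole-building data-center efficiency metric; the guide's own metric
# definitions may differ, and the energy figures below are made up.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=4_500_000, it_equipment_kwh=3_000_000))  # 1.5
```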

  12. Mapsembler, targeted and micro assembly of large NGS datasets on a desktop computer

    Directory of Open Access Journals (Sweden)

    Peterlongo Pierre

    2012-03-01

    Background: The analysis of next-generation sequencing data from large genomes is a timely research topic. Sequencers are producing billions of short sequence fragments from newly sequenced organisms. Computational methods for reconstructing whole genomes/transcriptomes (de novo assemblers) are typically employed to process such data. However, these methods require large memory resources and computation time. Many basic biological questions could be answered by targeting specific information in the reads, thus avoiding complete assembly. Results: We present Mapsembler, an iterative micro and targeted assembler which processes large datasets of reads on commodity hardware. Mapsembler checks whether given regions of interest can be constructed from the reads and builds a short assembly around them, either as a plain sequence or as a graph showing contextual structure. We introduce new algorithms to retrieve approximate occurrences of a sequence from reads and to construct an extension graph. Among other results presented in this paper, Mapsembler made it possible to retrieve previously described human breast cancer candidate fusion genes and to detect new ones not previously known. Conclusions: Mapsembler is the first software that enables de novo discovery, around a region of interest, of repeats, SNPs, exon skipping, gene fusion, and other structural events, directly from raw sequencing reads. As indexing is localized, the memory footprint of Mapsembler is negligible. Mapsembler is released under the CeCILL license and can be freely downloaded from http://alcovna.genouest.org/mapsembler/.
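
    The sketch below gives a toy flavour of targeted, seed-based extension directly from reads; it is an illustration only and is not Mapsembler's actual indexing or extension-graph algorithm.

```python
# Illustrative sketch only (not Mapsembler's algorithm): greedily extend a seed
# of interest to the right using reads that share an exact overlap with its end.
def extend_seed(seed: str, reads: list[str], overlap: int = 6, max_rounds: int = 100) -> str:
    contig = seed
    for _ in range(max_rounds):
        suffix = contig[-overlap:]
        for read in reads:
            pos = read.find(suffix)
            if pos != -1 and pos + overlap < len(read):
                contig += read[pos + overlap:]   # append the non-overlapping tail
                break
        else:
            return contig                        # no read extends the contig further
    return contig

reads = ["ACGTACGTGGAT", "GTGGATTTCAAA", "TTCAAACCC"]
print(extend_seed("ACGTACGTGG", reads))          # ACGTACGTGGATTTCAAACCC
```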

  13. A Solar Powered Wireless Computer Mouse: Design, Assembly and Preliminary Testing of 15 Prototypes

    OpenAIRE

    Sark, W.G.J.H.M. van; Reich, N.H.; Alsema, E.A.; Netten, M.P.; Veefkind, M.; Silvester, S.; Elzen, B.; Verwaal, M.

    2007-01-01

    The concept and design of a solar powered wireless computer mouse has been completed, and 15 prototypes have been successfully assembled. After necessary cutting, the crystalline silicon cells show satisfactory efficiency: up to 14% when implemented into the mouse device. The implemented voltage conversion unit that is needed to increase the solar cell voltage for charging one single or two series connected NiMH batteries, has a conversion efficiency of up to 50%, which leaves room for furthe...

  14. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
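
    A toy version of the idea (a known, computer-induced setup error plus blur and noise) is sketched below; it stands in for, and is much simpler than, the authors' DRR-based software.

```python
# Minimal sketch (not the authors' software): impose a known in-plane setup
# error on a reference image, then add geometric unsharpness and noise, so a
# matching algorithm can be scored against the exactly known "gold standard".
import numpy as np
from scipy.ndimage import shift, gaussian_filter

rng = np.random.default_rng(42)
reference = rng.random((128, 128))                  # stand-in for a reference DRR

known_error_px = (3.0, -1.5)                        # induced (row, col) setup error
test_image = shift(reference, known_error_px, order=1)
test_image = gaussian_filter(test_image, sigma=1.2)           # geometric unsharpness
test_image += rng.normal(scale=0.02, size=test_image.shape)   # random noise
# An image-matching method should recover a shift close to known_error_px.
```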

  15. Nea Benchmarks

    International Nuclear Information System (INIS)

    The accuracy of the two-energy-group diffusion equations is quite good for common/typical transients. However, better solutions should be obtained with more sophisticated techniques, including Monte Carlo and detailed neutron transport or multi-group diffusion equations with multidimensional cross-section tables, to obtain more realistic flux distributions. The constitutive models used to determine the evolution of the two-phase mixture, having mostly been developed under steady-state conditions, should be better adapted to the simulation of transient situations, with particular reference to empirical correlations connected with the feedback between thermal-hydraulics and kinetics (e.g. the sub-cooled boiling heat transfer coefficient). 3-D nodalizations for the core or the vessel regions should be qualified against proper sets of experimental data, as needed for Best Estimate simulation of phenomena like pressure-wave propagation and flow redistribution in the core. The importance of and need for uncertainty evaluations of coupled-code predictions should be clear from the reasons discussed in this work; therefore, an uncertainty must accompany any prediction. The availability of proper computational resources should encourage the modeling of individual assemblies: this appears possible within the neutron kinetics area and may require some effort in the thermal-hydraulics area, namely when a large number of channels constitutes the reactor core. Care is needed when specifying the spatial mapping between thermal-hydraulic and kinetic nodes of the core models, especially when asymmetric core behavior is expected or when phenomena affecting a single or a limited number of fuel assemblies are important. Finally, industry and the regulatory bodies should become fully aware of the capabilities and the limitations of the coupled-code techniques. Nevertheless, further and continuous assessment studies and investigations should be performed to enhance the degree of the Best Estimate
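
    For orientation, a standard steady-state form of the two-energy-group diffusion equations referred to above is sketched below; the notation (fast group 1, thermal group 2, no up-scattering) is the conventional one and is assumed here rather than taken from the benchmark specification.

```latex
% Conventional two-group diffusion equations (steady state, notation assumed):
\begin{aligned}
-\nabla\cdot D_1\nabla\phi_1 + \left(\Sigma_{a1} + \Sigma_{s,1\to 2}\right)\phi_1
  &= \frac{1}{k_{\mathrm{eff}}}\left(\nu\Sigma_{f1}\phi_1 + \nu\Sigma_{f2}\phi_2\right),\\
-\nabla\cdot D_2\nabla\phi_2 + \Sigma_{a2}\,\phi_2
  &= \Sigma_{s,1\to 2}\,\phi_1 .
\end{aligned}
```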

  16. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    International Nuclear Information System (INIS)

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
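
    For reference, a deviation quoted in pcm relates computed and measured multiplication factors as in the small sketch below; the k-eff values are made up for illustration.

```python
# Minimal sketch: reactivity difference in pcm between a computed and a measured
# multiplication factor; the k-eff values below are made up for illustration.
def reactivity_diff_pcm(k_calc: float, k_meas: float) -> float:
    rho = lambda k: (k - 1.0) / k          # reactivity
    return (rho(k_calc) - rho(k_meas)) * 1.0e5

print(round(reactivity_diff_pcm(1.00105, 1.00000)))  # ~105 pcm
```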

  17. Texture Fidelity Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Kudělka, Miloš

    Los Alamitos, USA: IEEE Computer Society CPS, 2014. ISBN 978-1-4799-7971-4. [International Workshop on Computational Intelligence for Multimedia Understanding 2014 (IWCIM). Paris (FR), 01.11.2014-02.11.2014] R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Benchmark testing * fidelity criteria * texture Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0439654.pdf

  18. A BENCHMARK PROGRAM FOR EVALUATION OF METHODS FOR COMPUTING SEISMIC RESPONSE OF COUPLED BUILDING-PIPING/EQUIPMENT WITH NON-CLASSICAL DAMPING

    International Nuclear Information System (INIS)

    Under the auspices of the US Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were analyzed for a suite of earthquakes by program participants applying their own methods and computer programs. This paper presents the results of their analyses and their comparison with the benchmark solutions generated by BNL using time-domain direct integration methods. The participants' results obtained with complex-mode time-history methods compared well with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method generally yielded more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, the complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented, aimed at evaluating the applicability of the various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights gained from this program will be useful in developing new acceptance criteria and in providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems

  19. The computational fluid dynamics (CFD) modeling of coolant flow in fuel assembly - current results and their application

    International Nuclear Information System (INIS)

    The paper summarises the present work performed by VUJE Inc. in the field of computational fluid dynamics (CFD) simulation of coolant flow in the fuel assemblies used in WWER-440 reactors. This work is an extension of the previous calculations presented at the Acer Symposium 2004, and its aim is a better understanding of the coolant flow in different parts of the fuel assembly. While the previous CFD simulations were focused on the influence of individual parameters on coolant-flow mixing in the upper part of the fuel assembly (pin-wise power distribution, fuel assembly geometry, by-pass flow and coolant flow in the central tube), the present work is directed more towards verification of the previous calculations and towards combining the individual effects on coolant-flow mixing in the upper part of the fuel assembly, mainly at the thermocouple position. The main purpose of the CFD simulations of coolant flow in the fuel assembly is to increase the accuracy of the temperature measurements at the outlet of the fuel assemblies (Authors)

  20. A Context-Aware Ubiquitous Learning Approach for Providing Instant Learning Support in Personal Computer Assembly Activities

    Science.gov (United States)

    Hsu, Ching-Kun; Hwang, Gwo-Jen

    2014-01-01

    Personal computer assembly courses have been recognized as being essential in helping students understand computer structure as well as the functionality of each computer component. In this study, a context-aware ubiquitous learning approach is proposed for providing instant assistance to individual students in the learning activity of a…

  1. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  2. Computational design of a self-assembling symmetrical β-propeller protein

    Science.gov (United States)

    Voet, Arnout R. D.; Noguchi, Hiroki; Addy, Christine; Simoncini, David; Terada, Daiki; Unzai, Satoru; Park, Sam-Yong; Zhang, Kam Y. J.; Tame, Jeremy R. H.

    2014-01-01

    The modular structure of many protein families, such as β-propeller proteins, strongly implies that duplication played an important role in their evolution, leading to highly symmetrical intermediate forms. Previous attempts to create perfectly symmetrical propeller proteins have failed, however. We have therefore developed a new and rapid computational approach to design such proteins. As a test case, we have created a sixfold symmetrical β-propeller protein and experimentally validated the structure using X-ray crystallography. Each blade consists of 42 residues. Proteins carrying 2–10 identical blades were also expressed and purified. Two or three tandem blades assemble to recreate the highly stable sixfold symmetrical architecture, consistent with the duplication and fusion theory. The other proteins produce different monodisperse complexes, up to 42 blades (180 kDa) in size, which self-assemble according to simple symmetry rules. Our procedure is suitable for creating nano-building blocks from different protein templates of desired symmetry. PMID:25288768

  3. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 6. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    The OECD/NEA Main-Steam-Line-Break (MSLB) Benchmark lets us compare state-of-the-art and best-estimate models used to compute reactivity accidents. A comprehensive study has been carried out by CEA and IPSN with the CATHARE, CRONOS2, and FLICA4 codes to assess the three-dimensional (3-D) effects in the MSLB accident and to explain the return-to-power (RTP) occurrence. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in an RTP after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. Our objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal-hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed. Modeling of the vessel mixing is identified as a major concern for an accurate computation of MSLB. On one hand, the RTP in exercise 1 is driven by the mixing between primary loops, and on the other hand, the hot assembly power in exercise 3 depends on the inlet temperature map at assembly level. Vessel mixing between primary loops is defined by the ratio of the hot-leg temperature difference over the cold-leg temperature difference. Specifications indicate a ratio of 50%. Sensitivity studies on this ratio were conducted with CATHARE and point kinetics. Full mixing of the primary loops leads to an earlier and higher RTP, while no mixing results in a later and weaker RTP. Indeed, the intact steam generator (SG) is used to cool down the broken SG when both loops are mixed in the vessel, and the primary temperature decreases faster. In the extreme case of no mixing, only one-half of the primary circuit is
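
    In symbols (names assumed here), the vessel-mixing ratio described above can be written as:

```latex
% Vessel mixing between primary loops, as defined in the abstract (symbols assumed):
R_{\mathrm{mix}} = \frac{\Delta T_{\text{hot leg}}}{\Delta T_{\text{cold leg}}},
\qquad R_{\mathrm{mix}} = 50\,\% \text{ in the benchmark specification.}
```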

  4. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of complete experiment reconstruction and by large amounts of input data (dependent and independent) of very different reliability. They should also take into account the different sensitivities of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and the isotopic composition of individual materials. Calculations of critical-facility experiments are performed for benchmark models generated by specific reconstruction codes, each with its own features for adjusting model parameters, and using a nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can still lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using a few calculated models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between the calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  5. Applications of the theory of computation to nanoscale self-assembly

    Science.gov (United States)

    Doty, David Samuel

    This thesis applies the theory of computing to the theory of nanoscale self-assembly, to explore the ability -- and under certain conditions, the inability -- of molecules to automatically arrange themselves in computationally sophisticated ways. In particular, we investigate a model of molecular self-assembly known as the abstract Tile Assembly Model (aTAM), in which different types of square "tiles" represent molecules that, through the interaction of highly specific binding sites on their four sides, can automatically assemble into larger and more elaborate structures. We investigate the possibility of using the inherent randomness of sampling different tiles in a well-mixed solution to drive selection of random numbers from a finite set, and explore the tradeoff between the uniformity of the imposed distribution and the size of structures necessary to process the sampled tiles. We then show that the inherent randomness of the competition of different types of molecules for binding can be exploited in a different way. By adjusting the relative concentrations of tiles, the structure assembled by a tile set is shown to be programmable to a high precision, in the following sense. There is a single tile set that can be made to assemble a square of arbitrary width with high probability, by setting the concentrations of the tiles appropriately, so that all the information about the square's width is "learned" from the concentrations by sampling the tiles. Based on these constructions, and those of other researchers, which have been completely implemented in a simulated environment, we design a high-level domain-specific "visual language" for implementing complex constructions in the aTAM. This language frees the implementer of an aTAM construction from many low-level and tedious details of programming and, together with a visual software tool that directly implements the basic operations of the language, frees the implementer from almost any programming at all

  6. Computer-aided design of nano-filter construction using DNA self-assembly

    OpenAIRE

    Mohammadzadegan Reza; Mohabatkar Hassan

    2006-01-01

    Computer-aided design plays a fundamental role in both top-down and bottom-up nano-system fabrication. This paper presents a bottom-up nano-filter patterning process based on DNA self-assembly. In this study we designed a new method to construct fully designed nano-filters with pores between 5 nm and 9 nm in diameter. Our calculations illustrated that by constructing such a nano-filter we would be able to separate many molecules.

  7. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  8. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups (memory, processor, computation and storage) is to the applic...
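
    A minimal sketch of how such user-supplied weights might be combined into a single VM score is given below; the scoring formula is assumed for illustration and is not necessarily the paper's.

```python
# Minimal sketch (assumed scoring scheme, not necessarily the paper's formula):
# combine per-group benchmark scores for a VM using the user's four weights.
def weighted_vm_score(group_scores: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights.values())
    return sum(weights[g] * group_scores[g] for g in weights) / total_weight

scores  = {"memory": 0.8, "processor": 0.6, "computation": 0.7, "storage": 0.5}
weights = {"memory": 4.0, "processor": 3.0, "computation": 2.0, "storage": 1.0}
print(weighted_vm_score(scores, weights))   # 0.69; higher is better under this scheme
```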

  9. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate

  10. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  11. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    Science.gov (United States)

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to
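
    The 1-NN assignment discussed in the records above can be caricatured as follows; the sketch uses plain per-position identity on equal-length toy sequences, whereas the paper's benchmark works with BLAST-style searches of real reference databases.

```python
# Illustrative sketch of "1-nearest-neighbor" (1-NN) assignment: give the query
# the taxon of its most similar reference sequence. Similarity here is plain
# per-position identity on toy sequences, not a BLAST search of a real database.
def one_nn_assign(query: str, references: dict[str, str]) -> str:
    def identity(a: str, b: str) -> float:
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))
    return max(references, key=lambda taxon: identity(query, references[taxon]))

refs = {"Taxon A": "ACGTACGTAC", "Taxon B": "ACGTTCGATC", "Taxon C": "TTTTACGTAC"}
print(one_nn_assign("ACGTACGTTC", refs))  # Taxon A (9/10 identical positions)
```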

  12. A Privacy-Preserving Benchmarking Platform

    OpenAIRE

    Kerschbaum, Florian

    2010-01-01

    A privacy-preserving benchmarking platform is practically feasible, i.e. its performance is tolerable to the user on current hardware while fulfilling functional and security requirements. This dissertation designs, architects, and evaluates an implementation of such a platform. It contributes a novel (secure computation) benchmarking protocol, a novel method for computing peer groups, and a realistic evaluation of the first ever privacy-preserving benchmarking platform.

  13. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
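
    The PageRank kernel itself is simple enough to sketch; the toy power-iteration version below is for orientation only and does not reproduce the benchmark's data generators, scales, or GraphBLAS formulation.

```python
# Minimal power-iteration PageRank sketch; not the benchmark's kernels or scale.
def pagerank(links: dict[int, list[int]], d: float = 0.85, iters: int = 50) -> dict[int, float]:
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - d) / len(nodes) for n in nodes}
        for n, outs in links.items():
            targets = outs if outs else nodes          # dangling nodes spread uniformly
            share = d * rank[n] / len(targets)
            for m in targets:
                new[m] += share
        rank = new
    return rank

print(pagerank({0: [1, 2], 1: [2], 2: [0]}))
```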

  14. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis is performed on neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly according to the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factors (ADFs) to correct the discontinuity at the assembly interfaces is clearly shown by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for another Monte Carlo code, SERPENT2.

  15. Benchmarking hypercube hardware and software

    Science.gov (United States)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  16. Nekaj novih algoritmov za računalniško podprto načrtovanje montaže: Some new algorithms for computer aided assembly planning:

    OpenAIRE

    Kunica, Zoran; Vranješ, Božo; Hrman, Miljenko

    2003-01-01

    The paper depicts some of the improved and newly implemented algorithms of the computer-aided design (CAD) based system for the plan generation of automatic assembly (GPAS), relating to the definition of assembly sequences and paths, the spatial structuring (layout) of the assembly process for bench-assembly product orientation, connectivity, and the treatment of identical parts in a product. In the approach, parts of which are presented in the paper, the mechanical product to be assembled is, initiall...

  17. Parallel computations of thermo-elasticity benchmark problems: comparison of MPI and OpenMP based program codes

    Czech Academy of Sciences Publication Activity Database

    Kohut, Roman; Starý, Jiří; Kolcun, Alexej

    Ostrava : ÚGN AV ČR, 2007 - (Blaheta, R.; Starý, J.), s. 71-74 ISBN 978-80-86407-12-8. [Seminar on Numerical Analysis. Modelling and Simulation of Chalenging Engineering Problems. Winter School. High-performance and Parallel Computers, Programming Technologies & Numerical Linear Algebra. Ostrava (CZ), 22.01.2007-26.01.2007] R&D Projects: GA AV ČR 1ET400300415; GA ČR GP105/04/P036 Institutional research plan: CEZ:AV0Z30860518 Keywords : mathematical modelling * parallel computations * thermo-mechanical processes Subject RIV: BA - General Mathematics

  18. Coupled computational fluid dynamics and MOC neutronic simulations of Westinghouse PWR fuel assemblies with grid spacers

    International Nuclear Information System (INIS)

    Neutronic coupling with Computational Fluid Dynamics (CFD) has been under development within the US DOE-sponsored “Nuclear Simulation Hub”. The method of characteristics (MOC) neutronics code DeCART ([Joo, 2004], [Kochunas, 2009]) under development at the University of Michigan was coupled with the CFD code STAR-CCM+ to achieve more accurate predictions of fuel assembly performance. At Westinghouse, lower-order neutronic codes such as the nodal code ANC have been coupled to thermal-hydraulics codes such as the subchannel code VIPRE to predict the heat flux and fuel nuclear behavior. However, a more detailed neutronics and temperature/fluid field simulation of fuel assembly models which includes explicit representation of spacer grids would considerably improve the design and assessment of new fuel assembly designs. Coupled STAR-CCM+ / DeCART calculations have been performed for various representative three-dimensional models with explicit representation of spacer grids with mixing vanes. The high-fidelity results have been compared to lower-order simulations. The coupled CFD/MOC solution has provided a more faithful model which includes a more accurate representation of all the important physics such as fission energy, heat convection, heat conduction, and turbulence. Of particular significance is the ability to assess the effects of the mixing grid on the coolant temperature and density distribution using coupled thermal/fluids and neutronic solutions. A more precise cladding temperature can be derived by this approach, which will also enable more accurate prediction of departure from nucleate boiling (DNB), as well as a better understanding of DNB margin and crud build-up on the fuel rod. (author)

  19. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  20. Polymer GARD: computer simulation of covalent bond formation in reproducing molecular assemblies.

    Science.gov (United States)

    Shenhav, Barak; Bar-Even, Arren; Kafri, Ran; Lancet, Doron

    2005-04-01

    The basic Graded Autocatalysis Replication Domain (GARD) model consists of a repertoire of small molecules, typically amphiphiles, which join and leave a non-covalent micelle-like assembly. Its replication behavior is due to occasional fission, followed by a homeostatic growth process governed by the assembly's composition. Limitations of the basic GARD model are its small finite molecular repertoire and the lack of a clear path from a 'monomer world' towards polymer-based living entities. We have now devised an extension of the model (polymer GARD or P-GARD), where a monomer-based GARD serves as a 'scaffold' for oligomer formation, as a result of internal chemical rules. We tested this concept with computer simulations of a simple case of monovalent monomers, whereby more complex molecules (dimers) are formed internally, in a manner resembling biosynthetic metabolism. We have observed events of dimer 'take-over' - the formation of compositionally stable, replication-prone quasi stationary states (composomes) that have appreciable dimer content. The appearance of novel metabolism-like networks obeys a time-dependent power law, reminiscent of evolution under punctuated equilibrium. A simulation under constant population conditions shows the dynamics of takeover and extinction of different composomes, leading to the generation of different population distributions. The P-GARD model offers a scenario whereby biopolymer formation may be a result of rather than a prerequisite for early life-like processes. PMID:16010993

  1. Computer simulation of reaction-induced self-assembly of cellulose via enzymatic polymerization

    International Nuclear Information System (INIS)

    We present a comparison between results of computer simulations and neutron scattering/electron microscopy observations on reaction-induced self-assembly of cellulose molecules synthesized via in vitro polymerization at specific sites of enzymes in an aqueous reaction medium. The experimental results, obtained by using a combined small-angle scattering (SAS) analysis of USANS (ultra-SANS), USAXS (ultra-SAXS), SANS (small-angle neutron scattering), and SAXS (small-angle x-ray scattering) methods over an extremely wide range of wavenumber q (as wide as four orders of magnitude) and of a real-space analysis with field-emission scanning electron microscopy elucidated that: (i) the surface structure of the self-assembly in the medium is characterized by a surface fractal dimension of Ds = 2.3 over a wide length scale (∼30 nm to ∼30 μm); (ii) its internal structure is characterized by crystallized cellulose fibrils spatially arranged with a mass fractal dimension of Dm = 2.1. These results were analysed by Monte Carlo simulation based on the diffusion-limited aggregation of rod-like molecules that model the cellulose molecules. The simulations show similar surface fractal dimensions to those observed in the experiments

  2. Computer simulation of reaction-induced self-assembly of cellulose via enzymatic polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Kawakatsu, Toshihiro [Department of Physics, Faculty of Science, Tohoku University, Sendai 980-8578 (Japan); Tanaka, Hirokazu [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195, Japan (Japan); Koizumi, Satoshi [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195 (Japan); Hashimoto, Takeji [Advanced Science Research Center (ASRC), Japan Atomic Energy Agency (JAEA), Tokai, Ibaraki 319-1195 (Japan)

    2006-09-13

    We present a comparison between results of computer simulations and neutron scattering/electron microscopy observations on reaction-induced self-assembly of cellulose molecules synthesized via in vitro polymerization at specific sites of enzymes in an aqueous reaction medium. The experimental results, obtained by using a combined small-angle scattering (SAS) analysis of USANS (ultra-SANS), USAXS (ultra-SAXS), SANS (small-angle neutron scattering), and SAXS (small-angle x-ray scattering) methods over an extremely wide range of wavenumber q (as wide as four orders of magnitude) and of a real-space analysis with field-emission scanning electron microscopy elucidated that: (i) the surface structure of the self-assembly in the medium is characterized by a surface fractal dimension of Ds = 2.3 over a wide length scale (∼30 nm to ∼30 μm); (ii) its internal structure is characterized by crystallized cellulose fibrils spatially arranged with a mass fractal dimension of Dm = 2.1. These results were analysed by Monte Carlo simulation based on the diffusion-limited aggregation of rod-like molecules that model the cellulose molecules. The simulations show similar surface fractal dimensions to those observed in the experiments.
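
    For orientation, a fractal dimension of a point cloud can be estimated by simple box counting as sketched below; this is an illustration only, not the combined scattering analysis or the authors' diffusion-limited-aggregation simulation.

```python
# Illustrative sketch only: box-counting estimate of a fractal dimension of a
# 3-D point cloud (slope of log N(s) vs. log s). Not the SAS analysis or the
# authors' diffusion-limited-aggregation code.
import numpy as np

def box_counting_dimension(points: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
    """points: (N, 3) coordinates scaled into the unit cube."""
    counts = []
    for s in sizes:
        idx = (points * s).astype(int)            # box index of each point at scale s
        counts.append(len({tuple(row) for row in idx}))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
pts = rng.random((50_000, 3))                     # space-filling cloud: dimension ~ 3
print(round(box_counting_dimension(pts), 2))
```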

  3. Analysis of the WWER-440 AER2 rod ejection benchmark by the SKETCH-N code

    International Nuclear Information System (INIS)

    The neutron kinetics code SKETCH-N has been recently extended to treat hexagonal geometry using a polynomial nodal method based on the conformal mapping of a hexagon into a rectangle. Basic features of the code are outlined. Results of the steady-state benchmark calculations demonstrate excellent accuracy of the nodal method. To test a neutron kinetics module for WWER applications, the second AER rod ejection benchmark is computed and the results are compared with the results of the production WWER codes: BIPR8, DYN3D, HEXTRAN and KIKO3D. The steady-state results show that the SKETCH-N code gives an ejected control rod worth close to that of BIPR8 and HEXTRAN. The assembly power distribution is compared with the DYN3D results. Maximum discrepancies of about 5% are found in the power of peripheral assemblies and assemblies with partially inserted control rods (Authors)

  4. Computation of gap conductance in different fuel assemblies in VVER-1000 type reactors

    International Nuclear Information System (INIS)

    In this paper, the gap conductance of fresh fuel at different axial positions in the fuel assemblies of VVER-1000 type reactors has been calculated using two models, the Calza-Bini model and the RELAP5 model. With these two models, the dependence of the fuel outer surface temperature and the clad inner surface temperature on the gap conductance has been determined using the following procedures: coupling the gap conductance model to a computer program to obtain the temperatures at different axial positions in the fuel and clad; and coupling the gap conductance model to the COBRA-EN output. The results of the calculations and the comparison with the final safety analysis report showed that the RELAP5 model is less accurate than the Calza-Bini model; the Calza-Bini model agrees well with the final safety analysis report results. By combining these two models, a new model with better accuracy was proposed for the gap conductance.
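
    For orientation, a commonly used general form of the fuel-clad gap conductance combines gas conduction, radiation and contact terms; the expression below is a generic textbook form with assumed symbols, not necessarily the exact Calza-Bini or RELAP5 correlation.

```latex
% Generic gap-conductance form (symbols assumed; not the exact correlations used above):
h_{\mathrm{gap}} = \frac{k_{\mathrm{gas}}}{\delta_{\mathrm{gap}} + g_{\mathrm{fuel}} + g_{\mathrm{clad}}}
                   + h_{\mathrm{rad}} + h_{\mathrm{contact}}
```

    Here δ_gap is the fuel-clad gap width, g_fuel and g_clad are temperature-jump distances, and h_rad and h_contact are the radiative and solid-contact contributions.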

  5. Selecting benchmarks for reactor calculations

    International Nuclear Information System (INIS)

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user, thereby introducing a user bias into the process. In this paper we present a method for the selection of these benchmarks for reactor applications and for uncertainty reduction based on the Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
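
    A minimal sketch of the similarity measure described above is given below: the correlation coefficient between an application response and a benchmark response computed with the same set of TMC random nuclear-data files; the k-eff samples are made up for illustration.

```python
# Minimal sketch of the similarity measure described above: Pearson correlation
# between an application response and a benchmark response evaluated with the
# same TMC random nuclear-data files. The k-eff samples below are made up.
import numpy as np

keff_application = np.array([1.0012, 0.9987, 1.0031, 0.9995, 1.0020, 0.9978])
keff_benchmark   = np.array([1.0008, 0.9990, 1.0026, 0.9999, 1.0014, 0.9982])

r = np.corrcoef(keff_application, keff_benchmark)[0, 1]
print(f"correlation coefficient = {r:.3f}")   # close to 1: the benchmark is representative
```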

  6. A benchmark exercise on the use of CFD codes for containment issues using best practice guidelines: A computational challenge

    International Nuclear Information System (INIS)

    In the framework of the 5th EU-FWP project ECORA, the capabilities of CFD software packages for simulating flows in the containment of nuclear reactors were evaluated. Four codes were assessed using two basic tests in the PANDA facility addressing the transport of gases in a multi-compartment geometry. The assessment included a first attempt to use Best Practice Guidelines (BPGs) for the analysis of long, large-scale, transient problems. Due to the large computational overhead of the analysis, the BPGs could not be fully applied. It was thus concluded that the application of the BPGs to full containment analysis is out of reach with the currently available computer power. On the other hand, CFD codes used with a sufficiently detailed mesh seem to be capable of giving reliable answers on issues relevant to containment simulation using standard two-equation turbulence models. Development of turbulence models is ongoing. If it turns out that advanced (and more computationally intensive) turbulence models are not needed, the use of the BPGs for 'certified' simulations could become feasible within a relatively short time

  7. A benchmark exercise on the use of CFD codes for containment issues using best practice guidelines: a computational challenge

    International Nuclear Information System (INIS)

    In the framework of the 5th EU-FWP project ECORA, the capabilities of CFD software packages for simulating flows in the containment of nuclear reactors were evaluated. Four codes were assessed using two basic tests in the PANDA facility addressing the transport of gases in a multi-compartment geometry. The assessment included a first attempt to apply Best Practice Guidelines (BPGs) to the analysis of long, large-scale, transient problems. Due to the large computational overhead of the analysis, the BPGs could not be fully applied. It was thus concluded that the application of the BPGs to full containment analysis is out of reach with the currently available computer power. On the other hand, CFD codes used with a sufficiently detailed mesh seem to be capable of giving reliable answers on issues relevant to containment simulation using standard two-equation turbulence models. Development of turbulence models is ongoing. If it turns out that advanced (and more computationally intensive) turbulence models are not needed, the use of the BPGs for 'certified' simulations could become feasible within a relatively short time. (authors)

  8. Benchmark analysis of fission-rate distributions in a series of spherical depleted-uranium assemblies for hybrid-reactor design

    International Nuclear Information System (INIS)

    Highlights: • We do simulations using MCNP code and ENDF/B-V.0 library. • The fission rate distribution on depleted uranium assemblies was analyzed. • The calculations overestimate the measured fission rates. • The observed differences are discussed. - Abstract: The nuclear performance of a fission blanket in a hybrid reactor has been validated by analyzing fission-rate experiments with a series of spherical depleted-uranium assemblies. Calculations were made with the Monte Carlo transport code MCNP5 and the ENDF/B-V.0 continuous-energy cross sections and compared to the measured results. The ratios of calculated to experimental values (C/E) for the fission rate and the fission-rate ratio of 238U to 235U are presented along with a discussion of the validation of the ENDF/B-V.0 library regarding its use in the design of the fission blanket. Overestimations are observed in the calculation of the 238U and 235U fission rates at all positions, except the ones near the outer surfaces of the assemblies, and the C/Es of the fission rate decreased as the thickness of the depleted-uranium (DU) layer increased, while most of the C/Es of the fission-rate ratio of 238U to 235U were close to unity, being within the range of 0.95–1.05

  9. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowsa, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  10. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  11. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  12. Computational modelling of genome-wide [corrected] transcription assembly networks using a fluidics analogy.

    Directory of Open Access Journals (Sweden)

    Yousry Y Azmy

    Understanding how a myriad of transcription regulators work to modulate mRNA output at thousands of genes remains a fundamental challenge in molecular biology. Here we develop a computational tool to aid in assessing the plausibility of gene regulatory models derived from genome-wide expression profiling of cells mutant for transcription regulators. mRNA output is modelled as fluid flow in a pipe lattice, with assembly of the transcription machinery represented by the effect of valves. Transcriptional regulators are represented as external pressure heads that determine flow rate. Modelling mutations in regulatory proteins is achieved by adjusting valves' on/off settings. The topology of the lattice is designed by the experimentalist to resemble the expected interconnection between the modelled agents and their influence on mRNA expression. Users can compare multiple lattice configurations so as to find the one that minimizes the error with experimental data. This computational model provides a means to test the plausibility of transcription regulation models derived from large genomic data sets.
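
    The fluidics analogy boils down to solving a linear flow network, as in the toy sketch below; this illustrates the idea only, with assumed names and a three-pipe network, and is not the authors' tool.

```python
# Illustrative sketch only (assumed names, not the authors' tool): a tiny pipe
# network solved by nodal analysis, the same linear algebra as a resistor
# network. A closed valve is a zero conductance; the regulator is a pressure head.
import numpy as np

def mrna_output(g_valve: float, g_mid: float, g_out: float, head: float = 1.0) -> float:
    # Flow balance at the two internal nodes A (after the valve) and B (before the outlet).
    A = np.array([[g_valve + g_mid, -g_mid],
                  [-g_mid,           g_mid + g_out]])
    b = np.array([g_valve * head, 0.0])
    p_a, p_b = np.linalg.solve(A, b)
    return g_out * p_b                     # flow into the outlet, the "mRNA output"

print(mrna_output(1.0, 1.0, 1.0))          # "wild type": all valves open -> 1/3
print(mrna_output(0.0, 1.0, 1.0))          # "mutant": assembly valve closed -> 0.0
```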

  13. Computational analysis of hole placement errors for directed self-assembly

    Science.gov (United States)

    Yamamoto, K.; Nakano, T.; Muramatsu, M.; Tomita, T.; Matsuzaki, K.; Kitano, T.

    2015-03-01

    We report a computational study of directed self-assembly (DSA) morphology dislocations caused by the thermal fluctuation of block copolymers (BCPs) in grapho-epitaxial cylindrical guides. The dislocation factor, expressed as DSA-oriented placement errors (DSA-PEs), was numerically evaluated by historical data acquisition using dissipative particle dynamics simulation. The calculated DSA-PEs were compared with experimental results for two kinds of guide pattern: a resist guide with no surface modification (REF guide) and a resist guide coated with polystyrene (PS-brush guide). The vertical distribution of DSA-PEs within the cylindrical guides was calculated, and relatively high DSA-PEs near the top region were deduced, particularly in the REF guide. The tendency of the experimental DSA-PEs was well explained by the calculation when a fluctuation parameter was included on the wall particles. In the PS-brush guide, the calculated DSA-PEs increased drastically as the guide became more fluctuating. This result indicates that a hard and steady guide condition should be fabricated in the PS-brush guide so as to achieve better placement. Computations over a variety of guide critical dimensions (CDs) suggest that a smaller guide CD is better for obtaining good placement. The smallest DSA-PE value in this study was observed in the PS-brush guide with the smaller guide CD because of the strong restriction on the flexibility of the BCP arrangement.

  14. Numerical and computational aspects of the coupled three-dimensional core/ plant simulations: organization for economic cooperation and development/ U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-II. 2. TRAB-3D/SMABRE Calculation of the OECD/ NRC PWR MSLB Benchmark

    International Nuclear Information System (INIS)

    All three exercises of the OECD/NRC Pressurized Water Reactor (PWR) Main-Steam-Line-Break (MSLB) Benchmark were calculated at VTT Energy. The SMABRE thermal-hydraulics code was used for the first exercise, the plant simulation with point-kinetics neutronics. The second exercise was calculated with the TRAB-3D three-dimensional reactor dynamics code. The third exercise was calculated with the combination TRAB-3D/SMABRE. Both codes have been developed at VTT Energy. The results of all the exercises agree reasonably well with those of the other participants; thus, instead of reporting the results, this paper concentrates on describing the computational aspects of the calculation with the foregoing codes and on some observations of the sensitivity of the results. In the TRAB-3D neutron kinetics, the two-group diffusion equations are solved in homogenized fuel assembly geometry with an efficient two-level nodal method. The point of the two-level iteration scheme is that only one unknown variable per node, the average neutron flux, is calculated during the inner iteration. The nodal flux shapes and cross sections are recalculated only once in the outer iteration loop. The TRAB-3D core model also includes parallel one-dimensional channel hydraulics with detailed fuel models. Advanced implicit time discretization methods are used in all submodels. SMABRE is a fast-running five-equation model completed by a drift-flux model, with a time discretization based on a non-iterative semi-implicit algorithm. For the third exercise of the benchmark, the TMI-1 models of TRAB-3D and SMABRE were coupled. This was the first time these codes were coupled together. However, similar coupling of the HEXTRAN and SMABRE codes has been shown to be stable and efficient when used in safety analyses of Finnish and foreign VVER-type reactors. The coupling used between the two codes is called a parallel coupling. SMABRE solves the thermal hydraulics both in the cooling circuit and in the core
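
    The two-level iteration idea described above can be shown as a generic skeleton. The sketch below only illustrates an inner/outer scheme of that kind and is not the TRAB-3D implementation; build_coupling and update_shapes are hypothetical placeholders for application-specific routines.

```python
import numpy as np

def two_level_solve(build_coupling, update_shapes, phi0,
                    n_outer=20, n_inner=50, tol=1e-8):
    """Generic inner/outer scheme: inner Jacobi sweeps update only the node-average
    fluxes; shapes (and the coupling built from them) are refreshed once per outer pass."""
    phi = np.asarray(phi0, dtype=float).copy()
    shapes = update_shapes(phi)
    for _ in range(n_outer):
        A, b = build_coupling(shapes)            # coupling coefficients from current shapes
        for _ in range(n_inner):                 # inner: one unknown per node (average flux)
            phi_new = (b - (A - np.diag(np.diag(A))) @ phi) / np.diag(A)
            done = np.max(np.abs(phi_new - phi)) < tol
            phi = phi_new
            if done:
                break
        new_shapes = update_shapes(phi)          # outer: recompute shapes/cross sections once
        if np.max(np.abs(new_shapes - shapes)) < tol:
            break
        shapes = new_shapes
    return phi

# Trivial toy usage: "shapes" are the fluxes themselves and the coupling is a fixed,
# diagonally dominant system, purely to exercise the skeleton.
toy_A = np.array([[4.0, -1.0], [-1.0, 4.0]])
toy_b = np.array([1.0, 2.0])
print(two_level_solve(lambda s: (toy_A, toy_b), lambda p: p.copy(), np.zeros(2)))
```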

  15. COXPRO-II: a computer program for calculating radiation and conduction heat transfer in irradiated fuel assemblies

    International Nuclear Information System (INIS)

    This report describes the computer program COXPRO-II, which was written for performing thermal analyses of irradiated fuel assemblies in a gaseous environment with no forced cooling. The heat transfer modes within the fuel pin bundle are radiation exchange among fuel pin surfaces and conduction by the stagnant gas. The array of parallel cylindrical fuel pins may be enclosed by a metal wrapper or shroud. Heat is dissipated from the outer surface of the fuel pin assembly by radiation and convection. Both equilateral triangle and square fuel pin arrays can be analyzed. Steady-state and unsteady-state conditions are included. Temperatures predicted by the COXPRO-II code have been validated by comparing them with experimental measurements. Temperature predictions compare favorably to temperature measurements in pressurized water reactor (PWR) and liquid-metal fast breeder reactor (LMFBR) simulated, electrically heated fuel assemblies. Also, temperature comparisons are made on an actual irradiated Fast-Flux Test Facility (FFTF) LMFBR fuel assembly
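
    As a minimal illustration of the two heat transfer modes named above (surface-to-surface radiation and conduction through stagnant gas), the sketch below balances them for a single heated rod inside a cooler enclosure. The geometry and input values are hypothetical; this is not the COXPRO-II pin-bundle formulation.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def rod_surface_temperature(q_per_m, r_rod, r_wall, t_wall, eps_rod, eps_wall, k_gas):
    """Steady-state rod surface temperature from q' = q'_radiation + q'_conduction."""
    def q_rad(t_rod):      # grey-body radiation between long concentric cylinders
        denom = 1.0 / eps_rod + (r_rod / r_wall) * (1.0 / eps_wall - 1.0)
        return 2.0 * math.pi * r_rod * SIGMA * (t_rod**4 - t_wall**4) / denom
    def q_cond(t_rod):     # conduction through the stagnant gas annulus
        return 2.0 * math.pi * k_gas * (t_rod - t_wall) / math.log(r_wall / r_rod)
    lo, hi = t_wall, t_wall + 2000.0             # bisection on the energy balance
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if q_rad(mid) + q_cond(mid) < q_per_m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: 300 W/m rod of 5 mm radius inside a 20 mm radius wall at
# 400 K, emissivities 0.8, helium backfill (k ~ 0.15 W/m.K near these temperatures).
print(rod_surface_temperature(300.0, 0.005, 0.020, 400.0, 0.8, 0.8, 0.15))
```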

  16. An introduction to benchmarking in healthcare.

    Science.gov (United States)

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  17. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose/Objective: The potential for on-line error detection using electronic portal images (EPIs) has stimulated the investigation of computer-based methods for matching portal images with reference or 'gold standard' images. The lack of absolute truth for clinical images is a major obstacle to the evaluation of these methods. The purpose of this investigation was to create a set of realistic test EPIs with known setup errors for use as a benchmark for evaluation and intercomparison of computer-based methods, including automatic and user-guided techniques, for EPI analysis. Materials and Methods: Digitally reconstructed electronic portal images (DREPIs) were computed using the visible male CT data set from the National Library of Medicine (NLM). (DREPIs are computed using high energy attenuation coefficients to simulate megavoltage images.) The NLM CT data set comprises 512x512x1 mm contiguous slices from the tip of the head to below the knees. The subject was frozen and scanned very soon after non-traumatizing death, and thus the visualized anatomy closely resembles that of a living person, but without breathing and other motion artifacts. Also since dose was not a consideration the signal-to-noise ratio is higher compared with typical 1 mm slices obtained on a living person. Because of the quality of the CT data, the quality of the DREPIs had to be degraded, and modified in other ways, to create realistic test cases. Modifications included: 1) contrast histogram matching to actual EPIs, 2) addition of structured noise by blending an 'open field' EPI image with the DREPI, 3) addition of random unstructured noise, and 4) Gaussian blurring to simulate patient motion and head scatter effects. (It is important to note that there is no standard appearance or quality for EPIs. The appearance of EPIs is quite variable, especially across EPIDs from different manufacturers. Even for a given system, EPIs are quite sensitive to system calibration and acquisition parameters
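
    The four degradation steps listed above can be sketched as a short image-processing pipeline. The code below is only an illustration of that kind of processing, with hypothetical parameter values, and is not the authors' software.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_histogram(image, reference):
    """Map grey levels of `image` so its histogram resembles that of `reference`."""
    src_vals, src_idx, src_counts = np.unique(image.ravel(),
                                              return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / image.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    return np.interp(src_cdf, ref_cdf, ref_vals)[src_idx].reshape(image.shape)

def degrade_drepi(drepi, reference_epi, open_field,
                  blend=0.3, noise_sigma=0.02, blur_sigma=1.5, seed=None):
    rng = np.random.default_rng(seed)
    img = match_histogram(drepi, reference_epi)            # 1) contrast histogram matching
    img = (1.0 - blend) * img + blend * open_field         # 2) structured noise (open field)
    img = img + rng.normal(0.0, noise_sigma * img.std(), img.shape)  # 3) unstructured noise
    return gaussian_filter(img, blur_sigma)                # 4) blur: motion / head scatter

# Toy usage with random arrays standing in for the DREPI, a reference EPI and an open field.
rng = np.random.default_rng(0)
print(degrade_drepi(rng.random((64, 64)), rng.random((64, 64)), rng.random((64, 64))).shape)
```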

  18. Development and using computer codes for improvement of defect assembly detection on Russian WWER NPPs

    International Nuclear Information System (INIS)

    Diagnostic methods of fuel failure detection aimed at improving radiation safety and shortening fuel reload time at Russian WWERs are currently under development. The work includes the creation of new computer tools to increase the effectiveness of fuel monitoring and the reliability of leakage tests. Reliability of failure detection can be noticeably improved with an integrated approach comprising the following methods. The first is fuel failure analysis under operating conditions, performed with the pilot version of an expert system developed on the basis of the mechanistic code RTOP-CA. The second stage of failure monitoring is 'sipping' tests in the mast of the refueling machine. Leakage tests are the final stage of failure monitoring. A new technique with pressure cycling in specialized casks was introduced to meet the requirements of higher reliability in detection/confirmation of leaking fuel. Measurement of the activity release kinetics during pressure cycling, together with handling of the acquired data with the RTOP-LT code, makes it possible to evaluate the defect size in a leaking fuel assembly. The mechanistic codes RTOP-CA and RTOP-LT were verified against specialized experimental data, and the codes have been certified by the Russian authority Rostechnadzor. The pressure cycling method in specialized casks now has official status and is used at all Russian WWER units. Some results of applying the integrated approach to fuel failure monitoring at several Russian NPPs with WWER units are reported in the present paper. Predictions of the current version of the expert system are compared with the results of the leakage tests and with the estimates of the defect size obtained with the pressure cycling technique. Using the RTOP-CA code, the activity level is assessed for the following fuel campaign if a leaking fuel assembly is decided to be reloaded into the core. A project of the automated computer system on the basis of

  19. Sensitivity/uncertainty analysis for BWR configurations of Exercise I-2 of UAM benchmark

    International Nuclear Information System (INIS)

    In order to evaluate the uncertainties in the prediction of lattice-averaged parameters, which serve as input data for core neutronics codes, Exercise I-2 of the OECD benchmark for uncertainty analysis in modeling (UAM) was proposed. This work aims to perform a sensitivity/uncertainty analysis of the BWR configurations defined in the benchmark for the purpose of Exercise I-2. Criticality calculations are done for a 7x7 BWR fresh fuel assembly at HFP in four configurations: a single unrodded fuel assembly, a rodded fuel assembly, an assembly/reflector case and an assembly in a color-set. The SCALE6.1 code package is used to propagate cross-section covariance data through lattice physics calculations to uncertainties in both k-effective and the two-group assembly-homogenized cross sections. Computed sensitivities and uncertainties for all configurations are analyzed and compared. It was found that the uncertainties are very similar for the four test problems, showing that the influence of the assembly environment on uncertainty prediction is very small. (author)
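
    The covariance propagation step can be summarized by the first-order "sandwich rule". The sketch below uses a placeholder sensitivity vector and covariance matrix rather than benchmark data, and is not the SCALE6.1 implementation.

```python
import numpy as np

def output_relative_uncertainty(S, C):
    """Sandwich rule: S is the relative sensitivity vector (dR/R per dsigma/sigma),
    C is the relative covariance matrix of the cross-section parameters."""
    return float(np.sqrt(S @ C @ S))

# Toy example with three cross-section parameters (2%, 5% and 1% standard deviations,
# assumed uncorrelated); the numbers are illustrative only.
S = np.array([0.30, -0.10, 0.05])
C = np.diag([0.02, 0.05, 0.01]) ** 2
print(output_relative_uncertainty(S, C))
```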

  20. AN APPROACH THAT AUTOMATICALLY DETERMINES PART CONTACT RELATIONS IN COMPUTER AIDED ASSEMBLY MODELING

    Directory of Open Access Journals (Sweden)

    Cem SİNANOĞLU

    2002-03-01

    Full Text Available This study describes an approach for modeling an assembly system, which is one of the main problems encountered during assembly. In this approach a wire-frame model of the assembly system is used, and each part is drawn in a different color. The assembly drawing and its various views are scanned along three different axes (-x, -y, -z). Scanning is done automatically by the software developed. The color codes obtained by scanning, each representing a different assembly part, are assessed by the software along the six directions of the Cartesian coordinate axes. Contact matrices are then formed to represent the relations among the assembly parts. These matrices are complete enough to represent an assembly model. The approach was applied to various assembly systems: pincer, hinge and clutch systems. One of the basic advantages of this approach is that the wire-frame model of the assembly system can be created in various CAD programs, and the approach can be applied to assembly systems containing many parts.
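
    The core idea of turning colour-coded scans into part-contact relations can be illustrated on a small labelled voxel grid. The sketch below is a hypothetical simplification (a 3-D label array instead of scanned wire-frame views), not the authors' software.

```python
import numpy as np

def contact_matrix(labels, n_parts):
    """labels: 3-D integer array; 0 = empty space, 1..n_parts = part colour codes."""
    contact = np.zeros((n_parts + 1, n_parts + 1), dtype=bool)
    for axis in range(3):                              # scan along x, y and z
        a = np.moveaxis(labels, axis, 0)
        pairs = np.stack([a[:-1].ravel(), a[1:].ravel()], axis=1)
        touching = pairs[(pairs[:, 0] != pairs[:, 1]) & (pairs != 0).all(axis=1)]
        for p, q in touching:                          # adjacent cells with different parts
            contact[p, q] = contact[q, p] = True
    return contact[1:, 1:]                             # drop the "empty" row and column

# Toy example: parts 1 and 2 touch along x; part 3 is separated by empty space.
grid = np.zeros((6, 4, 4), dtype=int)
grid[0:2] = 1
grid[2:3] = 2
grid[4:5] = 3
print(contact_matrix(grid, 3).astype(int))
```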

  1. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  2. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Vol. 60 (2005), pp. 345-358. ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords: middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  3. A quantum CISC compiler and scalable assembler for quantum computing on large systems

    International Nuclear Information System (INIS)

    Using the cutting-edge high-speed parallel cluster HLRB-II (with a total LINPACK performance of 63.3 TFlops/s) we present a quantum CISC compiler into time-optimised or decoherence-protected complex instruction sets. They comprise effective multi-qubit interactions with up to 10 qubits. We show how to assemble these medium-sized CISC modules in a scalable way for quantum computation on large systems. Extending the toolbox of universal gates by optimised complex multi-qubit instruction sets paves the way to fight decoherence in realistic Markovian and non-Markovian settings. The advantage of quantum CISC compilation over standard RISC compilations into one- and two-qubit universal gates is demonstrated inter alia for the quantum Fourier transform (QFT) and for multiply-controlled NOT gates. The speed-up is up to a factor of six, thus giving significantly better performance under decoherence. - Implications for upper limits to time complexities are also derived

  4. Polymer Gard: Computer Simulation of Covalent Bond Formation in Reproducing Molecular Assemblies

    Science.gov (United States)

    Shenhav, Barak; Bar-Even, Arren; Kafri, Ran; Lancet, Doron

    2005-04-01

    The basic Graded Autocatalysis Replication Domain (GARD) model consists of a repertoire of small molecules, typically amphiphiles, which join and leave a non-covalent micelle-like assembly. Its replication behavior is due to occasional fission, followed by a homeostatic growth process governed by the assembly's composition. Limitations of the basic GARD model are its small finite molecular repertoire and the lack of a clear path from a 'monomer world' towards polymer-based living entities. We have now devised an extension of the model (polymer GARD or P-GARD), where a monomer-based GARD serves as a 'scaffold' for oligomer formation, as a result of internal chemical rules. We tested this concept with computer simulations of a simple case of monovalent monomers, whereby more complex molecules (dimers) are formed internally, in a manner resembling biosynthetic metabolism. We have observed events of dimer 'take-over': the formation of compositionally stable, replication-prone quasi-stationary states (composomes) that have appreciable dimer content. The appearance of novel metabolism-like networks obeys a time-dependent power law, reminiscent of evolution under punctuated equilibrium. A simulation under constant population conditions shows the dynamics of takeover and extinction of different composomes, leading to the generation of different population distributions. The P-GARD model offers a scenario whereby biopolymer formation may be a result of, rather than a prerequisite for, early life-like processes.

  5. Characterizing universal gate sets via dihedral benchmarking

    Science.gov (United States)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π/8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.

  6. Benchmarking of photon and coupled neutron and photon process of SuperMC 2.0

    International Nuclear Information System (INIS)

    Super Monte Carlo Calculation Program for Nuclear and Radiation Process (SuperMC), developed by the FDS Team in China, is a multi-functional simulation program based mainly on the Monte Carlo (MC) method and advanced computer technology. This paper focuses on benchmarking the photon and coupled neutron-photon physics of SuperMC 2.0. The integral photon leakage rates in spherical and hemispherical shell experiments were used to verify the photon and coupled neutron-photon transport physics. A vanadium assembly experiment and the ADS benchmark were used as comprehensive benchmarks. Correctness was preliminarily verified by comparing the calculation results of SuperMC with experimental results and MCNP calculation results. (author)

  7. Sequence assembly

    DEFF Research Database (Denmark)

    Scheibye-Alsing, Karsten; Hoffmann, S.; Frankel, Annett Maria;

    2009-01-01

    Despite the rapidly increasing number of sequenced and re-sequenced genomes, many issues regarding the computational assembly of large-scale sequencing data have remained unresolved. Computational assembly is crucial in large genome projects as well as for the evolving high-throughput technologies and...... plays an important role in processing the information generated by these methods. Here, we provide a comprehensive overview of the currently publicly available sequence assembly programs. We describe the basic principles of computational assembly along with the main concerns, such as repetitive sequences...... in genomic DNA, highly expressed genes and alternative transcripts in EST sequences. We summarize existing comparisons of different assemblers and provide detailed descriptions and directions for download of assembly programs at: http://genome.ku.dk/resources/assembly/methods.html....

  8. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions used to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are more easily implemented with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  9. Application of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the OECD/NRC BWR turbine trip benchmark and its performance on multi-processor computers

    International Nuclear Information System (INIS)

    The OECD/NRC BWR Turbine Trip (TT) Benchmark is investigated to perform a code-to-code comparison of coupled codes, including a comparison to measured data which are available from turbine trip experiments at Peach Bottom 2. This benchmark problem for a BWR over-pressure transient represents a challenging application of coupled codes which integrate 3-dimensional neutron kinetics into thermal-hydraulic system codes for best-estimate simulation of plant transients. This transient represents a typical application of coupled codes, whose calculations are usually performed on powerful workstations using a single CPU. Nowadays, multi-CPU systems are much more readily available: powerful workstations already provide 4 to 8 CPUs, and computer centers give access to multi-processor systems with CPU counts on the order of 16 up to several hundred. Therefore, the performance of the coupled code Athlet-Quabox/Cubbox on multi-processor systems is studied. Different cases of application lead to different requirements on code efficiency, because the amount of computer time spent in different parts of the code varies. This paper presents the main results of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the BWR TT Benchmark, together with evaluations of the code performance on multi-processor computers. (authors)

  10. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected together. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculations. (Author)

  11. Single PWR spent fuel assembly heat transfer data for computer code evaluations

    International Nuclear Information System (INIS)

    The descriptions and results of two separate heat transfer tests designed to investigate the dry storage of commercial PWR spent fuel assemblies are presented. Presented first are descriptions and selected results from the Fuel Temperature Test performed at the Engine Maintenance and Disassembly facility on the Nevada Test Site. An actual spent fuel assembly from the Turkey Point Unit Number 3 Reactor, with a decay heat level of 1.17 kW, was installed vertically in a test stand mounted canister/liner assembly. The boundary temperatures were controlled and the canister backfill gases were alternated between air, helium and vacuum to investigate the primary heat transfer mechanisms of convection, conduction and radiation. The assembly temperature profiles were experimentally measured using installed thermocouple instrumentation. Also presented are the results from the Single Assembly Heat Transfer Test designed and fabricated by Allied General Nuclear Services, under contract to the Department of Energy, and ultimately conducted by the Pacific Northwest Laboratory. For this test, an electrically heated 15 x 15 rod assembly was used to model a single PWR spent fuel assembly. The electrically heated model fuel assembly permitted various 'decay heat' levels to be tested; 1.0 kW and 0.5 kW were used for these tests. The model fuel assembly was positioned within a prototypic fuel tube and in turn placed within a double-walled sealed cask. The complete test assembly could be positioned at any desired orientation (horizontal, vertical, and 25° from horizontal for the present work) and backfilled as desired (air, helium, or vacuum). Tests were run for all combinations of 'decay heat', backfill, and orientation. Boundary conditions were imposed by temperature-controlled guard heaters installed on the cask exterior surface

  12. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  13. Computational study of trimer self-assembly and fluid phase behavior

    International Nuclear Information System (INIS)

    The fluid phase diagram of trimer particles composed of one central attractive bead and two repulsive beads was determined as a function of simple geometric parameters using flat-histogram Monte Carlo methods. A variety of self-assembled structures were obtained including spherical micelle-like clusters, elongated clusters, and densely packed cylinders, depending on both the state conditions and shape of the trimer. Advanced simulation techniques were employed to determine transitions between self-assembled structures and macroscopic phases using thermodynamic and structural definitions. Simple changes in particle geometry yield dramatic changes in phase behavior, ranging from macroscopic fluid phase separation to molecular-scale self-assembly. In special cases, both self-assembled, elongated clusters and bulk fluid phase separation occur simultaneously. Our work suggests that tuning particle shape and interactions can yield superstructures with controlled architecture
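
    Flat-histogram sampling of the kind mentioned above can be illustrated on a much simpler system. The sketch below runs Wang-Landau sampling for a toy one-dimensional Ising chain; it demonstrates only the flat-histogram idea and is unrelated to the trimer model's actual potentials and ensembles.

```python
import numpy as np

def wang_landau_ising(n_spins=12, ln_f_final=1e-4, flatness=0.8, seed=0):
    """Estimate ln g(E) for a periodic 1-D Ising chain with a flat-histogram walk."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], n_spins)
    e = -int(np.sum(spins * np.roll(spins, 1)))        # E = -sum_i s_i s_{i+1}
    n_bins = n_spins // 2 + 1                          # allowed E: -N, -N+4, ..., +N
    idx = lambda energy: (energy + n_spins) // 4
    ln_g = np.zeros(n_bins)
    ln_f = 1.0
    while ln_f > ln_f_final:
        hist = np.zeros(n_bins)
        while True:
            for _ in range(500 * n_spins):
                i = rng.integers(n_spins)
                de = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
                if rng.random() < np.exp(ln_g[idx(e)] - ln_g[idx(e + de)]):
                    spins[i] *= -1                     # accept the spin flip
                    e += de
                ln_g[idx(e)] += ln_f                   # update density-of-states estimate
                hist[idx(e)] += 1
            if hist.min() >= flatness * hist.mean():   # histogram sufficiently flat
                break
        ln_f *= 0.5                                    # refine the modification factor
    return ln_g - ln_g[0]                              # ln g(E) relative to the ground state

print(wang_landau_ising())
```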

  14. Next-Generation Sequence Assembly: Four Stages of Data Processing and Computational Challenges

    OpenAIRE

    El-Metwally, Sara; Hamza, Taher; Zakaria, Magdi; Helmy, Mohamed

    2013-01-01

    Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph constru...
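
    As an illustration of the graph-construction stage mentioned above, the sketch below builds a toy de Bruijn graph from a few short reads; it is a generic textbook construction, not code from any of the reviewed assemblers.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k=4):
    """Nodes are (k-1)-mers; a directed edge is added for every k-mer seen in the reads."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])     # edge: prefix (k-1)-mer -> suffix (k-1)-mer
    return graph

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]      # toy reads, not real sequencing data
for node, successors in sorted(de_bruijn_graph(reads).items()):
    print(node, "->", sorted(successors))
```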

  15. Molecular design driving tetraporphyrin self-assembly on graphite: a joint STM, electrochemical and computational study

    Science.gov (United States)

    El Garah, M.; Santana Bonilla, A.; Ciesielski, A.; Gualandi, A.; Mengozzi, L.; Fiorani, A.; Iurlo, M.; Marcaccio, M.; Gutierrez, R.; Rapino, S.; Calvaresi, M.; Zerbetto, F.; Cuniberti, G.; Cozzi, P. G.; Paolucci, F.; Samorì, P.

    2016-07-01

    Tuning the intermolecular interactions among suitably designed molecules forming highly ordered self-assembled monolayers is a viable approach to control their organization at the supramolecular level. Such a tuning is particularly important when applied to sophisticated molecules combining functional units which possess specific electronic properties, such as electron/energy transfer, in order to develop multifunctional systems. Here we have synthesized two tetraferrocene-porphyrin derivatives that by design can selectively self-assemble at the graphite/liquid interface into either face-on or edge-on monolayer-thick architectures. The former supramolecular arrangement consists of two-dimensional planar networks based on hydrogen bonding among adjacent molecules whereas the latter relies on columnar assembly generated through intermolecular van der Waals interactions. Scanning Tunneling Microscopy (STM) at the solid-liquid interface has been corroborated by cyclic voltammetry measurements and assessed by theoretical calculations to gain multiscale insight into the arrangement of the molecule with respect to the basal plane of the surface. The STM analysis allowed the visualization of these assemblies with a sub-nanometer resolution, and cyclic voltammetry measurements provided direct evidence of the interactions of porphyrin and ferrocene with the graphite surface and offered also insight into the dynamics within the face-on and edge-on assemblies. The experimental findings were supported by theoretical calculations to shed light on the electronic and other physical properties of both assemblies. The capability to engineer the functional nanopatterns through self-assembly of porphyrins containing ferrocene units is a key step toward the bottom-up construction of multifunctional molecular nanostructures and nanodevices.

  16. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  17. Molecular design driving tetraporphyrin self-assembly on graphite: a joint STM, electrochemical and computational study.

    Science.gov (United States)

    El Garah, M; Santana Bonilla, A; Ciesielski, A; Gualandi, A; Mengozzi, L; Fiorani, A; Iurlo, M; Marcaccio, M; Gutierrez, R; Rapino, S; Calvaresi, M; Zerbetto, F; Cuniberti, G; Cozzi, P G; Paolucci, F; Samorì, P

    2016-07-14

    Tuning the intermolecular interactions among suitably designed molecules forming highly ordered self-assembled monolayers is a viable approach to control their organization at the supramolecular level. Such a tuning is particularly important when applied to sophisticated molecules combining functional units which possess specific electronic properties, such as electron/energy transfer, in order to develop multifunctional systems. Here we have synthesized two tetraferrocene-porphyrin derivatives that by design can selectively self-assemble at the graphite/liquid interface into either face-on or edge-on monolayer-thick architectures. The former supramolecular arrangement consists of two-dimensional planar networks based on hydrogen bonding among adjacent molecules whereas the latter relies on columnar assembly generated through intermolecular van der Waals interactions. Scanning Tunneling Microscopy (STM) at the solid-liquid interface has been corroborated by cyclic voltammetry measurements and assessed by theoretical calculations to gain multiscale insight into the arrangement of the molecule with respect to the basal plane of the surface. The STM analysis allowed the visualization of these assemblies with a sub-nanometer resolution, and cyclic voltammetry measurements provided direct evidence of the interactions of porphyrin and ferrocene with the graphite surface and offered also insight into the dynamics within the face-on and edge-on assemblies. The experimental findings were supported by theoretical calculations to shed light on the electronic and other physical properties of both assemblies. The capability to engineer the functional nanopatterns through self-assembly of porphyrins containing ferrocene units is a key step toward the bottom-up construction of multifunctional molecular nanostructures and nanodevices. PMID:27376633

  18. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  19. Computer-aided design of nanostructures from self- and directed-assembly of soft matter building blocks

    Science.gov (United States)

    Nguyen, Trung Dac

    2011-12-01

    Functional materials that are active at nanometer scales and adaptive to environment have been highly desirable for a huge array of novel applications ranging from photonics, sensing, fuel cells, smart materials to drug delivery and miniature robots. These bio-inspired features imply that the underlying structure of this type of materials should possess a well-defined ordering as well as the ability to reconfigure in response to a given external stimulus such as temperature, electric field, pH or light. In this thesis, we employ computer simulation as a design tool, demonstrating that various ordered and reconfigurable structures can be obtained from the self- and directed-assembly of soft matter nano-building blocks such as nanoparticles, polymer-tethered nanoparticles and colloidal particles. We show that, besides thermodynamic parameters, the self-assembly of these building blocks is governed by nanoparticle geometry, the number and attachment location of tethers, solvent selectivity, balance between attractive and repulsive forces, nanoparticle size polydispersity, and field strength. We demonstrate that higher-order nanostructures, i.e. those for which the correlation length is much greater than the length scale of individual assembling building blocks, can be hierarchically assembled. For instance, bilayer sheets formed by laterally tethered rods fold into spiral scrolls and helical structures, which are able to adopt different morphologies depending on the environmental condition. We find that a square grid structure formed by laterally tethered nanorods can be transformed into a bilayer sheet structure, and vice versa, upon shortening, or lengthening, the rod segments, respectively. From these inspiring results, we propose a general scheme by which shape-shifting particles are employed to induce the reconfiguration of pre-assembled structures. Finally, we investigate the role of an external field in assisting the formation of assembled structures that would

  20. Process-directed self-assembly of block copolymers: a computer simulation study

    International Nuclear Information System (INIS)

    The free-energy landscape of self-assembling block copolymer systems is characterized by a multitude of metastable minima and concomitant protracted relaxation times of the morphology. Tailoring rapid changes (quench) of thermodynamic conditions, one can reproducibly trap the ensuing kinetics of self-assembly in a specific metastable state. To this end, it is necessary to (1) control the generation of well-defined, highly unstable states and (2) design the unstable state such that the ensuing spontaneous kinetics of structure formation reaches the desired metastable morphology. This process-directed self-assembly provides an alternative to fine-tuning molecular architecture by synthesis or blending, for instance, in order to fabricate complex network structures. Comparing our simulation results to recently developed free-energy techniques, we highlight the importance of non-equilibrium molecular conformations in the starting state and motivate the significance of the local conservation of density. (paper)

  1. Exploring Programmable Self-Assembly in Non-DNA based Molecular Computing

    CERN Document Server

    Terrazas, German; Krasnogor, Natalio

    2013-01-01

    Self-assembly is a phenomenon observed in nature at all scales, where autonomous entities build complex structures without external influence or a centralised master plan. Modelling such entities and programming correct interactions among them is crucial for controlling the manufacture of desired complex structures at the molecular and supramolecular scale. This work focuses on a programmability model for non-DNA-based molecules and complex behaviour analysis of their self-assembled conformations. In particular, we look into modelling, programming and simulation of porphyrin molecule self-assembly and apply Kolmogorov complexity-based techniques to classify and assess simulation results in terms of information content. The analysis focuses on phase transition, clustering, variability and parameter discovery, which as a whole pave the way to the notion of complex systems programmability.

  2. Process-directed self-assembly of block copolymers: a computer simulation study

    Science.gov (United States)

    Müller, Marcus; Sun, De-Wen

    2015-05-01

    The free-energy landscape of self-assembling block copolymer systems is characterized by a multitude of metastable minima and concomitant protracted relaxation times of the morphology. Tailoring rapid changes (quench) of thermodynamic conditions, one can reproducibly trap the ensuing kinetics of self-assembly in a specific metastable state. To this end, it is necessary to (1) control the generation of well-defined, highly unstable states and (2) design the unstable state such that the ensuing spontaneous kinetics of structure formation reaches the desired metastable morphology. This process-directed self-assembly provides an alternative to fine-tuning molecular architecture by synthesis or blending, for instance, in order to fabricate complex network structures. Comparing our simulation results to recently developed free-energy techniques, we highlight the importance of non-equilibrium molecular conformations in the starting state and motivate the significance of the local conservation of density.

  3. The procedure of computational evaluation of the margin to CHF in new generation WWER fuel assemblies

    International Nuclear Information System (INIS)

    A modified and upgraded empirical procedure of the Institute for Physics and Power Engineering (SSC RF-IPPE) is presented. It is applicable to enhancer grids (EG) of different types located at arbitrary positions along the length of a fuel assembly (FA), which allows its application for optimizing the FA and EG designs. (author)

  4. VHTRC temperature coefficient benchmark problem

    International Nuclear Information System (INIS)

    As an activity of the IAEA Coordinated Research Programme, a benchmark problem is proposed for the verification of neutronic calculation codes for a low-enriched-uranium-fuelled high-temperature gas-cooled reactor. Two problems are given on the basis of heating experiments at the VHTRC, which is a pin-in-block type critical assembly loaded mainly with 4% enriched uranium coated particle fuel. One problem, VH1-HP, asks for the temperature coefficient of reactivity to be calculated from the subcritical reactivity values at five temperature steps between room temperature, where the assembly is nearly critical, and 200°C. The other problem, VH1-HC, asks for the effective multiplication factor of nearly critical loading cores at room temperature and 200°C. Both problems further ask for cell parameters such as the migration area and spectral indices. Experimental results corresponding to the main calculation items are also listed for comparison. (author)

  5. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  6. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
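
    Two of the performance metrics named above can be sketched directly. The code below illustrates a centred root mean square error and a linear-trend error on synthetic series; it is not the HOME project's evaluation code.

```python
import numpy as np

def centred_rmse(homogenized, truth):
    """RMSE after removing each series' own mean, so only variability errors count."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def trend_error(homogenized, truth):
    """Difference of least-squares linear trends (per time step)."""
    x = np.arange(len(truth))
    return float(np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0])

# Synthetic example: a "true" series and a noisy stand-in for a homogenized series.
truth = np.sin(np.linspace(0.0, 6.0, 120)) + 0.01 * np.arange(120)
homog = truth + np.random.default_rng(1).normal(0.0, 0.1, 120)
print(centred_rmse(homog, truth), trend_error(homog, truth))
```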

  7. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  8. Simulation benchmarks for low-pressure plasmas: capacitive discharges

    CERN Document Server

    Turner, M M; Donko, Z; Eremin, D; Kelly, S J; Lafleur, T; Mussenbrock, T

    2012-01-01

    Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of relevant errors, and which can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes. We show that the results of these simulations are statistically indistinguishable, within bounds of uncertainty that we define. We therefore claim that the results of these simulations represent strong benchmarks, that can be used as a basis for evaluating the accuracy of other codes. These other codes could include other approaches than particle-in-cell simulations, where benchmarking could exa...

  9. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    International Nuclear Information System (INIS)

    In December 2005, the International Atomic Energy Agency (IAEA) has started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to-date for some of the CRP benchmarks. (authors)

  10. Fuel assemblies mechanical behaviour improvements based on design changes and loading patterns computational analyses

    International Nuclear Information System (INIS)

    In the past few years, incomplete RCCA insertion events (IRI) have been taking place at some nuclear plants. Large guide thimble distortion, caused by high compressive loads together with irradiation-induced material creep and growth, is considered the primary cause of those events. This disturbing phenomenon is worsened when some fuel assemblies are deformed to the extent that they push the neighbouring fuel assemblies and the distortion is transmitted along the core. In order to better understand this mechanism, ENUSA has developed a methodology based on finite element core simulation to enable assessments of the propensity of a given core loading pattern to propagate distortion along the core. At the same time, the core loading pattern can be decided in interaction with nuclear design to obtain the optimum response from both the nuclear and the mechanical points of view, with the objective of progressively attenuating the core distortion. (author)

  11. DNASynth: A Computer Program for Assembly of Artificial Gene Parts in Decreasing Temperature

    OpenAIRE

    Nowak, Robert M; Anna Wojtowicz-Krawiec; Andrzej Plucienniczak

    2015-01-01

    Artificial gene synthesis requires consideration of nucleotide sequence development as well as long DNA molecule assembly protocols. The nucleotide sequence of the molecule must meet many conditions, including the particular preferences of the host organism for certain codons, avoidance of specific regulatory subsequences, and a lack of secondary structures that inhibit expression. The chemical synthesis of DNA molecules has limitations in terms of strand length; thus, the creation of artificial ge...
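
    The sequence-design constraints mentioned above can be illustrated with a toy back-translation step. The sketch below picks host-preferred codons while avoiding forbidden subsequences; the codon table and forbidden site are hypothetical and this is not DNASynth.

```python
# Hypothetical codon-preference table (ordered by host preference) and forbidden site.
PREFERRED = {"M": ["ATG"], "K": ["AAA", "AAG"], "L": ["CTG", "TTA"], "*": ["TAA"]}
FORBIDDEN = ["GAATTC"]      # e.g. a restriction site to keep out of the design

def back_translate(protein):
    """Choose, per residue, the most preferred codon that keeps forbidden sites out."""
    dna = ""
    for aa in protein:
        for codon in PREFERRED[aa]:
            if not any(site in dna + codon for site in FORBIDDEN):
                dna += codon
                break
        else:
            raise ValueError(f"no admissible codon for {aa!r}")
    return dna

print(back_translate("MKL*"))   # toy peptide: Met-Lys-Leu-stop
```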

  12. Structural, nanomechanical, and computational characterization of D,L-cyclic peptide assemblies.

    Science.gov (United States)

    Rubin, Daniel J; Amini, Shahrouz; Zhou, Feng; Su, Haibin; Miserez, Ali; Joshi, Neel S

    2015-03-24

    The rigid geometry and tunable chemistry of D,L-cyclic peptides makes them an intriguing building-block for the rational design of nano- and microscale hierarchically structured materials. Herein, we utilize a combination of electron microscopy, nanomechanical characterization including depth sensing-based bending experiments, and molecular modeling methods to obtain the structural and mechanical characteristics of cyclo-[(Gln-D-Leu)4] (QL4) assemblies. QL4 monomers assemble to form large, rod-like structures with diameters up to 2 μm and lengths of tens to hundreds of micrometers. Image analysis suggests that large assemblies are hierarchically organized from individual tubes that undergo bundling to form larger structures. With an elastic modulus of 11.3 ± 3.3 GPa, hardness of 387 ± 136 MPa and strength (bending) of 98 ± 19 MPa the peptide crystals are among the most robust known proteinaceous micro- and nanofibers. The measured bending modulus of micron-scale fibrils (10.5 ± 0.9 GPa) is in the same range as the Young's modulus measured by nanoindentation indicating that the robust nanoscale network from which the assembly derives its properties is preserved at larger length-scales. Materials selection charts are used to demonstrate the particularly robust properties of QL4 including its specific flexural modulus in which it outperforms a number of biological proteinaceous and nonproteinaceous materials including collagen and enamel. The facile synthesis, high modulus, and low density of QL4 fibers indicate that they may find utility as a filler material in a variety of high efficiency, biocompatible composite materials. PMID:25757883

  13. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries...... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes....

  14. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...

  15. Risk Management with Benchmarking

    OpenAIRE

    Suleyman Basak; Alex Shapiro; Lucie Teplá

    2005-01-01

    Portfolio theory must address the fact that, in reality, portfolio managers are evaluated relative to a benchmark, and therefore adopt risk management practices to account for the benchmark performance. We capture this risk management consideration by allowing a prespecified shortfall from a target benchmark-linked return, consistent with growing interest in such practice. In a dynamic setting, we demonstrate how a risk-averse portfolio manager optimally under- or overperforms a target benchm...

  16. Organic molecules deposited on graphene: A computational investigation of self-assembly and electronic structure

    International Nuclear Information System (INIS)

    We use ab initio simulations to investigate the adsorption and the self-assembly processes of tetracyanoquinodimethane (TCNQ), tetrafluoro-tetracyanoquinodimethane (F4-TCNQ), and tetrasodium 1,3,6,8-pyrenetetrasulfonic acid (TPA) on the graphene surface. We find that there are no chemical bonds at the molecule–graphene interface, even in the presence of grain boundaries on the graphene surface. The molecules bond to graphene through van der Waals interactions. In addition to the molecule–graphene interaction, we performed a detailed study of the role played by the (lateral) molecule–molecule interaction in the formation of the experimentally verified self-assembled layers of TCNQ and TPA on graphene. Regarding the electronic properties, we calculate the electronic charge transfer from the graphene sheet to the TCNQ and F4-TCNQ molecules, leading to p-doping of graphene. Meanwhile, such charge transfer is reduced by an order of magnitude for TPA molecules on graphene. In this case, a significant doping process is not expected upon the formation of a self-assembled layer of TPA molecules on the graphene sheet

  17. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  18. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  19. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as...... important; (2) will, that activists and issue entrepreneurs will carry the message forward; and (3) expertise, that benchmarks created can be defended as accurate representations of what is happening on the issue of concern. We contrast two types of benchmarking cycles where salience, will, and expertise...

  20. ''FULL-CORE'' VVER-440 calculation benchmark

    International Nuclear Information System (INIS)

    Because of the difficulties of experimentally validating the pin-by-pin power distribution predicted by macro-codes, we decided to prepare a calculation benchmark named ''FULL-CORE'' VVER-440. This benchmark is a two-dimensional (2D) calculation benchmark based on the VVER-440 reactor core cold-state geometry, taking into account the geometry of the explicit radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in fuel assemblies predicted by the macro-codes that are used for neutron-physics calculations, especially for VVER-440 reactors. The proposal of this benchmark was presented at the 21st Symposium of AER in 2011. The reference solution has been calculated with the MCNP code using the Monte Carlo method and the results have been published in the AER community. The results of the reference calculation were presented at the 22nd Symposium of AER in 2012. In this paper we compare the available macro-code results for this calculation benchmark.
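
    As a minimal illustration of the kind of comparison such a benchmark requires, the sketch below computes the relative pin-by-pin power deviation of a macro-code solution against a Monte Carlo reference map. The arrays and the normalization to unit mean power are hypothetical placeholders, not data from the benchmark.

      import numpy as np

      def compare_pin_powers(macro, reference):
          """Return relative deviations (%) and summary statistics for two pin power maps."""
          macro = np.asarray(macro, dtype=float)
          reference = np.asarray(reference, dtype=float)
          # Normalize each map to unit mean power so that shapes, not absolute levels, are compared.
          macro = macro / macro.mean()
          reference = reference / reference.mean()
          rel_dev = 100.0 * (macro - reference) / reference
          return rel_dev, {"rms_%": float(np.sqrt(np.mean(rel_dev ** 2))),
                           "max_abs_%": float(np.abs(rel_dev).max())}

      # Hypothetical 3x3 pin power maps (placeholder values, not benchmark data).
      reference = [[1.02, 1.05, 1.08], [0.98, 1.00, 1.03], [0.94, 0.96, 0.99]]
      macro     = [[1.03, 1.04, 1.10], [0.97, 1.00, 1.02], [0.95, 0.95, 1.00]]
      dev, stats = compare_pin_powers(macro, reference)
      print(stats)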

  1. A comparative study of methods to compute the free energy of an ordered assembly by molecular simulation

    Science.gov (United States)

    Moustafa, Sabry G.; Schultz, Andrew J.; Kofke, David A.

    2013-08-01

    We present a comparative study of methods to compute the absolute free energy of a crystalline assembly of hard particles by molecular simulation. We consider all combinations of three choices defining the methodology: (1) the reference system: Einstein crystal (EC), interacting harmonic (IH), or r^-12 soft spheres (SS); (2) the integration path: Frenkel-Ladd (FL) or penetrable ramp (PR); and (3) the free-energy method: overlap-sampling free-energy perturbation (OS) or thermodynamic integration (TI). We apply the methods to FCC hard spheres at the melting state. The study shows that, in the best cases, OS and TI are roughly equivalent in efficiency, with a slight advantage to TI. We also examine the multistate Bennett acceptance ratio method, and find that it offers no advantage for this particular application. The PR path shows advantage in general over FL, providing results of the same precision with 2-9 times less computation, depending on the choice of a common reference. The best combination for the FL path is TI+EC, which is how the FL method is usually implemented. For the PR path, the SS system (with either TI or OS) proves to be most effective; it gives equivalent precision to TI+FL+EC with about 6 times less computation (or 12 times less, if discounting the computational effort required to establish the SS reference free energy). Both the SS and IH references show great advantage in capturing finite-size effects, providing a variation in free-energy difference with system size that is about 10 times less than EC. This result further confirms previous work for soft-particle crystals, and suggests that free-energy calculations for a structured assembly be performed using a hybrid method, in which the finite-system free-energy difference is added to the extrapolated (1/N→0) absolute free energy of the reference system, to obtain a result that is nearly independent of system size.
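
    For readers unfamiliar with the TI ingredient of these method combinations, the sketch below shows the generic step: integrating an averaged derivative <dU/dλ> over the coupling parameter λ of a Frenkel-Ladd-type path by simple quadrature. The λ grid and the sampled values are hypothetical placeholders; a real calculation would obtain them from simulations of the hard-particle system coupled to its reference.

      import numpy as np

      def ti_free_energy_difference(lambdas, du_dlambda):
          """Thermodynamic integration: ΔF = ∫ <dU/dλ> dλ, evaluated here by trapezoidal quadrature."""
          lambdas = np.asarray(lambdas, dtype=float)
          du_dlambda = np.asarray(du_dlambda, dtype=float)
          return np.trapz(du_dlambda, lambdas)

      # Hypothetical ensemble averages of dU/dλ at a few λ points (placeholder values only).
      lambdas = np.linspace(0.0, 1.0, 11)
      du_dl = 5.0 * (1.0 - lambdas) ** 2   # stand-in for sampled <dU/dλ>
      delta_f = ti_free_energy_difference(lambdas, du_dl)
      print(f"Free-energy difference along the path: {delta_f:.4f} (arbitrary units)")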

  2. TRUMP-BD: A computer code for the analysis of nuclear fuel assemblies under severe accident conditions

    International Nuclear Information System (INIS)

    TRUMP-BD (Boil Down) is an extension of the TRUMP (Edwards 1972) computer program for the analysis of nuclear fuel assemblies under severe accident conditions. This extension allows prediction of the heat transfer rates, metal-water oxidation rates, fission product release rates, steam generation and consumption rates, and temperature distributions for nuclear fuel assemblies under core uncovery conditions. The heat transfer processes include conduction in solid structures, convection across fluid-solid boundaries, and radiation between interacting surfaces. Metal-water reaction kinetics are modeled with empirical relationships to predict the oxidation rates of steam-exposed Zircaloy and uranium metal. The metal-water oxidation models are parabolic in form with an Arrhenius temperature dependence. Uranium oxidation begins when fuel cladding failure occurs; Zircaloy oxidation occurs continuously at temperatures above 13000 degree F when metal and steam are available. From the metal-water reactions, the hydrogen generation rate, total hydrogen release, and temporal and spatial distribution of oxide formations are computed. Consumption of steam from the oxidation reactions and the effect of hydrogen on the coolant properties are modeled for independent coolant flow channels. Fission product release from exposed uranium metal Zircaloy-clad fuel is modeled using empirical time and temperature relationships that consider the release to be subject to oxidation and volatilization/diffusion (''bake-out'') release mechanisms. Release of the volatile species of iodine (I), tellurium (Te), cesium (Cs), ruthenium (Ru), strontium (Sr), zirconium (Zr), cerium (Ce), and barium (Ba) from uranium metal fuel may be modeled
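
    The parabolic/Arrhenius oxidation model mentioned above has the generic form delta^2 = K(T)*t with K(T) = A*exp(-Q/(R*T)). The sketch below evaluates that form; the rate constants A and Q are hypothetical placeholders, not the correlations actually coded in TRUMP-BD.

      import math

      R = 8.314  # J/(mol*K), universal gas constant

      def oxide_thickness(time_s, temp_K, A=1.0e-6, Q=1.5e5):
          """Parabolic oxidation law: delta^2 = K*t, with Arrhenius rate K(T) = A*exp(-Q/(R*T)).
          A [m^2/s] and Q [J/mol] are illustrative placeholder values."""
          K = A * math.exp(-Q / (R * temp_K))
          return math.sqrt(K * time_s)

      # Example: oxide layer growth after 10 minutes at two cladding temperatures.
      for T in (1273.0, 1473.0):
          print(f"T = {T:.0f} K  oxide thickness = {oxide_thickness(600.0, T):.3e} m")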

  3. Computed isotopic inventory and dose assessment for SRS fuel and target assemblies

    International Nuclear Information System (INIS)

    Past studies have identified and evaluated important radionuclide contributors to dose from reprocessed spent fuel sent to waste for Mark 16B and 22 fuel assemblies and for Mark 31A and 31B target assemblies. Fission-product distributions after a 5- and 15-year decay time were calculated for a ''representative'' set of irradiation conditions (i.e., reactor power, irradiation time, and exposure) for each type of assembly. The numerical calculations were performed using the SHIELD/GLASS system of codes. The sludge and supernate source terms for dose were studied separately, with the significant radionuclide contributors for each identified and evaluated. Dose analysis considered both inhalation and ingestion pathways: the inhalation pathway was analyzed for both evaporative and volatile releases. Analysis of evaporative releases utilized release fractions for the individual radionuclides as defined in ICRP-30, in accordance with DOE guidance. A release fraction of unity was assumed for each radionuclide under volatile-type releases, which would encompass internally initiated events (e.g., fires, explosions), process-initiated events, and externally initiated events. Radionuclides which contributed at least 1% to the overall dose were designated as significant contributors. The present analysis extends and complements the past analyses by considering a broader spectrum of fuel types and a wider range of irradiation conditions. The results provide for a more thorough understanding of the influences of fuel composition and irradiation parameters on fission product distributions (at decay times of 2 years or more). Additionally, the present work allows for a more comprehensive evaluation of radionuclide contributions to dose and an estimation of the variability in the radionuclide composition of the dose source term that results from the spent fuel sent to waste encompassing a broad spectrum of fuel compositions and irradiation conditions

  4. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group]

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
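
    A trivial example of the verification idea described here, sketched under assumed one-group infinite-medium cross sections (placeholder numbers, not one of the problems in the suite): the analytic k-infinity, nu*Sigma_f/Sigma_a, is compared against a code-reported value and expressed as a difference in pcm.

      def k_infinity(nu_sigma_f, sigma_a):
          """One-group infinite-medium multiplication factor: k_inf = nu*Sigma_f / Sigma_a."""
          return nu_sigma_f / sigma_a

      # Hypothetical one-group constants and a hypothetical code result (placeholders only).
      analytic = k_infinity(nu_sigma_f=0.0075, sigma_a=0.0060)   # = 1.25 exactly
      code_result = 1.24998                                      # stand-in for a Monte Carlo mean
      difference_pcm = 1.0e5 * (code_result - analytic) / analytic
      print(f"analytic k_inf = {analytic:.5f}, code = {code_result:.5f}, "
            f"difference = {difference_pcm:.1f} pcm")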

  5. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. It is conceptually complicated to benchmark the vocational schools. The schools offer a wide range of different programmes. This makes it difficult to...

  6. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  7. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  8. Enlargement of the Assessment Database for Advanced Computer Codes in Relation to the VVER Technology: Benchmark on LB-LOCA Transient in PSB-VVER Facility

    International Nuclear Information System (INIS)

    The OECD/NEA PSB-VVER project provided unique and useful experimental data from the large-scale PSB-VVER test facility for code validation. This facility is a scaled-down representation of the Russian-designed PWR, the VVER-1000. Five experiments were executed in the project, dealing mainly with loss-of-coolant scenarios (small, intermediate, and large break loss-of-coolant accidents), a primary-to-secondary leak, and a parametric study (natural circulation test) aimed at characterizing the VVER system at reduced mass inventory conditions. The comparative analysis described in the paper deals with the analytical exercise on the large break loss-of-coolant accident experiment (Test 5). Four participants from three different institutions were involved in the benchmark and applied their own analytical models, set up for four different thermal-hydraulic system codes. The benchmark demonstrated that almost all of the post-test calculations were qualified against the fixed criteria. The few mismatches between the results and the acceptability thresholds are discussed and understood. The analysis involves the relevant features of the input models developed, the steady state conditions and the results of the simulations. The results submitted by the participants are discussed in the paper considering the resulting sequence of main events, the qualitative comparison of selected time trends, the analysis of the relevant thermal-hydraulic aspects and, finally, the application of the Fast Fourier Transform based method.(author).
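
    A minimal sketch of the average-amplitude figure of merit used by the Fast Fourier Transform based method (FFTBM) mentioned above is given below, assuming two sampled time trends of equal length; the synthetic experimental and calculated signals are placeholders, not data from Test 5.

      import numpy as np

      def fftbm_average_amplitude(exp, calc):
          """FFTBM average amplitude: AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
          Lower AA indicates better agreement between calculation and experiment."""
          exp = np.asarray(exp, dtype=float)
          calc = np.asarray(calc, dtype=float)
          error_spectrum = np.abs(np.fft.rfft(calc - exp))
          exp_spectrum = np.abs(np.fft.rfft(exp))
          return float(error_spectrum.sum() / exp_spectrum.sum())

      # Hypothetical time trends (placeholders for, e.g., an experimental and a calculated pressure).
      t = np.linspace(0.0, 100.0, 512)
      experimental = 10.0 * np.exp(-t / 40.0)
      calculated = 10.2 * np.exp(-t / 38.0)
      print(f"AA = {fftbm_average_amplitude(experimental, calculated):.3f}")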

  9. BN-600 hybrid core benchmark analyses (phases 1, 2 and 3) (draft synthesis report)

    International Nuclear Information System (INIS)

    This report presents the results of benchmark analyses for a hybrid UOX/MOX fuelled core of the BN-600 reactor. This benchmark was proposed during the first Research Co-ordination Meeting (RCM) of the Co-ordinated Research Project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects, which took place in Vienna on 24 - 26 November 1999. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, with the aim of enhancing the utilization of plutonium and minor actinides. There has been no change in the view that energy production with breeding of fissile materials is the main goal of fast reactor development to ensure long-term fuel supply. However, with low-cost uranium increasingly available from the 1980s onwards, and before the breeding role of fast reactors could be recognized economically, the emphasis of fast reactor development shifted to incineration of stockpiled plutonium and to partitioning and transmutation (P and T) of nuclear wastes to meet contemporary demands. Following a proposal of the Russian Federation at the 32nd Annual Meeting of the International Working Group on Fast Reactors (IWG-FR), held in May 1999, a hybrid UOX/MOX (mixed oxide) fuelled BN-600 reactor core that has a combination of highly enriched uranium (HEU) and mixed oxide (MOX) assemblies in the core region was chosen as a calculational model. Hence the benchmark clearly addresses the issue of using weapons-grade plutonium for energy production in a mixed UOX/MOX fuelled core of the BN-600 reactor. The input data for the benchmark neutronics calculations have been prepared by OKBM and IPPE (Russia). The input data have been reviewed and modified in the first RCM of this CRP. The organizations participating in the BN-600 hybrid core benchmark analyses are: ANL from the USA, CEA and SA (its previous name was AEAT) from EU (France and the

  10. Computer simulation of thermal-hydraulics of MNSR fuel-channel assembly using LabView

    International Nuclear Information System (INIS)

    A LabVIEW simulator of thermal hydraulics has been developed to demonstrate the temperature profile of the coolant flow in the reactor core during normal operation. The simulator could equally be used for any transient behaviour of the reactor. Heat generation, heat transfer and the associated temperature profile in the fuel-channel elements, viz. the coolant, cladding and fuel, were studied, and the corresponding analytical temperature equations in the axial and radial directions for the coolant, the outer surface of the cladding, the fuel surface and the fuel center were obtained for the simulation using LabVIEW. Tables of values for the equations were constructed with the MATLAB and Excel software programs. Plots of the equations produced with LabVIEW were verified and validated against the graphs drawn with MATLAB. In this thesis, an analysis of the effects of a coolant inlet temperature of 24.5°C and an exit temperature of 70.0°C on the temperature distribution in the fuel-channel elements of the cylindrical reactor core was carried out. Other parameters, including the total fuel channel power, mass flow rate and convective heat transfer coefficient, were varied to study their effects on the temperature profile. The analytical temperature equations in the fuel channel elements of the reactor core were obtained. MATLAB and Excel were used to construct data for the equations. The MATLAB plots were used to benchmark the LabVIEW simulation. Excellent agreement was obtained between the MATLAB plots and the LabVIEW simulation results, with an error margin of 0.001. The analysis of the results, comparing the gradients of inlet temperature, total channel power and mass flow rate, indicated that the inlet temperature gradient is one of the key parameters determining the temperature profile in the MNSR core. (au)
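
    A minimal sketch of the axial coolant energy balance underlying such temperature profiles is given below, assuming a cosine-shaped linear heat rate over the heated length; the channel dimensions, power, flow rate and inlet temperature are placeholder values, not MNSR data.

      import math

      def coolant_temperature(z, H, q0_prime, m_dot, cp, T_in):
          """Axial coolant temperature for a cosine linear heat rate q'(z) = q0'*cos(pi*z/H),
          with z measured from the channel mid-plane, -H/2 <= z <= H/2:
              T(z) = T_in + q0'*H/(pi*m_dot*cp) * (1 + sin(pi*z/H))"""
          return T_in + q0_prime * H / (math.pi * m_dot * cp) * (1.0 + math.sin(math.pi * z / H))

      # Placeholder channel parameters (illustrative only): heated length [m], linear heat rate
      # [W/m], mass flow rate [kg/s], specific heat [J/(kg*K)], inlet temperature [C].
      H, q0_prime, m_dot, cp, T_in = 0.23, 600.0, 0.0006, 4180.0, 24.5
      for z in (-H / 2, 0.0, H / 2):
          print(f"z = {z:+.3f} m  T = {coolant_temperature(z, H, q0_prime, m_dot, cp, T_in):.1f} C")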

  11. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    Energy Technology Data Exchange (ETDEWEB)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960's. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  12. Benchmark field study of deep neutron penetration

    Science.gov (United States)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  13. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  14. Computational study of cellular assembly on hydrophobic/hydrophilic micro-patterns

    Czech Academy of Sciences Publication Activity Database

    Ukraintsev, Egor; Brož, A.; Kalbáčová, M.H.; Kromka, Alexander; Rezek, Bohuslav

    Ostrava : Tanger, 2014. ISBN 978-80-87294-55-0. [International Conference NANOCON /6./. Brno (CZ), 05.11.2014-07.11.2014] R&D Projects: GA ČR GAP108/12/0996 Institutional support: RVO:68378271 Keywords: SAOS-2 cells * cell movement * adhesion * stochastic computer simulations Subject RIV: BM - Solid Matter Physics; Magnetism

  15. Establishment of consistent benchmark framework for performing high-fidelity whole core transport/diffusion calculations

    International Nuclear Information System (INIS)

    This paper presents a benchmark framework established as a basis for investigating the validity of the multi-group approximation with respect to the continuous-energy approach, the level of spatial homogenization with respect to the heterogeneous solution, and the level of angular approximation to the linear Boltzmann transport equation with respect to the Monte Carlo reference solution. Several steady-state solutions of this benchmark have been generated using three different computer codes focusing on the two-dimensional (2-D) geometry model. MCNP5 has been used to generate the reference solution using the continuous energy library. HELIOS is then used both to solve the problem using a 45-group cross-section library and to generate new sets of few-group cross-sections for the core simulator NEM. The results from the diffusion option of the NEM code on a pin-by-pin and Fuel Assembly (FA) basis are presented and discussed in the paper. The benchmark is designed for evaluating the energy representation (number of energy groups and energy cut-off points) and the spatial representation (homogenized assembly level vs. homogenized pin-cell level) needed for high-fidelity reactor core calculation schemes developed at the Pennsylvania State Univ., such as NEM SP3, hybrid NEM-BEM and some recent developments of embedded three-dimensional pin-by-pin diffusion / SP3 finite element calculation schemes. (authors)

  16. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  17. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  18. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
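
    The first-tier screening step described above reduces to a simple comparison of measured media concentrations against the NOAEL-based benchmarks; a minimal sketch follows, with all concentrations and benchmark values as hypothetical placeholders.

      def screen_contaminants(measured, benchmarks):
          """Tier-1 screen: retain a chemical as a COPC if its measured concentration
          exceeds its NOAEL-based benchmark; otherwise exclude it from further consideration."""
          return {chem: ("retain as COPC" if conc > benchmarks[chem] else "exclude")
                  for chem, conc in measured.items() if chem in benchmarks}

      # Hypothetical concentrations and benchmarks (mg/kg, placeholder values only).
      measured   = {"cadmium": 1.8, "zinc": 40.0, "lead": 12.0}
      benchmarks = {"cadmium": 1.0, "zinc": 120.0, "lead": 16.0}
      print(screen_contaminants(measured, benchmarks))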

  19. Solution of the international benchmark with trip of one of four reactor coolant pumps for VVER-1000 reactor plants using the computer code package KORSAR/GP and complex reactor nodalization

    International Nuclear Information System (INIS)

    The International OECD/NEA test benchmark for the trip of one of four operating reactor coolant pumps (RCPs) was solved using the thermohydraulic code package KORSAR/GP. KORSAR/GP applies 1D calculational units. This benchmark was based on experimental results obtained during commissioning of Kalinin NPP, Unit 3. During the experiments a large amount of experimental data was obtained that enabled us to supplement the validation of the computer codes and nodalizations of 1D thermohydraulic codes. In the given transient there was a difference between coolant temperatures in different loops, which made it necessary to numerically simulate the coolant mixing in the reactor plenums. To solve this problem, a complex branched nodalization (i.e. a set of code calculational units) was used. The analysis results matched the experimental data closely. Thus it was shown that the nodalization developed with the use of KORSAR/GP, and the code itself, can be applied to the simulation of VVER-1000 transients with one or more RCPs in operation and a sharp difference between the coolant temperatures in the loops. (author)

  20. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    International Nuclear Information System (INIS)

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424–7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20–30%) extent of Hartree–Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO–LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. (paper)
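
    The functional-screening logic described here amounts to ranking methods by their deviation from measured absorption maxima; the sketch below computes mean absolute errors for a few methods, with every excitation energy being a made-up placeholder rather than a value from the study.

      def mean_absolute_error(predicted, experimental):
          """MAE (eV) of predicted excitation energies against experimental absorption maxima."""
          return sum(abs(p - e) for p, e in zip(predicted, experimental)) / len(experimental)

      # Placeholder excitation energies (eV) for three molecules; not values from the paper.
      experimental = [2.10, 2.45, 2.80]
      predictions = {
          "MPW1K":     [2.14, 2.41, 2.86],
          "B3LYP":     [1.95, 2.28, 2.62],
          "CAM-B3LYP": [2.22, 2.55, 2.93],
      }
      for functional, values in sorted(predictions.items(),
                                       key=lambda kv: mean_absolute_error(kv[1], experimental)):
          print(f"{functional:10s} MAE = {mean_absolute_error(values, experimental):.3f} eV")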

  1. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    Science.gov (United States)

    Tortorella, Sara; Mastropasqua Talamo, Maurizio; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-01

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.

  2. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    Science.gov (United States)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. PMID:26808717

  3. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process, and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art

  4. Computational analysis of mixing properties of mixing grids in WWER fuel assemblies

    International Nuclear Information System (INIS)

    The results of numerical calculations of the impact on the flow of the mixing grid designed by NCCP are presented. The phase mixing in three directions is discussed: from the fuel surface to the flow, between fuel assembly subchannels, and around the fuel rod in the azimuthal direction. The mixing grids were shown to affect the steam generated on the fuel rod in two ways: the carry-over of steam across the fuel surface with varying velocity at the grid location, and the steam flowing along the fuel surface with its injection into the flow at flow breakaway. With the grids in place, the fuel rods are divided into two groups according to flow pattern, with a beneficial effect produced by the grid on only one group. The steam flow along the fuel surface results in azimuthal non-uniformity, and in the case of a compact group of hot fuel rods the margin to CHF is reduced. A comparison with the results obtained for a plate-type mixing grid is presented. (authors)

  5. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with NJOY97.45 code in the ACE format for the continuous-energy Monte Carlo Code MCNP4C. The Keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using MCNP4C code for benchmark assembly, and the comparisons with experimental results are given. (author)

  6. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  8. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  9. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  10. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...

  11. Benchmark problems and results for verifying resonance calculation methodologies

    International Nuclear Information System (INIS)

    Resonance calculation is one of the most important procedures in multi-group neutron transport calculations. With the development of nuclear reactor concepts, many new types of fuel assembly have been proposed. Compared to the traditional designs, most of the new fuel assemblies have different fuel types, either with complex isotopic compositions or with complicated geometry. This renders the traditional resonance calculation methods invalid. Recently, many advanced resonance calculation methods have been proposed. However, there are few benchmark problems for evaluating those methods in a comprehensive comparison. In this paper, we design 5 groups of benchmark problems including 21 typical cases of different geometries and fuel contents. The reference results of the benchmark problems are generated based on the sub-group method, the ultra-fine group method, the function expansion method and the Monte Carlo method. It is shown that those benchmark problems and their results could be helpful for evaluating the validity of newly developed resonance calculation methods in future work. (authors)

  12. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL]; Grove, Robert E. [ORNL]; Kodeli, I. [International Atomic Energy Agency (IAEA)]; Sartori, Enrico [ORNL]; Gulliford, J. [OECD Nuclear Energy Agency]

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  13. Shielding Integral Benchmark Archive and Database (SINBAD)

    International Nuclear Information System (INIS)

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  14. Parallel molecular computation of modular-multiplication with two same inputs over finite field GF(2^n) using self-assembly of DNA tiles.

    Science.gov (United States)

    Li, Yongnan; Xiao, Limin; Ruan, Li

    2014-06-01

    Two major advantages of DNA computing - huge memory capacity and high parallelism - are being explored for large-scale parallel computing, mass data storage and cryptography. The tile assembly model is a highly distributed parallel model of DNA computing. The finite field GF(2^n) is one of the most commonly used mathematical sets for constructing public-key cryptosystems. It is still an open question how to implement the basic operations over the finite field GF(2^n) using DNA tiles. This paper proposes how the parallel tile assembly process could be used for computing the modular square, i.e. modular multiplication with two identical inputs, over the finite field GF(2^n). This system obtains the final result in fewer steps than the molecular computing system designed in our previous study, because the square and the reduction are executed simultaneously, whereas the previous system computes the reduction after calculating the square. Rigorous theoretical proofs are described and a specific computing instance is given after defining the basic tiles and the assembly rules. The time complexity of this system is 3n-1 and the space complexity is 2n^2. PMID:24534382
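
    For readers unfamiliar with the underlying arithmetic, the sketch below performs the same operation in ordinary software: carry-less (polynomial) multiplication over GF(2) followed by reduction modulo an irreducible polynomial, using the AES field GF(2^8) with modulus x^8 + x^4 + x^3 + x + 1 as an example. It illustrates the mathematics only, not the tile-assembly encoding.

      def gf2n_multiply(a, b, modulus, n):
          """Multiply two elements of GF(2^n), represented as integers whose bits are
          polynomial coefficients; 'modulus' is the irreducible polynomial of degree n."""
          # Carry-less multiplication: XOR-accumulate shifted copies of a.
          product = 0
          while b:
              if b & 1:
                  product ^= a
              a <<= 1
              b >>= 1
          # Reduction modulo the irreducible polynomial.
          for shift in range(2 * n - 2, n - 1, -1):
              if product & (1 << shift):
                  product ^= modulus << (shift - n)
          return product

      def gf2n_square(a, modulus, n):
          """Modular square = modular multiplication with two identical inputs."""
          return gf2n_multiply(a, a, modulus, n)

      # Example in GF(2^8) with the AES modulus x^8 + x^4 + x^3 + x + 1 (0x11B).
      print(hex(gf2n_multiply(0x57, 0x83, 0x11B, 8)))  # standard AES example: {57}*{83} = {c1}
      print(hex(gf2n_square(0x57, 0x11B, 8)))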

  15. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
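
    The level-set expansion kernel referred to above is essentially a breadth-first frontier sweep; an in-memory sketch is shown below. The out-of-core benchmark streams the adjacency data from storage instead, and the small graph here is only a placeholder.

      from collections import deque

      def level_set_expansion(adjacency, seeds):
          """Return lists of vertices reached at each BFS level (level set) from the seed set."""
          visited = set(seeds)
          frontier = deque(seeds)
          levels = [list(seeds)]
          while frontier:
              next_frontier = []
              for _ in range(len(frontier)):
                  v = frontier.popleft()
                  for w in adjacency.get(v, ()):
                      if w not in visited:
                          visited.add(w)
                          next_frontier.append(w)
              if next_frontier:
                  levels.append(next_frontier)
                  frontier.extend(next_frontier)
          return levels

      # Placeholder adjacency list standing in for a scale-free graph.
      graph = {0: [1, 2, 3], 1: [0, 4], 2: [0, 4, 5], 3: [0], 4: [1, 2, 6], 5: [2], 6: [4]}
      print(level_set_expansion(graph, [0]))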

  16. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  17. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    The term ''code package'' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a ''code package'' consists of written material--reports, instructions, flow charts, listings of data, and other useful material--and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package

  18. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  19. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  20. The IAEA Co-ordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator-driven Systems'

    International Nuclear Information System (INIS)

    Document in abstract form only. Full text of publication follows: Since December 2005, the International Atomic Energy Agency (IAEA) has been conducting the Co-ordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator-driven Systems' within the framework of the Technical Working Group on Fast Reactors (TWG-FR). The overall objective of the CRP is to increase the capability of interested member states in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilisation and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative subcritical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a subcritical core and an external neutron source [e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)]. The objective of these experimental programmes is to validate computational methods, to obtain high-energy nuclear data, to characterise the performance of subcritical assemblies driven by external sources, and to develop and improve techniques for subcriticality monitoring. With the CRP in its final year, the paper summarises, on behalf of all the participants, the status of work and preliminary CRP benchmark results. (authors)
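
    As a back-of-the-envelope illustration of the source-core coupling being studied, the sketch below applies the standard point relation for source multiplication in a subcritical core, M = 1/(1 - k_eff), and the corresponding reactivity in dollars; the k_eff values and the delayed-neutron fraction are placeholders, not CRP benchmark results.

      def source_multiplication(k_eff):
          """Total neutron multiplication of a source-driven subcritical core: M = 1/(1 - k_eff)."""
          if k_eff >= 1.0:
              raise ValueError("k_eff must be below 1 for a source-driven subcritical system")
          return 1.0 / (1.0 - k_eff)

      def reactivity_dollars(k_eff, beta_eff):
          """Reactivity rho = (k_eff - 1)/k_eff, expressed in dollars (units of beta_eff)."""
          return (k_eff - 1.0) / k_eff / beta_eff

      # Placeholder subcriticality levels and delayed-neutron fraction (illustrative only).
      beta_eff = 0.0065
      for k in (0.95, 0.97, 0.995):
          print(f"k_eff = {k:.3f}  M = {source_multiplication(k):6.1f}  "
                f"rho = {reactivity_dollars(k, beta_eff):6.2f} $")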

  1. Analyses of Weapons-Grade MOX VVER-1000 Neutronics Benchmarks: Pin-Cell Calculations with SCALE/SAS2H

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, R.J.

    2001-01-11

    A series of unit pin-cell benchmark problems has been analyzed related to the irradiation of mixed oxide fuel in VVER-1000s (water-water energetic reactors). One-dimensional, discrete-ordinates eigenvalue calculations of these benchmarks were performed at ORNL using the SAS2H control sequence module of the SCALE-4.3 computational code system, as part of the Fissile Materials Disposition Program (FMDP) of the US DOE. Calculations were also performed using the SCALE module CSAS to confirm the results. The 238 neutron energy group SCALE nuclear data library 238GROUPNDF5 (based on ENDF/B-V) was used for all calculations. The VVER-1000 pin-cell benchmark cases modeled with SAS2H included zero-burnup calculations for eight fuel material variants (from LEU UO2 to weapons-grade MOX) at five different reactor states, and three fuel depletion cases up to high burnup. Results of the SAS2H analyses of the VVER-1000 neutronics benchmarks are presented in this report. Good general agreement was obtained between the SAS2H results, the ORNL results using HELIOS-1.4 with ENDF/B-VI nuclear data, and the results from several Russian benchmark studies using the codes TVS-M, MCU-RFFI/A, and WIMS-ABBN. This SAS2H benchmark study is useful for the verification of HELIOS calculations, the HELIOS code being the principal computational tool at ORNL for physics studies of assembly design for weapons-grade plutonium disposition in Russian reactors.

  2. Solvent-driven symmetry of self-assembled nanocrystal superlattices-A computational study

    KAUST Repository

    Kaushik, Ananth P.

    2012-10-29

    The preference of experimentally realistic, 4-nm faceted nanocrystals (NCs), emulating Pb chalcogenide quantum dots, to spontaneously choose a crystal habit for NC superlattices (Face Centered Cubic (FCC) vs. Body Centered Cubic (BCC)) is investigated using molecular simulation approaches. Molecular dynamics simulations, using united atom force fields, are conducted to simulate systems comprised of cube-octahedral-shaped NCs covered by alkyl ligands, in the absence and presence of experimentally used solvents, toluene and hexane. System sizes on the 400,000-500,000-atom scale followed for nanoseconds are required for this computationally intensive study. The key questions addressed here concern the thermodynamic stability of the superlattice and its preference of symmetry, as we vary the ligand length of the chains, from 9 to 24 CH2 groups, and the choice of solvent. We find that hexane and toluene are "good" solvents for the NCs, which penetrate the ligand corona all the way to the NC surfaces. We determine the free energy difference between FCC and BCC NC superlattice symmetries to establish the system's preference for either geometry, as the ratio of the length of the ligand to the diameter of the NC is varied. We explain these preferences in terms of different mechanisms in play, whose relative strength determines the overall choice of geometry. © 2012 Wiley Periodicals, Inc.

  3. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinski, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
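
    To give a flavour of the kind of dynamic-loading stress Pynamic emulates, the sketch below generates and imports a configurable number of throwaway Python modules and times the import phase. It is only a loose stand-in: Pynamic itself builds genuine shared libraries (DLLs) and exercises the dynamic linker, which pure-Python modules do not.

      import importlib
      import sys
      import tempfile
      import time
      from pathlib import Path

      def generate_and_import(n_modules=200):
          """Create n_modules trivial modules in a temp dir, import them all, and time it."""
          workdir = Path(tempfile.mkdtemp(prefix="dynload_"))
          for i in range(n_modules):
              (workdir / f"dummy_mod_{i}.py").write_text(f"VALUE = {i}\n")
          sys.path.insert(0, str(workdir))
          importlib.invalidate_caches()
          start = time.perf_counter()
          modules = [importlib.import_module(f"dummy_mod_{i}") for i in range(n_modules)]
          elapsed = time.perf_counter() - start
          print(f"imported {len(modules)} modules in {elapsed:.3f} s")

      if __name__ == "__main__":
          generate_and_import()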

  4. 'MIDICORE' WWER-1000 core periphery power distribution benchmark proposal

    International Nuclear Information System (INIS)

    The MIDICORE benchmark is a 2D calculation benchmark based on the WWER-1000 reactor core cold-state geometry, taking into account the explicit geometry of the radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in selected fuel assemblies placed mainly at the periphery of the WWER-1000 core. The benchmark was motivated by a phenomenon observed in calculations of the 'first core loading' (composed entirely of fresh TVEL TVSA-T fuel assemblies) at Temelin NPP (a WWER-1000 core), where the maximum fuel pin power was found in peripheral fuel assemblies, in a pin at the assembly edge facing the core centre. The issue concerns not only the position at which this maximum occurs, but especially the relatively large difference in the computed pin power value when it is determined by codes based on a pin-by-pin diffusion finite-difference method on one side and by codes based on a nodal diffusion method with pin power reconstruction on the other side. Because fuel pin power is not directly measured by the core monitoring system, it was decided to propose a benchmark of this kind. In this contribution we define the MIDICORE benchmark and present preliminary reference Monte Carlo calculation results as well as preliminary MOBY-DICK macro code calculation results. (Authors)

  5. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    More than 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The benchmark calculations reported here are part of an ongoing multiyear, multiperson effort to benchmark version 4 of the MCNP code. MCNP is a general-purpose, three-dimensional, continuous-energy Monte Carlo neutron, photon, and electron transport code. It is used around the world for many applications including aerospace, oil-well logging, physics experiments, criticality safety, reactor analysis, medical imaging, defense applications, accelerator design, radiation hardening, radiation shielding, health physics, fusion research, and education. The first phase of the benchmark project consisted of analytic and photon problems. The second phase consists of the ENDF/B-V neutron problems reported in this paper and in more detail in the comprehensive report. A cooperative program being carried out at General Electric, San Jose, consists of light water reactor benchmark problems. A subsequent phase focusing on electron problems is planned

  6. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 4. Methods and Results for the MSLB NEA Benchmark Using SIMTRAN and RELAP-5

    International Nuclear Information System (INIS)

    . The neutronic constants are then nearly implicitly calculated in the next time step as a function of the extrapolated T-H variables (water density and water and fuel temperatures), where the limited half-step extrapolation prevents significant oscillations, allowing for larger time steps. For the MSLB Benchmark, the SIMTRAN code was extended to deal with axial subdivision of cross-section sets, including varying and moving boundaries, to allow for control rod continuous movement in axially subdivided zones/compositions. The synthetic two-group nodal discontinuity factors were generated by 2-D fine-mesh diffusion calculations of the different (15) core planes, with un-rodded and rodded configurations and for the initial, mid-transient, and final quasi-steady-state conditions, with axial buckling and local T-H conditions per node (quarter of assembly), obtained by iterating the 3-D and 2-D solutions that converge in two or three iterations. For the NEA/OECD MSLB benchmark, we have contributed results for exercise 2, the guided core transient analysis, using our full SIMTRAN code (with COBRA for the 3-D core T-H transient solution with given core inlet boundary conditions along the transient), and for exercise 3, the full system transient, using our reduced SIMTRAN code (without COBRA) coupled with RELAP-5, using the same code version and input deck for RELAP-5 as supplied by the Purdue-NRC group, which we fully acknowledge. This system model was validated by them for exercise 1 and for exercises 2 and 3 using their PARCS 3-D neutronic code. Our results for the steady states and the transients proposed for exercise 2 of the MSLB Benchmark, including a best-estimate scenario, with the physical control rod absorption XS sets, and a return-to-power scenario, with reduced control rod absorption XS sets, show small deviations from the mean results of other participants, especially for core average parameters, as will be fully documented in the final reports of the benchmark

  7. QUAST: quality assessment tool for genome assemblies

    OpenAIRE

    Gurevich, Alexey; Saveliev, Vladislav; Vyahhi, Nikolay; Tesler, Glenn

    2013-01-01

    Summary: Limitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST—a quality assessment too...
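
    As a hedged illustration of the kind of reference-free statistic that assembly evaluation tools such as QUAST report, the sketch below computes N50 from a list of contig lengths (the metric is standard; the function and the sample data are hypothetical, not taken from QUAST's code):

      def n50(contig_lengths):
          """Return N50: the length L such that contigs of length >= L
          cover at least half of the total assembly length."""
          lengths = sorted(contig_lengths, reverse=True)
          half_total = sum(lengths) / 2.0
          running = 0
          for length in lengths:
              running += length
              if running >= half_total:
                  return length
          return 0

      # Toy assembly of five contigs: total 300 bp, so N50 is 80.
      print(n50([100, 80, 60, 40, 20]))  # -> 80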

  8. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  9. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 5. Analyses of the OECD MSLB Benchmark with the Codes DYN3D and DYN3D/ATHLET

    International Nuclear Information System (INIS)

    The code DYN3D coupled with ATHLET was used for the analysis of the OECD Main-Steam-Line-Break (MSLB) Benchmark, which is based on real plant design and operational data of the TMI-1 pressurized water reactor (PWR). Like the codes RELAP or TRAC, ATHLET is a thermal-hydraulic system code with point or one-dimensional neutron kinetic models. ATHLET, developed by the Gesellschaft für Anlagen- und Reaktorsicherheit, is widely used in Germany for safety analyses of nuclear power plants. DYN3D consists of three-dimensional nodal kinetic models and a thermal-hydraulic part with parallel coolant channels of the reactor core. DYN3D was coupled with ATHLET for analyzing more complex transients with interactions between coolant flow conditions and core behavior. It can be applied to the whole spectrum of operational transients and accidents, from small and intermediate leaks to large breaks of coolant loops or steam lines at PWRs and boiling water reactors. The so-called external coupling is used for the benchmark, where the thermal hydraulics is split into two parts: DYN3D describes the thermal hydraulics of the core, while ATHLET models the coolant system. Three exercises of the benchmark were simulated: Exercise 1, point kinetics plant simulation (ATHLET); Exercise 2, coupled three-dimensional neutronics/core thermal-hydraulics evaluation of the core response for given core thermal-hydraulic boundary conditions (DYN3D); and Exercise 3, best-estimate coupled core-plant transient analysis (DYN3D/ATHLET). Considering the best-estimate cases (scenarios 1 of exercises 2 and 3), the reactor does not reach criticality after the reactor trip. Defining more serious tests for the codes, the efficiency of the control rods was decreased (scenarios 2 of exercises 2 and 3) to obtain recriticality during the transient. Besides the standard simulation given by the specification, modifications are introduced for sensitivity studies. The results presented here show (a) the influence of a reduced

  10. The Benchmark Beta, CAPM, and Pricing Anomalies.

    OpenAIRE

    Cheol S. Eun

    1994-01-01

    Recognizing that a part of the unobservable market portfolio is certainly observable, the author first reformulates the capital asset pricing model so that asset returns can be related to the 'benchmark' beta computed against a set of observable assets as well as the 'latent' beta computed against the remaining unobservable assets, and then shows that when the pricing effect of the latent beta is ignored, assets would appear to be systematically mispriced even if the capital asset pricing mode...
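
    As a hedged aid to reading the abstract (a generic two-beta decomposition consistent with the description above, not necessarily Eun's exact formulation), the pricing relation it describes can be sketched as:

      % Hypothetical sketch: B = portfolio of observable "benchmark" assets,
      % L = portfolio of the remaining unobservable assets. If B and L are
      % correlated, the betas come from a joint regression rather than simple
      % covariance/variance ratios.
      E[R_i] - R_f \;=\; \beta_i^{B}\bigl(E[R_B] - R_f\bigr) \;+\; \beta_i^{L}\bigl(E[R_L] - R_f\bigr)

    Dropping the latent term while it is nonzero makes assets appear systematically mispriced against the benchmark beta alone, which is the anomaly the abstract points to.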

  11. Numerical and computational aspects of the coupled three-dimensional core/ plant simulations: organization for economic cooperation and development/ U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-II. 8. Analysis of the OECD MSLB Benchmark Exercise III Using Coupled Codes RELAP5/PARCS and TRAC-M/PARCS

    International Nuclear Information System (INIS)

    The OECD Nuclear Science Committee has released a set of computational benchmark problems for calculation of reactivity transients in pressurized water reactors (PWR). A main steam line break (MSLB) transient based on the Three Mile Island (TMI-1) PWR was developed to assess the capability of coupled neutronics and thermal-hydraulics codes to analyze complex transients having coupled core-plant interactions. The PWR MSLB accident scenario is characterized by a rupture in one of the main steam lines of the secondary system, leading to a sudden overcooling of the corresponding primary loop water. The overcooled moderator represents a positive reactivity insertion, which must be overcome by the control rods. Best-estimate modeling of this event requires three-dimensional (3-D) spatial kinetics because of space-time variations of the core power distribution arising from the asymmetric cooling of the core and from the scram of the reactor with the highest worth rod stuck out of the core. The benchmark was split into three separate exercises: a plant system model with point reactor kinetics, a spatial kinetics model of the core with the plant response modeled with time-dependent core thermal-hydraulic boundary conditions, and a plant system model with spatial kinetics model of the core. Results presented here are only for Exercise III of the benchmark with the cross-section set that leads to a return to power (RTP) during the transient. The work utilized the U.S. Nuclear Regulatory Commission (NRC) best-estimate thermal-hydraulic codes RELAP5 and TRAC-M coupled with the NRC neutronics code PARCS. The codes are coupled using a general interface (GI) incorporated into PARCS, which allows for the coupling to any thermal-hydraulics code. The thermal-hydraulics and neutronics codes are executed as separate processes with inter-process communication made possible through the use of message-passing protocols in the Parallel Virtual Machine (PVM) package. The spatial coupling of

  12. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  13. Remote Sensing Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.

    Piscataway, NJ : IEEE Press, 2012, s. 1-4. ISBN 978-1-4673-4960-4. [IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS). Tsukuba Science City (JP), 11.11.2012] R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant ostatní: CESNET(CZ) 409/2011 Keywords : remote sensing * segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/mikes-remote sensing segmentation benchmark.pdf

  14. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute, Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The methodology incorporates a three-dimensional deterministic transport solution with flexible, arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed against measurements from three standard benchmark problems, the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2, and against flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  15. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  16. Assembler for de novo assembly of large genomes

    OpenAIRE

    Chu, Te-Chin; Lu, Chen-Hua; Liu, Tsunglin; Lee, Greg C.; Li, Wen-Hsiung; Shih, Arthur Chun-Chieh

    2013-01-01

    Assembling a large genome faces three challenges: assembly quality, computer memory requirement, and execution time. Our assembler, JR-Assembler, uses (a) a strategy that selects good seeds for contig construction, (b) an extension strategy that uses whole sequencing reads to increase the chance of jumping over repeats and to expedite extension, and (c) detection of misassemblies by remapping reads to assembled sequences. Compared with current assemblers, JR-Assembler achieves a better ov...

  17. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    Science.gov (United States)

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2016-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture

  18. Comparing neuromorphic solutions in action: implementing a bio-inspired solution to a benchmark classification task on three parallel-computing platforms

    Directory of Open Access Journals (Sweden)

    Alan eDiamond

    2016-01-01

    Full Text Available Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and neuromorphic algorithms are being developed. As they are maturing towards deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analogue Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model’s ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication

  19. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  20. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  1. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  2. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance, i.e. how well the firm performs in its actual market environment given the basic characteristics of the firm and its market (firm size, market power, etc.) that are expected to drive profitability. This complex and relative performance may be due to factors such as product innovation, management quality, and work organization; other factors can play a role even if they are not directly observed by the researcher. Managers critically need to continuously improve their firm's efficiency and effectiveness and to know the success factors and competitiveness determinants; this in turn determines which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  3. CCF benchmark test

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity to demonstrate and justify their interpretations of events and their methods and models for analyzing CCF. The participants of this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess, in a qualitative way, some 200 event reports on isolation valves. They were then to establish, quantitatively, the reliability parameters for CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference set of well-defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  4. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  5. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90, and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2, and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and we mention NAS's future plans for the NPB.

  6. Implementation of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  7. Analysis of VRML-Based Computer Assembly Experiments

    Institute of Scientific and Technical Information of China (English)

    叶龙妹

    2011-01-01

    VRML (Virtual Reality Modeling Language) is a three-dimensional scene modeling language for building simulated real-world scenes and fictional scenarios. In computer assembly laboratory courses, a shortage of equipment and the rapid turnover of computer components make hands-on experiments difficult and reduce their effectiveness. This paper therefore uses virtual reality technology to simulate the various hardware devices involved in computer assembly experiments and to build a virtual computer assembly laboratory. The paper gives an overview of the computer assembly experiment in a VRML environment, analyzes the virtual assembly process, and discusses the effects and benefits of virtual assembly experiments.

  8. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, based on four different applications of benchmarking. The regulation of utility companies is then discussed, after which...

  9. A thermo mechanical benchmark calculation of a hexagonal can in the BTI accident with INCA code

    International Nuclear Information System (INIS)

    The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA

  10. Texture Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    Los Alamitos : IEEE Press, 2008, s. 2933-2936. ISBN 978-1-4244-2174-9. [19th International Conference on Pattern Recognition. Tampa (US), 07.12.2008-11.12.2008] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594; GA ČR GA102/08/0593 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * image segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/haindl-texture segmentation benchmark.pdf

  11. Radiography benchmark 2014

    International Nuclear Information System (INIS)

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed

  12. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperfor...

  13. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method not suitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with possibility of collisions between grids and which is submitted to a seismic loading.

  14. Discussion of Computer Hardware Assembly and Maintenance Technology

    Institute of Scientific and Technical Information of China (English)

    马钊

    2015-01-01

    With the continuing development of society, the economy, science, and technology, the computer has become an indispensable tool for work and study. We therefore need to master some basic computer hardware assembly and maintenance techniques.

  15. Application of FORSS sensitivity and uncertainty methodology to fast reactor benchmark analysis

    Energy Technology Data Exchange (ETDEWEB)

    Weisbin, C.R.; Marable, J.H.; Lucius, J.L.; Oblow, E.M.; Mynatt, F.R.; Peelle, R.W.; Perey, F.G.

    1976-12-01

    FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions, and associated uncertainties. This paper presents the theory and code description as well as the first results of applying FORSS to fast reactor benchmarks. Specifically, for various assemblies and reactor performance parameters, the nuclear data sensitivities were computed by nuclide, reaction type, and energy. Comprehensive libraries of energy-dependent coefficients have been developed in a computer-retrievable format and released for distribution by RSIC and NNCSC. Uncertainties induced by nuclear data were quantified using preliminary, energy-dependent relative covariance matrices evaluated with ENDF/B-IV expectation values and processed for 238U(n,f), 238U(n,γ), 239Pu(n,f), and 239Pu(ν). Nuclear data accuracy requirements to meet specified performance criteria at minimum experimental cost were determined.
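
    As a hedged reminder of the standard first-order ("sandwich") uncertainty propagation that sensitivity/covariance analyses of this kind rely on (generic notation, not FORSS-specific symbols):

      % R : integral response (e.g., k_eff or another performance parameter)
      % S : vector of relative sensitivities, S_j = (x_j / R)(\partial R / \partial x_j)
      % C : relative covariance matrix of the nuclear data x
      \left(\frac{\Delta R}{R}\right)^{2} \;=\; S^{\mathsf{T}}\, C\, S

    The energy-dependent sensitivity coefficients and relative covariance matrices mentioned in the abstract play the roles of S and C in such an expression.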

  16. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed with computer codes developed in-house or obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. All these codes use the finite difference approximation to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc., done by codes developed in-house); transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); the cross section data library used is CV2M (25 group)

  17. Exploration and Research on Reform of the Computer Assembly Practice Course

    Institute of Scientific and Technical Information of China (English)

    聂幸

    2012-01-01

    The computer assembly practice course is a compulsory course for computer science students. Through the course, students gain a clear understanding of the internal structure of a computer and learn and master the complete steps of computer assembly. Starting from the actual teaching of computer assembly practice, and weighing the respective advantages and disadvantages of hands-on teaching and virtual-environment teaching, this paper introduces a virtual environment into practical teaching where appropriate. This preserves the characteristics of hands-on teaching while exploiting the advantages of the virtual environment, so that the two reach a relatively balanced state.

  18. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of the progress has been made possible by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  19. POLCA-T Neutron Kinetics Model Benchmarking

    OpenAIRE

    Kotchoubey, Jurij

    2015-01-01

    The demand for computational tools that are capable of reliably predicting the behavior of a nuclear reactor core in a variety of static and dynamic conditions inevitably requires a proper qualification of these tools for the intended purposes. One of the qualification methods is verification of the code in question, whereby the correctness of the applied model as well as its flawless implementation in the code are scrutinized. The present work is concerned with benchmarking as a ...

  20. Benchmarking spatial joins à la carte

    OpenAIRE

    Günther, Oliver; Oria, Vincent; Picouet, Philippe; Saglio, Jean-Marc; Scholl, Michel

    1997-01-01

    Spatial joins are join operations that involve spatial data types and operators. Spatial access methods are often used to speed up the computation of spatial joins. This paper addresses the issue of benchmarking spatial join operations. For this purpose, we first present a WWW-based tool to produce sets of rectangles. Experimentators can use a standard Web browser to specify the number of rectangles, as well as the statistical distributions of their sizes, shapes, and locations. Second, using...
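
    As a hedged, minimal sketch of the kind of synthetic workload such a generator produces (the parameters, distributions, and function names below are illustrative, not those of the paper's WWW tool), one can draw axis-aligned rectangles with configurable size and location distributions and feed them to a naive nested-loop spatial join:

      import random

      def make_rectangles(n, max_coord=1000.0, mean_side=10.0, seed=0):
          """Generate n axis-aligned rectangles (xmin, ymin, xmax, ymax) with
          uniformly placed corners and exponentially distributed side lengths."""
          rng = random.Random(seed)
          rects = []
          for _ in range(n):
              x, y = rng.uniform(0, max_coord), rng.uniform(0, max_coord)
              w, h = rng.expovariate(1.0 / mean_side), rng.expovariate(1.0 / mean_side)
              rects.append((x, y, x + w, y + h))
          return rects

      def overlaps(a, b):
          return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

      def nested_loop_join(r_set, s_set):
          # Baseline join; spatial access methods (e.g. R-trees) would speed this up.
          return [(i, j) for i, r in enumerate(r_set)
                         for j, s in enumerate(s_set) if overlaps(r, s)]

      r_set = make_rectangles(500, seed=1)
      s_set = make_rectangles(500, seed=2)
      print(len(nested_loop_join(r_set, s_set)), "overlapping pairs")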

  1. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge about the dependence of the numerical errors on the discretization parameters (e.g. the size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to ICD/CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report

  2. Investigation of the PWR subchannel void distribution benchmark (OECD/NRC PSBT benchmark) using ANSYS CFX

    International Nuclear Information System (INIS)

    The presented CFD investigations using ANSYS CFX 13.0 are focused on the “Phase I - Void Distribution Benchmark, Exercise 1 - Steady-state Single Subchannel Benchmark” of the OECD/NRC PSBT benchmark. In this particular part of the benchmark, flow through a test section representing a central subchannel of a PWR fuel assembly under subcooled nucleate boiling conditions is investigated. The investigations using ANSYS CFX were carried out for 10 different test conditions (with respect to pressure, inlet fluid temperature, power and mass flow rate) from the PSBT test matrix. Emphasis was given to a CFD best practice guidelines oriented investigation of the subcooled nucleate boiling flow through the subchannel configuration of the test section. By comparing CFD results to the benchmark data, reasonably good agreement could be observed. Depending on the applied CFD submodels, the results differ from the measured data by ±8% with respect to the cross-sectional averaged void fraction at the measurement plane, where the averaged void fraction varied between 0.038 and 0.62 for the test conditions under investigation. (author)

  3. Diffusion benchmark calculations of a VVER-440 core with 180 deg symmetry

    International Nuclear Information System (INIS)

    A diffusion benchmark of the VVER-440 core with 180 deg symmetry and fixed cross sections is proposed. The new benchmark is the modification of Seidel's 3-dimensional 30 degree benchmark, which plays an important role in the verification and validation of nodal neutronic codes. In the new benchmark the 180 deg symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from diverse solutions of the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (author)

  4. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano; Ferrara, Liberato; Krenzer, Knut; Mechtcherine, Viktor; Shyshko, Sergiy; Skocec, Jan; Spangenberg, Jon; Svec, Oldrich; Thrane, Lars Nyholm; Vasilic, Ksenija

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  5. HPC in Java: Experiences in Implementing the NAS Parallel Benchmarks

    OpenAIRE

    Amedro, Brian; Caromel, Denis; Huet, Fabrice; Bodnartchouk, Vladimir; Delbé, Christian; L. Taboada, Guillermo

    2010-01-01

    This paper reports on the design, implementation and benchmarking of a Java version of the Nas Parallel Benchmarks. We first briefly describe the implementation and the performance pitfalls. We then compare the overall performance of the Fortran MPI (PGI) version with a Java implementation using the ProActive middleware for distribution. All Java experiments were conducted on virtual machines with different vendors and versions. We show that the performance varies with the type of computation...

  6. Performance and Scalability of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  7. Artificial Emotion Engine Benchmark Problem Based on Psychological Test Paradigm

    OpenAIRE

    Wang Yi; Wang Zhi-liang

    2013-01-01

    Most testing and evaluation of emotion models in the field of affective computing is self-evaluation aimed at an application-specific background, while research on the problem of a benchmark emotion model is scarce. This paper first proposes the feasibility of making psychological test paradigms part of an artificial emotion benchmark engine, and of judging the engine by testing it against psychological paradigms, taking versatility and effectiveness as the evaluation factors. In addition, ...

  8. Scalable randomized benchmarking of non-Clifford gates

    OpenAIRE

    Cross, Andrew W.; Magesan, Easwar; Bishop, Lev S.; Smolin, John A.; Gambetta, Jay M.

    2015-01-01

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of $n$-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized bench...
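
    As a hedged sketch of the analysis step common to randomized benchmarking protocols (the data below are synthetic and the constants illustrative; this is not the paper's scalable non-Clifford procedure), the average sequence fidelity is fitted to the exponential decay F(m) = A·p^m + B and the decay parameter p is converted into an average error rate:

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, p, B):
          # Standard randomized-benchmarking decay model for sequence length m.
          return A * p**m + B

      d = 2  # single-qubit Hilbert-space dimension (illustrative)
      seq_lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])

      # Synthetic "measured" fidelities with a little noise added.
      true_A, true_p, true_B = 0.5, 0.99, 0.5
      fidelities = rb_decay(seq_lengths, true_A, true_p, true_B) \
                   + np.random.default_rng(0).normal(0, 0.005, seq_lengths.size)

      (A, p, B), _ = curve_fit(rb_decay, seq_lengths, fidelities, p0=[0.5, 0.95, 0.5])
      avg_error = (1 - p) * (d - 1) / d  # standard relation between p and average error
      print("p = %.4f, average error per operation = %.2e" % (p, avg_error))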

  9. Benchmarking Open-Source Tree Learners in R/RWeka

    OpenAIRE

    Schauerhuber, Michael; Zeileis, Achim; Meyer, David; Hornik, Kurt

    2007-01-01

    The two most popular classification tree algorithms in machine learning and statistics - C4.5 and CART - are compared in a benchmark experiment together with two other more recent constant-fit tree learners from the statistics literature (QUEST, conditional inference trees). The study assesses both misclassification error and model complexity on bootstrap replications of 18 different benchmark datasets. It is carried out in the R system for statistical computing, made possible by means of the...

  10. Benchmarking Parallel Natural Algorithms for Telecommunications Devices Design

    Directory of Open Access Journals (Sweden)

    Carlos Henrique da Silva-Santos

    2013-06-01

    Full Text Available This work presents a benchmark of three adapted parallel nature-inspired algorithms (Genetic Algorithm, Evolutionary Strategy and Artificial Immune System), integrated with numerical techniques, to optimize a microstrip antenna and a photonic crystal based filter. The evaluations focused on the impact of parallel computing, considering convergence and runtime analyses. This benchmark helps to point out the efficiency of these algorithms for optimizing telecommunication devices together with a numerical solution, and it also provides runtime estimation equations for these optimizations.

  11. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    Science.gov (United States)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance and any compromise at any step will lead to poor final result. The various levels include choice of nuclear data library and energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping error in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization thereof, solution of transport equation using the previously generated groupwise information and obtaining the fluxes and reaction rates in various regions of the lattice; depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. The behaviour of the system is determined by composition of resonant
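
    As a hedged aside on the depletion level mentioned above (a generic point-depletion toy, not DRAGON's solver, data, or normalization scheme), a single burnup step amounts to advancing the nuclide number densities N through dN/dt = A·N at constant flux, for example with a matrix exponential:

      import numpy as np
      from scipy.linalg import expm

      phi = 1.0e14            # constant one-group flux [n/cm^2/s], illustrative
      sigma_c0 = 5.0e-24      # capture cross section of nuclide 0 [cm^2], illustrative
      lam1 = 1.0e-6           # decay constant of nuclide 1 [1/s], illustrative

      # Toy depletion matrix: nuclide 0 is removed by capture; nuclide 1 is
      # produced by that capture and removed by its own decay.
      A = np.array([[-sigma_c0 * phi, 0.0],
                    [ sigma_c0 * phi, -lam1]])

      N0 = np.array([1.0e22, 0.0])    # initial number densities [1/cm^3]
      dt = 30 * 24 * 3600.0           # one 30-day depletion step [s]

      N = expm(A * dt) @ N0           # densities at the end of the step
      print(N)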

  12. Entropy-based benchmarking methods

    OpenAIRE

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  13. Benchmarking in the Semantic Web

    OpenAIRE

    García-Castro, Raúl; Gómez-Pérez, A.

    2009-01-01

    The Semantic Web technology needs to be thoroughly evaluated for providing objective results and obtaining massive improvement in its quality; thus, the transfer of this technology from research to industry will speed up. This chapter presents software benchmarking, a process that aims to improve the Semantic Web technology and to find the best practices. The chapter also describes a specific software benchmarking methodology and shows how this methodology has been used to benchmark the inter...

  14. Numerical and computational aspects of the coupled three-dimensional core/ plant simulations: organization for economic cooperation and development/ U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-II. 3. Analysis of the OECD TMI-1 Main-Steam- Line-Break Benchmark Accident Using the Coupled RELAP5/PANTHER Codes

    International Nuclear Information System (INIS)

    The RELAP5 best-estimate thermal-hydraulic system code has been coupled with the PANTHER three-dimensional (3-D) neutron kinetics code via the TALINK dynamic data exchange control and processing tool. The coupled RELAP5/PANTHER code package is being qualified and will be used at British Energy (BE) and Tractebel Energy Engineering (TEE), independently, to analyze pressurized water reactor (PWR) transients where strong core-system interactions occur. The Organization for Economic Cooperation and Development/Nuclear Energy Agency PWR Main-Steam-Line-Break (MSLB) Benchmark problem was performed to demonstrate the capability of the coupled code package to simulate such transients, and this paper reports the BE and TEE contributions. In the first exercise, a point-kinetics (PK) calculation is performed using the RELAP5 code. Two solutions have been derived for the PK case. The first corresponds to scenario 1, where calculations are carried out using the original (BE) rod worth and where no significant return to power (RTP) occurs. The second corresponds to scenario 2, with arbitrarily reduced rod worth in order to obtain RTP (and was not part of the 'official' results). The results, as illustrated in Fig. 1, show that the thermal-hydraulic system response and rod worth are essential in determining the core response. The second exercise consists of a 3-D neutron kinetics transient calculation driven by best-estimate time-dependent core inlet conditions on an 18 thermal-hydraulic (T-H) zone basis derived from TRAC-PF1/MOD2 (PSU), again analyzing two scenarios with different rod worths. Two sets of PANTHER solutions were submitted for exercise 2. The first solution uses a spatial discretization of one node per assembly and 24 core axial layers for both the flux and the T-H mesh. The second is characterized by spatial refinement (2 x 2 nodes per assembly, 48 core layers for the flux and T-H calculation), time refinement (half-size time steps), and an increased radial discretization for solution

  15. Analysis of Unplated Subcritical Experiments Using Fresh Fuel Assemblies

    International Nuclear Information System (INIS)

    The number of spent nuclear fuel assemblies taken from nuclear power plants and to be stored in existing storage pools is increasing. Therefore, there is a need to optimize the storage configurations. The computer codes and cross sections used to analyze proposed storage configurations must be validated through comparison with experimental data. Restrictive values of ksafe, caused by limited data, can prevent optimal storage utilization. As a collaborative effort between Westinghouse Safety Management Solutions, Oak Ridge National Laboratory (ORNL), Georgia Institute of Technology, and the University of Missouri Research Reactor (MURR), more than 120 experiments were performed using four highly enriched MURR fuel assemblies. The 252Cf-source-driven noise analysis technique developed at ORNL was used as the measurement method for these experiments. This method is based on calculating a specific ratio of measured auto-power and cross-power spectral densities. Twenty-two unique configurations from the MURR experimental program were analyzed for benchmarking purposes. These subcritical experiments are described and analyzed in this paper to provide new measurements and increase the amount of data available for benchmarking criticality codes and cross sections for systems that are far from critical. Inferred keff values ranged from 0.648 ± 0.005 to 0.860 ± 0.006. A simplified benchmark model is described that consists of the four fuel assemblies, four 3He detectors, detector drywells, and the water reflector. For these measurements, the calculated ratio and keff values agreed with the measurement results within the measurement uncertainty. All of the analyzed configurations were considered acceptable for validation of computer codes and cross sections

  16. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  17. Core neutronics methodologies applied to the MOX-loaded KAIST 1A benchmark. Reference to industrial calculations

    International Nuclear Information System (INIS)

    EDF R and D is presently developing a new, state-of-the-art calculation chain called ANDROMEDE, including the APOLLO2/JEFF3-based CEA multigroup library/REL2005 scheme package for assembly computations and the COCAGNE 3D code for core computations. The goal of this paper is to validate the calculation chain and its methodologies on a numerical benchmark of a small PWR loaded with mixed fuel, KAIST 1A. The latter is challenging: it is highly heterogeneous, contains assemblies with burnable poison, offers a rodded configuration, and includes both UOX-MOX and core-reflector interfaces. Thus, we test the capabilities of the models used in ANDROMEDE to compute such cores. The validation methodology employed is as follows: stochastic calculations are used to validate the ability of the assembly schemes SHEM-MOC and REL2005 to compute 2D full cores. Afterwards, industrial two-group diffusion calculations were set up. Reactivity coefficients and pin-by-pin power distributions were compared with those obtained from REL2005. Finally, the last section gives the prospects for the use of multigroup SPn in industrial calculations. These raise several questions, such as the energy meshes to be used as well as the 2D reflector model to be applied. A reflector model is set up to test the SPn solver on full-core calculations, with results compared to those of the REL2005 scheme. (author)
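
    As a reference point for the industrial two-group diffusion calculations mentioned above, the minimal sketch below evaluates the infinite-medium two-group multiplication factor from assumed, purely illustrative cross sections; the numbers are not KAIST 1A data.

        # Minimal sketch of an infinite-medium two-group balance, using assumed
        # illustrative cross sections.  Group 1 = fast, group 2 = thermal.
        nusig_f = [0.008, 0.135]   # nu * Sigma_f  [1/cm] (assumed)
        sig_a   = [0.010, 0.080]   # absorption    [1/cm] (assumed)
        sig_12  = 0.020            # fast-to-thermal scattering (removal) [1/cm] (assumed)

        # Infinite-medium spectrum: thermal flux per unit fast flux from the group-2 balance.
        phi2_per_phi1 = sig_12 / sig_a[1]

        # k-infinity = neutron production / absorption, with the fast flux normalized to 1.
        k_inf = (nusig_f[0] + nusig_f[1] * phi2_per_phi1) / (sig_a[0] + sig_a[1] * phi2_per_phi1)
        print(f"k-infinity = {k_inf:.4f}")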

  18. Benchmarking Memory Performance with the Data Cube Operator

    Science.gov (United States)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
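
    To make the notion of the 2^d views concrete, the hedged sketch below enumerates and aggregates every group-by view of a tiny synthetic table of d-tuples; the attribute names, table contents, and the choice of a sum aggregate are assumptions for illustration and do not reproduce the ADC generator.

        # Hedged sketch: computing all 2^d group-by "views" of a tiny synthetic table.
        from itertools import combinations
        from collections import defaultdict

        attributes = ("a", "b", "c")                     # d = 3 dimensional attributes (assumed)
        rows = [                                         # (a, b, c, measure) -- made-up data
            (1, 1, 2, 10.0),
            (1, 2, 2, 5.0),
            (2, 1, 1, 7.0),
            (2, 2, 2, 3.0),
        ]

        def view(group_by):
            """Aggregate the measure over every combination of the selected attributes."""
            idx = [attributes.index(name) for name in group_by]
            agg = defaultdict(float)
            for row in rows:
                key = tuple(row[i] for i in idx)
                agg[key] += row[-1]                      # sum aggregate
            return dict(agg)

        # All 2^d views, from the empty group-by (grand total) up to the full cube.
        for d in range(len(attributes) + 1):
            for group_by in combinations(attributes, d):
                print(group_by, view(group_by))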

  19. Parallel processing of neutron transport in fuel assembly calculation

    International Nuclear Information System (INIS)

    Group constants, which are used for reactor analyses by the nodal method, are generated by fuel assembly calculations based on neutron transport theory, since one fuel assembly, or a quarter of one, corresponds to a unit mesh in current nodal calculations. The group constant calculation for a fuel assembly is performed through spectrum calculations, a two-dimensional fuel assembly calculation, and depletion calculations. The purpose of this study is to develop a parallel algorithm to be used in a parallel processor for the fuel assembly calculation and the depletion calculations of the group constant generation. A serial program, which solves the neutron integral transport equation using the transmission probability method and the linear depletion equation, was prepared and verified by a benchmark calculation. Small changes to the serial program were enough to parallelize the depletion calculation, which has inherently parallel characteristics. In the fuel assembly calculation, however, efficient parallelization is not simple because of the many coupling parameters in the calculation and the data communication among CPUs. In this study, the group distribution method is introduced for the parallel processing of the fuel assembly calculation to minimize the data communication. The parallel processing was performed on a Quadputer with 4 CPUs operating in the NURAD Lab at KAIST. Efficiencies of 54.3% and 78.0% were obtained in the fuel assembly calculation and the depletion calculation, respectively, which led to an overall speedup of about 2.5. As a result, it is concluded that the computing time consumed for the group constant generation can easily be reduced by parallel processing on a parallel computer with small CPUs
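
    The back-of-the-envelope sketch below shows how the quoted efficiencies on 4 CPUs can combine into an overall speedup near 2.5; the split of serial run time between the assembly and depletion parts is an assumption chosen for illustration, not a figure from the study.

        # Rough arithmetic: combining per-part parallel efficiencies into an overall speedup.
        n_cpu = 4
        eff_assembly, eff_depletion = 0.543, 0.780      # efficiencies quoted in the record
        frac_assembly, frac_depletion = 0.6, 0.4        # assumed fractions of serial run time

        speedup_assembly = eff_assembly * n_cpu          # speedup of each part = efficiency * N
        speedup_depletion = eff_depletion * n_cpu

        # Amdahl-style combination: parallel time is the sum of the scaled parts.
        overall = 1.0 / (frac_assembly / speedup_assembly + frac_depletion / speedup_depletion)
        print(f"overall speedup ~ {overall:.2f}")        # ~2.5 with the assumed split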

  20. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of the data sets being a function of the largest HPL matrix for the tested system.
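
    As a rough illustration of the kind of bandwidth-bound kernel the suite contains, the sketch below times a STREAM-style triad in NumPy; the array size, repetition count, and byte accounting are assumptions, and a Python measurement is not comparable to the compiled HPC Challenge kernels.

        # Hedged sketch of a STREAM-triad-style bandwidth measurement in NumPy.
        import time
        import numpy as np

        n = 10_000_000                                   # array length (assumed)
        a = np.zeros(n)
        b = np.random.rand(n)
        c = np.random.rand(n)
        alpha = 3.0

        best = float("inf")
        for _ in range(5):
            t0 = time.perf_counter()
            a[:] = b + alpha * c                         # triad: a = b + alpha * c
            best = min(best, time.perf_counter() - t0)

        bytes_moved = 3 * n * 8                          # read b, read c, write a (8-byte doubles)
        print(f"triad bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")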

  1. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins, Robert

    2010-01-01

    Benchmarking exercises are growing in popularity in the field of regional policy. This contribution analyses the concept of regional benchmarking and its links with regional policy-making processes. I develop a typology of regional benchmarking exercises and benchmarkers and subject the literature to a critical review. I argue that the critics of regional benchmarking fail to recognize the variety and develop...

  2. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as the 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor, and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed with object-oriented techniques and implemented using the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with the reference input data files, and provided consistent results among all the solvers by using common input data defined with CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast critical assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results of a Monte Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect, and mesh size effect were systematically evaluated and summarized in this report. (author)
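
    The record states that CUDM is object-oriented and written in Python but gives no interface details. The sketch below is therefore a purely hypothetical, much-simplified illustration of what a solver-independent data container and a solver-to-solver comparison could look like; all class, attribute, and function names are invented and do not reflect the actual CUDM interface.

        # Hypothetical, simplified sketch of a common user data model for code-to-code
        # benchmarking; names are invented for illustration only.
        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class CoreModel:
            """3-D XYZ core description shared by all solvers."""
            mesh: tuple                                   # (nx, ny, nz)
            macro_xs: dict = field(default_factory=dict)  # region name -> macroscopic cross sections

        @dataclass
        class CoreSolution:
            """Solver-independent results container."""
            solver: str
            keff: float
            flux: np.ndarray                              # shape (groups, nx, ny, nz)

        def compare(reference: CoreSolution, other: CoreSolution) -> dict:
            """Reactivity difference (pcm) and max relative flux deviation between two solvers."""
            drho_pcm = (1.0 / reference.keff - 1.0 / other.keff) * 1.0e5
            dflux = np.max(np.abs(other.flux - reference.flux) / np.abs(reference.flux))
            return {"delta_rho_pcm": drho_pcm, "max_rel_flux_diff": dflux}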

  3. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and thermal-hydraulics coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  4. Shielding benchmark test

    International Nuclear Information System (INIS)

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through iron blocks, performed at KFK using a 252Cf neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed, preliminarily, by using the revised JENDL data for fusion neutronics calculations. (author)

  5. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  6. Benchmark experiments for nuclear data

    International Nuclear Information System (INIS)

    Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)

  7. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  8. Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)

    International Nuclear Information System (INIS)

    For the computation of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed by using this code. For verification, the results are compared, where possible, with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module by using MATLAB; these plots are also presented here. Results are presented with an increasing number of lines to show their convergence. We have numerically computed volumes and surface areas and the percentage errors in these computations. These results show that the 2D results converge faster than the 3D results. Accuracy in the computed fluxes to the second decimal place is achieved with relatively few lines. (authors)
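
    The numerical volume check mentioned in the record can be pictured with the purely illustrative sketch below (it is not the DRAGON/NXT tracking algorithm): the area of a regular hexagonal cell is estimated by sweeping parallel tracking lines, summing chord lengths, and watching the error fall as the number of lines grows. The hexagon size and line counts are arbitrary assumptions.

        # Illustrative sketch: estimating the area of a regular hexagonal cell from
        # chord lengths along parallel tracking lines (assumed geometry and line counts).
        import numpy as np

        R = 1.0                                           # hexagon circumradius [cm] (assumed)
        verts = [(R * np.cos(a), R * np.sin(a)) for a in np.deg2rad(np.arange(0, 360, 60))]
        exact_area = 3.0 * np.sqrt(3.0) / 2.0 * R ** 2

        def chord_length(y):
            """Length of the horizontal chord y = const inside the convex hexagon."""
            xs = []
            for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]):
                if (y1 - y) * (y2 - y) < 0.0:             # edge strictly crosses the line
                    xs.append(x1 + (y - y1) / (y2 - y1) * (x2 - x1))
            return max(xs) - min(xs) if len(xs) == 2 else 0.0

        for n_lines in (10, 100, 1000):
            ys, dy = np.linspace(-R, R, n_lines, endpoint=False, retstep=True)
            area = sum(chord_length(y + 0.5 * dy) for y in ys) * dy   # midpoint sampling
            print(n_lines, f"lines: error = {100.0 * abs(area - exact_area) / exact_area:.3f} %")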

  9. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  10. A proposal of a benchmark for calculation of the power distribution next to the absorber

    International Nuclear Information System (INIS)

    A proposal for a new benchmark problem was formulated to consider the characteristics of the VVER-440 fuel assembly with enrichment zoning, i.e. to study the space dependence of the power distribution near a control assembly. A quite detailed geometry and the material composition of the fuel and control assemblies were modelled with the help of MCNP calculations at AEKI. The results of the MCNP calculations were built into the KARATE code system as new albedo matrices. The comparison of the KARATE calculation results and the MCNP calculations for this benchmark is presented. (author)

  11. Comparison of Computational Estimations of Reactivity Margin From Fission Products and Minor Actinides in PWR Burnup Credit

    International Nuclear Information System (INIS)

    This paper has presented the results of a computational benchmark and independent calculations to verify the benchmark calculations for the estimation of the additional reactivity margin available from fission products and minor actinides in a PWR burnup credit storage/transport environment. The calculations were based on a generic 32 PWR-assembly cask. The differences between the independent calculations and the benchmark lie within 1% for the uniform axial burnup distribution, which is acceptable. The Δk for KENO - MCNP results are generally lower than the other Δk values, due to the fact that HELIOS performed the depletion part of the calculation for both the KENO and MCNP results. The differences between the independent calculations and the benchmark for the non-uniform axial burnup distribution were within 1.1%

  12. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  13. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    International Nuclear Information System (INIS)

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications

  14. Comet whole-core solution to a stylized 3-dimensional pressurized water reactor benchmark problem with UO2 and MOX fuel

    International Nuclear Information System (INIS)

    A stylized pressurized water reactor (PWR) benchmark problem with UO2 and MOX fuel was used to test the accuracy and efficiency of the coarse mesh radiation transport (COMET) code. The benchmark problem contains 125 fuel assemblies and 44,000 fuel pins. The COMET code was used to compute the core eigenvalue and the assembly and pin power distributions for three core configurations. In these calculations, a set of tensor products of orthogonal polynomials was used to expand the neutron angular phase-space distribution on the interfaces between coarse meshes. The COMET calculations were compared with Monte Carlo reference solutions from the MCNP code using a recently published 8-group material cross-section library. The comparison showed that both the core eigenvalues and the assembly and pin power distributions predicted by COMET agree very well with the MCNP reference solution when the orders of the angular flux expansion in the two spatial variables and the polar and azimuthal angles on the mesh boundaries are 4, 4, 2 and 2. The mean and maximum differences in the pin fission density distribution ranged from 0.28% to 0.44% and from 3.0% to 5.5%, respectively, all within the 3-sigma uncertainty of the MCNP solution. These comparisons indicate that COMET can achieve accuracy comparable to Monte Carlo. It was also found that COMET's computational speed is 450 times faster than that of MCNP. (authors)
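
    The sketch below gives a hedged picture of what an interface expansion in tensor products of orthogonal polynomials looks like: a test function on a coarse-mesh boundary is projected onto products of Legendre polynomials in two variables and then reconstructed. The test function and expansion orders are arbitrary choices, not the COMET phase-space basis.

        # Hedged sketch: tensor-product Legendre expansion of a test surface distribution.
        import numpy as np
        from numpy.polynomial import legendre as L

        order = 4                                         # expansion order in each variable (assumed)
        xq, wq = L.leggauss(16)                           # Gauss-Legendre quadrature on [-1, 1]
        X, Y = np.meshgrid(xq, xq, indexing="ij")
        W = np.outer(wq, wq)

        f = np.exp(-X) * (1.0 + 0.3 * Y ** 2)             # surface distribution to expand (assumed)

        # Coefficients c_ij = (2i+1)(2j+1)/4 * integral of f * P_i(x) * P_j(y).
        P = [L.Legendre.basis(i)(xq) for i in range(order + 1)]
        recon = np.zeros_like(f)
        for i in range(order + 1):
            for j in range(order + 1):
                c_ij = (2 * i + 1) * (2 * j + 1) / 4.0 * np.sum(W * f * np.outer(P[i], P[j]))
                recon += c_ij * np.outer(P[i], P[j])

        print("max relative reconstruction error:", np.max(np.abs(recon - f) / np.abs(f)))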

  15. Computation of concentration changes of heavy metals in the fuel assemblies with 1.6% enrichment by ORIGEN code for VVER-1000

    International Nuclear Information System (INIS)

    ORIGEN is a widely used computer code for calculating the buildup, decay, and processing of radioactive materials. During the past few years, a sustained effort was undertaken by ORNL to update the original ORIGEN code [4] and its associated data bases. This effort updated the reactor models, cross sections, fission product yields, decay data, decay photon data, and the ORIGEN computer code itself. In this paper we have obtained the concentration changes of uranium and plutonium isotopes with the ORIGEN code at different burnups, and the results have been compared with VVER-1000 results for the first fuel cycle for fuel assemblies with 1.6% enrichment in the Bushehr Nuclear Power Plant. (author)
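
    As a greatly simplified, hedged illustration of the buildup/decay bookkeeping that a code like ORIGEN performs, the sketch below solves dN/dt = A N for a short 238U capture and decay chain with a matrix exponential; the one-group capture rate is an assumed illustrative value, not VVER-1000 data, and destruction of 239Pu is ignored.

        # Hedged sketch: 238U -> 239U -> 239Np -> 239Pu buildup via a matrix exponential.
        import numpy as np
        from scipy.linalg import expm

        lam_u239  = np.log(2.0) / (23.45 * 60.0)          # 239U half-life 23.45 min  [1/s]
        lam_np239 = np.log(2.0) / (2.356 * 24 * 3600.0)   # 239Np half-life 2.356 d   [1/s]
        capture_rate = 1.0e-9                             # sigma_c * phi for 238U [1/s] (assumed)

        # State vector N = [U-238, U-239, Np-239, Pu-239]; A holds production and loss terms.
        A = np.array([
            [-capture_rate, 0.0,       0.0,        0.0],
            [ capture_rate, -lam_u239, 0.0,        0.0],
            [ 0.0,           lam_u239, -lam_np239, 0.0],
            [ 0.0,           0.0,       lam_np239, 0.0],   # Pu-239 destruction neglected here
        ])

        N0 = np.array([1.0, 0.0, 0.0, 0.0])               # start from pure 238U (normalized)
        for days in (10, 100, 365):
            N = expm(A * days * 86400.0) @ N0
            print(days, "d  Pu-239 per initial U-238 atom:", f"{N[3]:.3e}")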

  16. An Interactive Assembly Process Planner

    Institute of Scientific and Technical Information of China (English)

    廖华飞; 张林鍹; 肖田元; 曾理; 古月

    2004-01-01

    This paper describes the implementation and performance of the virtual assembly support system (VASS), a new system that provides designers and assembly process engineers with a simulation and visualization environment in which they can evaluate the assemblability/disassemblability of products, and thereby use a computer to intuitively create assembly plans and interactively generate assembly process charts. Subassembly planning and assembly priority reasoning techniques were utilized to find heuristic information to improve the efficiency of assembly process planning. Tool planning was implemented to consider tool requirements in the product design stage. New methods were developed to reduce the amount of computation involved in interference checking. As an important feature of the VASS, human interaction was integrated into the whole process of assembly process planning, extending the power of computer reasoning with human expertise and resulting in better assembly plans and better designs.

  17. Benchmark analysis of MCNP™ ENDF/B-VI iron

    Energy Technology Data Exchange (ETDEWEB)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  18. A CFD simulation process for fast reactor fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Hamman, Kurt D., E-mail: Kurt.Hamman@inl.go [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Berry, Ray A. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States)

    2010-09-15

    A CFD modeling and simulation process for large-scale problems using an arbitrary fast reactor fuel assembly design was evaluated. Three-dimensional flow distributions of sodium for several fast reactor fuel assembly pin spacing configurations were simulated on high performance computers using commercial CFD software. This research focused on a 19-pin fuel assembly 'benchmark' geometry, similar in design to the Advanced Burner Test Reactor, where each pin is separated by helical wire-wrap spacers. Several two-equation turbulence models, including the k-ε and SST (Menter) k-ω, were evaluated. Considerable effort was made to resolve the momentum boundary layer, so as to eliminate the need for wall functions and reduce computational uncertainty. High performance computers were required to generate the hybrid meshes needed to predict secondary flows created by the wire-wrap spacers; computational meshes ranging from 65 to 85 million elements were common. A general validation methodology was followed, including mesh refinement and comparison of numerical results with empirical correlations. Predictions for velocity, temperature, and pressure distribution are shown. The uncertainty of numerical models, the importance of high fidelity experimental data, and the challenges associated with simulating and validating large production-type problems are presented.
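
    The mesh-refinement step of such a validation methodology is often summarized with an observed order of convergence and a Richardson-extrapolated value. The sketch below shows that arithmetic for three systematically refined meshes; the refinement ratio and the pressure-drop numbers are invented for illustration and are not from the fuel-assembly study.

        # Hedged sketch: observed order of convergence and Richardson extrapolation
        # from three systematically refined meshes (made-up numbers).
        import math

        r = 2.0                                           # grid refinement ratio (assumed)
        f_coarse, f_medium, f_fine = 103.8, 101.2, 100.5  # e.g. bundle pressure drop [kPa] (assumed)

        p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
        f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
        gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)  # grid convergence index

        print(f"observed order p = {p:.2f}")
        print(f"Richardson-extrapolated value = {f_extrap:.2f}")
        print(f"GCI (fine mesh) = {100.0 * gci_fine:.2f} %")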

  19. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  20. A heterogeneous analytical benchmark for particle transport methods development

    International Nuclear Information System (INIS)

    A heterogeneous analytical benchmark has been designed to provide a quality control measure for large-scale neutral particle computational software. Assurance that particle transport methods are efficiently implemented and that current codes are adequately maintained for reactor and weapons applications is a major task facing today's transport code developers. An analytical benchmark, as used here, refers to a highly accurate evaluation of an analytical solution to the neutral particle transport equation. Because of the requirement of an analytical solution, however, only relatively limited transport scenarios can be treated. To some this may seem to be a major disadvantage of analytical benchmarks. However, to the code developer, simplicity by no means diminishes the usefulness of these benchmarks since comprehensive transport codes must perform adequately for simple as well as comprehensive transport scenarios

  1. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  2. Alkyl-Based Surfactants at a Liquid Mercury Surface: Computer Simulation of Structure, Self-Assembly, and Phase Behavior.

    Science.gov (United States)

    Iakovlev, Anton; Bedrov, Dmitry; Müller, Marcus

    2016-04-21

    Self-assembled organic films on liquid metals feature a very rich phase behavior, which qualitatively differs from the one on crystalline metals. In contrast to conventional crystalline supports, self-assembled alkylthiol monolayers on liquid metals possess a considerably higher degree of molecular order, thus enabling much more robust metal-molecule-semiconductor couplings for organic electronics applications. Yet, compared to crystalline substrates, the self-assembly of organic surfactants on liquid metals has been studied to a much lesser extent. In this Letter we report the first of its kind molecular simulation investigation of alkyl-based surfactants on a liquid mercury surface. The focus of our investigation is the surfactant conformations as a function of surface coverage and surfactant type. First, we consider normal alkanes because these systems set the basis for simulations of all other organic surfactants on liquid mercury. Subsequently, we proceed with the discussion of alkylthiols that are the most frequently used surfactants in the surface science of hybrid organometallic interfaces. Our results indicate a layering transition of normal alkanes as well as alkylthiols from an essentially bare substrate to a completely filled monolayer of laying molecules. As the surface coverage increases further, we observe a partial wetting of the laying monolayer by the bulk phase of alkanes. In the case of alkylthiols, we clearly see the coexistence of molecules in laying-down and standing-up conformations, in which the sulfur headgroups of the thiols are chemically bound to mercury. In the standing-up phase, the headgroups form an oblique lattice. For the first time we were able to explicitly characterize the molecular-scale structure and transitions between phases of alkyl-based surfactants and to demonstrate how the presence of a thiol headgroup qualitatively changes the phase equilibrium and structure in these systems. The observed phenomena are consistent with

  3. Comparative analysis of CTF and trace thermal-hydraulic codes using OECD/NRC PSBT benchmark void distribution database

    International Nuclear Information System (INIS)

    The international OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark has been established to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes and to encourage advancement in the analysis of fluid flow in rod bundles. The aim is to improve the reliability of nuclear reactor safety margin evaluations. The benchmark is based on one of the most valuable databases identified for thermal-hydraulics modeling, which was developed by the Nuclear Power Engineering Corporation (NUPEC) in Japan. The database includes subchannel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurized Water Reactor (PWR) fuel assembly. Part of this database is made available for the international PSBT benchmark activity. The PSBT benchmark team is organized based on the collaboration between the Pennsylvania State University (PSU) and the Japan Nuclear Energy Safety Organization (JNES), including the participation and support of the U.S. Nuclear Regulatory Commission (NRC) and the Nuclear Energy Agency (NEA), OECD. On behalf of the PSBT benchmark team, PSU in collaboration with the US NRC is performing supporting calculations of the benchmark exercises using its in-house advanced thermal-hydraulic subchannel code CTF and the US NRC system code TRACE. CTF is a version of the well-known and widely used code COBRA-TF whose models have been continuously improved and validated over recent years at the Reactor Dynamics and Fuel Management Group (RDFMG) at PSU. TRACE is a reactor systems code developed by the U.S. Nuclear Regulatory Commission to analyze transient and steady-state thermal-hydraulic behavior in Light Water Reactors (LWRs), and it has been designed to perform best-estimate analyses of loss-of-coolant accidents (LOCAs), operational transients, and other accident scenarios in PWRs and boiling light-water reactors (BWRs). The paper presents

  4. Fast neutron benchmark proposal at TRIGA-ACPR Reactor

    International Nuclear Information System (INIS)

    The development of fast neutron benchmarks is a long-standing aim of reactor physics. The dry experimental tube situated in the central region of the core of the TRIGA Annular-Core Pulsing Reactor (ACPR) offers a suitable neutron source for fast neutron benchmark development. Our proposal consists of mounting a highly enriched uranium annular converter in the dry channel of the core. Preliminary computations and measurements are presented in this paper. Neutron flux computations in the dry channel and the uranium converter were performed using the MCNP and WIMS codes. Neutron flux spectrum measurements and fast and thermal neutron flux distribution measurements were also performed using foil activation techniques. (authors)

  5. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  6. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen, and driving on petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many, or even more, greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current understanding of the sustainability of fossil fuels, biofuels and electric driving. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  7. Cleanroom energy benchmarking results

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft², is not a good indicator with the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  8. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
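
    A small worked example makes the contrast concrete. The sketch below compares the arithmetic and geometric means of a made-up set of query timings; with the geometric mean, one very fast query largely cancels one very slow one, while the arithmetic mean tracks total elapsed time.

        # Illustration of geometric vs. arithmetic mean for made-up query timings.
        import math

        timings = [0.1, 1.0, 1.0, 1.0, 100.0]             # seconds per query (assumed)

        arithmetic = sum(timings) / len(timings)
        geometric = math.exp(sum(math.log(t) for t in timings) / len(timings))

        print(f"arithmetic mean = {arithmetic:.2f} s")     # ~20.6 s, dominated by the slow query
        print(f"geometric mean  = {geometric:.2f} s")      # ~1.6 s, the fast and slow outliers offset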

  9. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the socalled Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  10. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing...

  11. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with 'typical' and 'best-practice' benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
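
    At its core, such a tool places one building's energy use intensity (EUI) within a peer distribution. The hedged sketch below does exactly that with synthetic numbers; the peer data and the candidate building's EUI are invented and are not Cal-Arch or CBECS values.

        # Hedged sketch: percentile ranking of a building's EUI within a synthetic peer group.
        import numpy as np

        rng = np.random.default_rng(1)
        peer_eui = rng.lognormal(mean=4.0, sigma=0.35, size=500)   # kBtu/ft2-yr (synthetic peers)
        building_eui = 48.0                                        # candidate building (assumed)

        percentile = 100.0 * np.mean(peer_eui < building_eui)
        print(f"building EUI of {building_eui} kBtu/ft2-yr is at the {percentile:.0f}th percentile "
              "of its peer group (lower is better)")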

  12. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Catalina SITNIKOV; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  13. Self-organization of Dynamic Distributed Computational Systems Applying Principles of Integrative Activity of Brain Neuronal Assemblies

    OpenAIRE

    Eugene Burmakin; Fingelkurts, Alexander A.; Fingelkurts, Andrew A

    2009-01-01

    This paper presents a method for the self-organization of distributed systems operating in a dynamic context. We propose the use of a simple, biologically (cognitive neuroscience) inspired method for system configuration that allows most of the computational load to be moved off-line in order to improve the scalability of the system. The proposed method has a smaller computational burden at runtime than traditional system adaptation approaches.

  14. Two benchmarks for qualification of pressure vessel fluence calculational methodology

    International Nuclear Information System (INIS)

    Two benchmarks for the qualification of the pressure vessel fluence calculational methodology were formulated and are briefly described. The Pool Critical Assembly (PCA) benchmark is based on the experiments performed at the PCA in Oak Ridge. The measured quantities to be compared against the calculated values are the equivalent fission fluxes at several locations in front of, behind, and inside the pressure-vessel wall simulator. This benchmark is particularly suitable for testing the capabilities of the calculational methodology and cross-section libraries to predict in-vessel gradients because only a few approximations are necessary in the analysis. The HBR-2 benchmark is based on the data for the H.B. Robinson-2 plant, which is a 2,300 MW (thermal) pressurized light-water reactor. The benchmark provides the reactor geometry, the material compositions, the core power distributions, and the power history data. The quantities to be calculated are the specific activities of the radiometric monitors that were irradiated in the surveillance capsule and in the cavity location during one fuel cycle. The HBR-2 benchmark requires modeling approximations, power-to-neutron source conversion, and treatment of time-dependent variations. It can therefore be used to test the overall performance and adequacy of the calculational methodology for power-reactor pressure-vessel flux calculations. Both benchmarks were analyzed with the DORT code and the BUGLE-96 cross-section library that is based on ENDF/B-VI evaluations. The calculations agreed with the measurements within 10%, and the calculations underpredicted the measurements in all cases. This indicates that the ENDF/B-VI cross sections resolve most of the discrepancies between the measurements and calculations. The decrease of the C/M ratios with increased thickness of iron, which was typical for pre-ENDF/B-VI libraries, is almost completely removed

  15. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  16. Core periphery power tilt benchmark for WWER-440 definition

    International Nuclear Information System (INIS)

    The unstable accuracy of power forecasts at peripheral fuel pins and the utilization of operational data from WWER-440 reactors are the main motivations for the benchmark definition. Second-generation fuel assemblies with a mean enrichment of 4.25% in a 5-year cycle at Unit 4 of NPP Bohunice are analyzed, with emphasis on the last cycle. The starting point, the calculated period, and the results are characterized. SCORPIO data will be used for comparison. (Authors)

  17. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine making it possible to calculate neutron sources, and the replacement of the fission yields and branching ratios in the data library with those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared to those obtained with the more sophisticated code CASMO. (author)

  18. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  19. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  20. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiments descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  1. Shielding integral benchmark archive and database (SINBAD)

    International Nuclear Information System (INIS)

    The shielding integral benchmark archive and database (SINBAD) collection of experiments descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  2. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a...... market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type...

  3. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  4. Benchmarking Developing Asia's Manufacturing Sector

    OpenAIRE

    Felipe, Jesus; Gemma ESTRADA

    2007-01-01

    This paper documents the transformation of developing Asia's manufacturing sector during the last three decades and benchmarks its share in GDP with respect to the international regression line by estimating a logistic regression.

  5. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  6. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    International Nuclear Information System (INIS)

    For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but additional quantities such as reactivity differences or pin-wise fission rate distributions were also assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution and up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data, are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code, which is available from the current SCALE-6 version. (orig.)

  7. Simplified benchmark based on 2670 ISTC WWER post-irradiation examinations - specification and preliminary results

    International Nuclear Information System (INIS)

    Experimental validation of depletion computer codes is an ongoing need in spent fuel management. In the WWER application area, the lack of well-documented experimental data on depleted fuel is serious and is an obstacle to introducing new, effective technologies and approaches in spent fuel management, e.g. burnup credit (BUC). In 2004, the final report of the ISTC 2670 project on post-irradiation examinations (PIE) of eight samples from Novovoronezh-4 NPP (specimens taken from one assembly covering a burnup range from 22 to 45 MWd/kgU) was released and published. The 2670 WWER-440 post-irradiation examination is the first publicly available measurement that also provides fission product concentrations for the 'BUC set' of isotopes. Although the documentation of the experiment was quite comprehensive, some important data needed for a precise depletion simulation were still missing. Therefore in 2006, NRI, in collaboration with RIAR Dimitrovgrad, where the measurements were carried out, gathered the missing data and prepared a well-specified simplified benchmark based on this measurement. Its specification, as well as results of preliminary calculations using several depletion codes, is presented in this paper. Final evaluation of the results calculated by all benchmark participants is expected for presentation in 2008 (Authors)

  8. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    Directory of Open Access Journals (Sweden)

    Mariusz Butkiewicz

    2013-01-01

    Full Text Available With the rapidly increasing availability of High-Throughput Screening (HTS data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR models are built using Artificial Neural Networks (ANNs, Support Vector Machines (SVMs, Decision Trees (DTs, and Kohonen networks (KNs. Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
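
    As a rough, hedged illustration of the enrichment metric reported above (this is not the BCL::ChemInfo implementation), the sketch below ranks compounds by a predicted score and computes the enrichment of actives at a fixed true-positive-rate cutoff; the scores and activity labels are synthetic.

```python
import numpy as np

def enrichment_at_tpr(scores, labels, tpr_cutoff=0.25):
    """Enrichment of actives in the smallest top-ranked fraction that reaches
    the requested true-positive rate, relative to random selection.
    scores: higher = predicted more active; labels: 1 = active, 0 = inactive."""
    ranked = labels[np.argsort(-scores)]          # activity labels ordered by descending score
    n_active = ranked.sum()
    cum_tp = np.cumsum(ranked)                    # true positives recovered at each list depth
    depth = int(np.searchsorted(cum_tp, tpr_cutoff * n_active)) + 1
    hit_rate_top = cum_tp[depth - 1] / depth
    hit_rate_all = n_active / len(ranked)
    return hit_rate_top / hit_rate_all

# Synthetic toy screen: 10,000 compounds, ~1% actives, actives score higher on average
rng = np.random.default_rng(0)
labels = (rng.random(10_000) < 0.01).astype(int)
scores = rng.normal(size=10_000) + 2.0 * labels
print(f"enrichment at 25% TPR: {enrichment_at_tpr(scores, labels):.1f}")
```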

  9. Analytical benchmarks for nuclear engineering applications. Case studies in neutron transport theory

    International Nuclear Information System (INIS)

    The developers of computer codes involving neutron transport theory for nuclear engineering applications seldom apply analytical benchmarking strategies to ensure the quality of their programs. A major reason for this is the lack of analytical benchmarks and their documentation in the literature. The few such benchmarks that do exist are difficult to locate, as they are scattered throughout the neutron transport and radiative transfer literature. The motivation for this benchmark compendium, therefore, is to gather several analytical benchmarks appropriate for nuclear engineering applications under one cover. We consider the following three subject areas: neutron slowing down and thermalization without spatial dependence, one-dimensional neutron transport in infinite and finite media, and multidimensional neutron transport in a half-space and an infinite medium. Each benchmark is briefly described, followed by a detailed derivation of the analytical solution representation. Finally, a demonstration of the evaluation of the solution representation includes qualified numerical benchmark results. All accompanying computer codes are suitable for the PC computational environment and can serve as educational tools for courses in nuclear engineering. While this benchmark compilation does not contain all possible benchmarks, by any means, it does include some of the most prominent ones and should serve as a valuable reference. (author)

  10. Strategic Behaviour under Regulation Benchmarking

    OpenAIRE

    Jamasb, Tooraj; Nillesen, Paul; Michael G. Pollitt

    2003-01-01

    Liberalisation of generation and supply activities in the electricity sectors is often followed by regulatory reform of distribution networks. In order to improve the efficiency of distribution utilities, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behav...

  11. A Hierarchical Coarse-Grained (All-Atom-to-All-Residue) Computer Simulation Approach: Self-Assembly of Peptides

    OpenAIRE

    Pandey, Ras B.; Kuang, Zhifeng; Farmer, Barry L.

    2013-01-01

    A hierarchical computational approach (all-atom residue to all-residue peptide) is introduced to study self-organizing structures of peptides as a function of temperature. A simulated residue-residue interaction involving an all-atom description, analogous to knowledge-based analysis (with different input), is used as an input to a phenomenological coarse-grained interaction for large-scale computer simulations. A set of short peptides P1 (1H 2S 3S 4Y 5W 6Y 7A 8F 9N 10N 11K 12T) is considered a...

  12. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems

  13. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  14. Computer organization and design the hardware/software interface

    CERN Document Server

    Patterson, David A

    2009-01-01

    The classic textbook for computer systems analysis and design, Computer Organization and Design, has been thoroughly updated to provide a new focus on the revolutionary change taking place in industry today: the switch from uniprocessor to multicore microprocessors. This new emphasis on parallelism is supported by updates reflecting the newest technologies with examples highlighting the latest processor designs, benchmarking standards, languages and tools. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, compu

  15. Study on the characteristic statistic algorithm by use of the LP search benchmark problem

    International Nuclear Information System (INIS)

    The Characteristic Statistic Algorithm (CSA), which was proposed in the literature, is studied by use of the PWR LP search benchmark problem. Results demonstrated that the algorithm is capable of finding solutions that are close to the global optimum at quite low computational costs. The mechanism of the optimum searching process is also illustrated by use of the benchmark problem. (authors)

  16. Diffusion benchmark calculations of a WWER-440 core with 180 deg symmetry

    International Nuclear Information System (INIS)

    A diffusion benchmark of the VVER-440 core with 180 deg symmetry and fixed cross sections is proposed. The new benchmark is a modification of Seidel's three-dimensional 30 deg benchmark, which plays an important role in the verification and validation of nodal neutronic codes. The 180 deg symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (Authors)

  17. JNC's review and proposal for BN-600 hybrid core benchmark calculation

    International Nuclear Information System (INIS)

    This contribution includes questions on the benchmark description (geometry, composition, data evaluation) and proposals for the BN-600 benchmark project. The proposals concern the benchmarking of cell heterogeneity evaluation (fuel assembly, control rod); additional burnup properties (burnup reactivity loss, fuel composition change); analysis using the cross section sensitivity method (application of perturbation theory, influence of cross section differences, estimation of analytical method differences); and evaluation of the BN-600 design value and its errors (best estimated design value of the hybrid core, error estimation of the design value)

  18. Thermal fatigue benchmark final - research report

    International Nuclear Information System (INIS)

    DNV (Det Norske Veritas) has analysed a 3D mock-up, loaded with variable temperature. The load is applied to the internal surface of a pipe and deviates from the axisymmetrical case. The calculations were performed blind in an international benchmark project. DNV's contribution was funded by SKI. The calculations show the importance of taking the non-axisymmetry into account; an axisymmetrical analysis would underestimate the stresses in the pipe. The temperature field in the mock-up was measured at several locations in the pre-test condition. It turned out to be difficult to capture the measured field by applying only convection and adjusting heat transfer coefficients. The adjustment of the heat transfer coefficient proved to be a major problem: no standard estimation of these parameters was capable of satisfactorily capturing the temperature fields. This highlights the complexity of this kind of problem. It was reported by CEA that modelling of radiation was required for accurately resolving the stresses. The time to crack initiation was computed, as well as crack propagation rates. The computed crack initiation time is significantly longer than the crack propagation time. All results by DNV in terms of maximum stress range, computed design life and crack propagation time are comparable to those obtained by other contributors to the benchmark project. The DNV computed maximum stress range is Δσ = 715 MPa (von Mises); the contributions by other members range from 507 to 805 MPa. The DNV computed fatigue life (from two mean curves, ASME and CEA) ranges from 100,000 to 1,000,000 cycles, depending on the assumptions

  19. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)

  20. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    International Nuclear Information System (INIS)

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL

  1. A Meta-Theory of Boundary Detection Benchmarks

    OpenAIRE

    Hou, Xiaodi; Yuille, Alan; Koch, Christof

    2012-01-01

    Human labeled datasets, along with their corresponding evaluation algorithms, play an important role in boundary detection. We here present a psychophysical experiment that addresses the reliability of such benchmarks. To find better remedies to evaluate the performance of any boundary detection algorithm, we propose a computational framework to remove inappropriate human labels and estimate the intrinsic properties of boundaries.

  2. Analysis of BFS-62-3A critical experiment benchmark model - IGCAR results

    International Nuclear Information System (INIS)

    The BFS-62-3A assembly is a full scale model of the BN-600 hybrid core. The MOX zone is represented as a ring between the medium enriched (MEZ) and high enriched (HEZ) zones. The hybrid core with steel reflector is represented in a 120 deg sector of BFS. For a homogenised 3-D core of BFS, equivalent experimental data for keff and SVRE values were derived by applying the following corrections to the actually obtained experimental results: (a) heterogeneity effect and (b) 3-D model simplification effect. The nuclear data used was XSET-98, a 26 group set with ABBN type self-shielding factor tables. The benchmark models were analysed by diffusion theory. 3-D calculations were done by the TREDFR code in 26 groups with 6 triangular meshes per fuel assembly; the number of triangles was 24414. Axial mesh size corrections were estimated for some cases. The convergence criteria were 0.000001 for keff and 0.0001 for the point-wise fission source. The multiplication factor of the reference core of the benchmark is compared with the measured value and is predicted within the uncertainty margin. The SVRE values were computed as Δk/(k1k2) and compared to measured values. It is found that the predictions are within the uncertainty margin except in the MOX region; the reason for this needs to be investigated. As a first step, the axial mesh size effect was estimated for the MOX SVRE (sodium void reactivity effect) case by using finer meshes in the reference core as well as the MOX-voided core. Increasing the number of axial meshes from 35 to 54 reduced both keff values by the same amount, leaving the MOX SVRE worth unchanged
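
    As a worked illustration of the SVRE definition quoted above (a reactivity difference Δk/(k1k2) between the voided and reference states), the short sketch below performs that arithmetic in pcm; the multiplication factors used are hypothetical, not the benchmark values.

```python
def svre_pcm(k_ref, k_void):
    """Sodium void reactivity effect as a reactivity difference,
    (k_void - k_ref) / (k_ref * k_void) = 1/k_ref - 1/k_void, expressed in pcm."""
    return (k_void - k_ref) / (k_ref * k_void) * 1e5

# Hypothetical multiplication factors for the reference and sodium-voided states
print(f"SVRE = {svre_pcm(1.00020, 0.99850):+.1f} pcm")  # negative: voiding removes reactivity
```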

  3. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
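
    As a loose, hypothetical sketch of an error-driven learning rule of the kind these benchmarks evaluate (not the authors' neuromorphic implementation), the snippet below adapts a feedforward control term from the tracking error of a single simulated joint subject to an unknown constant disturbance; the plant model, gains, and constants are all assumptions for illustration.

```python
# Hypothetical single-joint plant: velocity driven by the control signal plus an unknown bias force
dt, steps, bias = 0.01, 2000, -0.8
target, learning_rate = 1.0, 0.5

position, adaptive_term = 0.0, 0.0
for _ in range(steps):
    error = target - position
    control = 5.0 * error + adaptive_term        # proportional feedback plus learned feedforward term
    adaptive_term += learning_rate * error * dt  # error-driven update of the feedforward term
    position += (control + bias) * dt            # integrate the simple first-order plant

print(f"final position: {position:.3f} (target {target}), learned term: {adaptive_term:.3f}")
```

    Without the adaptive term, the proportional controller settles with a steady-state offset; the error-driven update gradually absorbs the unknown bias.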

  4. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  5. ZZ IHEAS-BENCHMARKS, High-Energy Accelerator Shielding Benchmarks

    International Nuclear Information System (INIS)

    Description of program or function: Six kinds of benchmark problems were selected for evaluating the model codes and the nuclear data for intermediate and high energy accelerator shielding by the Shielding Subcommittee of the Research Committee on Reactor Physics. The benchmark problems contain three kinds of neutron production data from thick targets bombarded by protons, alphas and electrons, and three kinds of shielding data for secondary neutrons and photons generated by protons. Neutron and photo-neutron reaction cross section data are also provided for neutrons up to 500 MeV and photons up to 300 MeV, respectively

  6. Benchmarking of the ZR-6 critical assemblies using WIMS

    International Nuclear Information System (INIS)

    During the 1970s and early 1980s a wide-ranging series of experiments was performed in the ZR-6 facility in Budapest. The cores consisted of arrays of UO2 fuel rods on a hexagonal pitch with light water moderator. Criticality was achieved by varying the moderator height. (Authors)

  7. Self-assembled via axial coordination magnesium porphyrin-imidazole appended fullerene dyad: spectroscopic, electrochemical, computational, and photochemical studies.

    Science.gov (United States)

    D'Souza, Francis; El-Khouly, Mohamed E; Gadde, Suresh; McCarty, Amy L; Karr, Paul A; Zandler, Melvin E; Araki, Yasuyuki; Ito, Osamu

    2005-05-26

    Spectroscopic, redox, and electron transfer reactions of a self-assembled donor-acceptor dyad formed by axial coordination of magnesium meso-tetraphenylporphyrin (MgTPP) and fulleropyrrolidine appended with an imidazole coordinating ligand (C(60)Im) were investigated. Spectroscopic studies revealed the formation of a 1:1 C(60)Im:MgTPP supramolecular complex; the anticipated 1:2 complex could not be observed because of the large amounts of the axial coordinating ligand that would be needed. The formation constant, K(1), for the 1:1 complex was found to be (1.5 +/- 0.3) x 10(4) M(-1), suggesting fairly stable complex formation. The geometric and electronic structures of the dyads were probed by ab initio B3LYP/3-21G(*) methods. The majority of the highest occupied frontier molecular orbital (HOMO) was found to be located on the MgTPP entity, while the lowest unoccupied molecular orbital (LUMO) was on the fullerene entity, suggesting that the charge-separated state of the supramolecular complex is C(60)Im(*-):MgTPP(*+). Redox titrations involving MgTPP and C(60)Im allowed accurate determination of the oxidation and reduction potentials of the donor and acceptor entities in the supramolecular complex. These studies revealed more difficult oxidation, by about 100 mV, for MgTPP in the pentacoordinated C(60)Im:MgTPP compared to pristine MgTPP in o-dichlorobenzene. A total of six one-electron redox processes corresponding to the oxidation and reduction of the magnesium porphyrin ring and the reduction of the fullerene entity was observed within the accessible potential window of the solvent. The excited state events were monitored by both steady state and time-resolved emission as well as transient absorption techniques. In o-dichlorobenzene, upon coordination of C(60)Im to MgTPP, the main quenching pathway involved electron transfer from the singlet excited MgTPP to the C(60)Im moiety. The rate of forward electron transfer, k(CS), calculated from the picosecond time-resolved emission

  8. PapaBench: a Free Real-Time Benchmark

    OpenAIRE

    Nemer, Fadia; Cassé, Hugues; Sainrat, Pascal; Bahsoun, Jean-Paul; De Michiel, Marianne; Potpourri

    2006-01-01

    This paper presents PapaBench, a free real-time benchmark, and compares it with existing benchmark suites. It is designed to be valuable for experimental work in WCET computation and may also be useful for scheduling analysis. This benchmark is based on the Paparazzi project, which represents a real-time application developed to be embedded on different Unmanned Aerial Vehicles (UAV). In this paper, we explain the transformation process applied to Paparazzi to obtain PapaBench. We provide ...

  9. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    OpenAIRE

    Dreher, Patrick; Byun, Chansup; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many...

  10. On the fast multipole method for computing the energy of periodic assemblies of charged and dipolar particles

    International Nuclear Information System (INIS)

    In two dimensions, it is convenient to represent the coordinates (x, y) of particles as complex numbers z = x + iy. The energy of interaction of two point charges q1 and q2 at points represented by the complex numbers z1 and z2 is then proportional to q1 q2 ln|z1 - z2|, since the natural logarithm is the singular part of the Green's function for the two-dimensional Laplace equation. In performing molecular dynamics and Monte Carlo simulations of neutral systems of charged particles or of point dipoles, it is necessary to compute the energies and forces of an infinite periodic system in which the N charges or dipoles at the points z1, ..., zN resident in the primary (usually square) simulation cell are replicated everywhere in the plane
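
    As a minimal illustration of this complex-coordinate formulation, the sketch below evaluates the direct O(N^2) pairwise sum for charges in the primary cell only; the periodic replication and the fast multipole acceleration discussed in the record are not implemented, the sign and normalization of the logarithmic energy are assumed conventions, and the positions and charges are synthetic.

```python
import numpy as np

def direct_energy_2d(z, q):
    """Direct O(N^2) sum of pairwise logarithmic energies -q_i * q_j * ln|z_i - z_j|
    (sign and units convention assumed) for charges at complex positions z."""
    energy = 0.0
    n = len(z)
    for i in range(n):
        for j in range(i + 1, n):
            energy -= q[i] * q[j] * np.log(abs(z[i] - z[j]))
    return energy

# Synthetic neutral system of N charges scattered in the primary (unit square) cell
rng = np.random.default_rng(1)
n = 100
z = rng.random(n) + 1j * rng.random(n)
q = np.where(rng.random(n) < 0.5, 1.0, -1.0)
q -= q.mean()                                   # enforce overall charge neutrality
print(f"direct-sum energy of the primary cell: {direct_energy_2d(z, q):.4f}")
```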

  11. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to NEACRP in 1988 has been calculated by many participants, and the corresponding results are summarized in this report. The results for keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  12. Assembly and annotation of a non-model gastropod (Nerita melanotragus) transcriptome: a comparison of De novo assemblers

    OpenAIRE

    Amin, Shorash; Prentis, Peter J.; Gilding, Edward K.; Pavasovic, Ana

    2014-01-01

    Background The sequencing, de novo assembly and annotation of transcriptome datasets generated with next generation sequencing (NGS) has enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation of transcriptomes, however, is a critically important step for transcriptome assemblies generated from short read sequences. Typical benchmarks for assembly and annotation reliability have been performed with mode...

  13. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    This report compiles results of benchmark tests for validation of evaluated nuclear data to be used in nuclear designs of fusion reactors. Parts of the results were obtained under activities of the Fusion Neutronics Integral Test Working Group organized by members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were used for the tests: (i) the leakage neutron spectrum measurement experiments from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments) also at FNS, and (iii) the pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. The evaluated nuclear data tested were JENDL-3.2, JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  14. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, `Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors`, to investigate the accuracy of calculation results obtained by using codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system which was prepared for the HTGR development of the individual country. The values of the most important parameter, k{sub eff}, by all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by several institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.).

  15. Benchmark calculations in multigroup and multidimensional time-dependent transport

    International Nuclear Information System (INIS)

    It is widely recognized that reliable benchmarks are essential in many technical fields in order to assess the response of any approximation to the physics of the problem to be treated and to verify the performance of the numerical methods used. The best possible benchmarks are analytical solutions to paradigmatic problems where no approximations are actually introduced and the only error encountered is connected to the limitations of computational algorithms. Another major advantage of analytical solutions is that they allow a deeper understanding of the physical features of the model, which is essential for the intelligent use of complicated codes. In neutron transport theory, the need for benchmarks is particularly great. In this paper, the authors propose to establish accurate numerical solutions to some problems concerning the migration of neutron pulses. Use will be made of the space asymptotic theory, coupled with a Laplace transformation inverted by a numerical technique directly evaluating the inversion integral

  16. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.......This paper investigates a state estimation set-membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque such that the overall model of the turbine is...... nonlinear. We use an effective wind speed estimator to estimate the effective wind speed and then, using interval analysis and monotonicity of the aerodynamic torque with respect to the effective wind speed, we can apply the method to the nonlinear system. The fault detection algorithm checks the consistency...
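
    A minimal sketch of the consistency test described above (not the benchmark's actual interval estimator): a predicted measurement set is built from bounded model and noise assumptions, and a fault is flagged when the new measurement falls outside it. The one-dimensional model, bounds, and numbers are assumptions for illustration only.

```python
def consistent(measurement, predicted, model_bound, noise_bound):
    """Set-membership consistency test: the measurement must lie in the interval
    [predicted - model_bound - noise_bound, predicted + model_bound + noise_bound]."""
    radius = model_bound + noise_bound
    return predicted - radius <= measurement <= predicted + radius

# Hypothetical scalar example: prediction from past data vs. a new noisy measurement
predicted_speed = 12.0   # effective wind speed predicted from the model (m/s)
new_measurement = 14.5   # incoming sensor reading (m/s)
if consistent(new_measurement, predicted_speed, model_bound=1.0, noise_bound=1.0):
    print("measurement consistent with the predicted set, no fault")
else:
    print("fault detected: measurement outside the predicted set")
```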

  17. COMPUTER SIMULATION OF MISCIBILITY AND SELF-ASSEMBLY STRUCTURE FOR POLYMER-CONTAINING SYSTEMS WITH SPECIAL INTERACTIONS

    Institute of Scientific and Technical Information of China (English)

    Tong-fei Shi; Ying Zhang; Wei Jiang; Li-jia An; Bin-yao Li

    2003-01-01

    The miscibility and structure of A-B copolymer/C homopolymer blends with special interactions were studied by a Monte Carlo simulation in two dimensions. The interaction between segment A and segment C was repulsive, whereas it was attractive between segment B and segment C. In order to study the effect of copolymer chain structure on the morphology and structure of A-B copolymer/C homopolymer blends, alternating, random and block A-B copolymers were introduced into the blends, respectively. The simulation results indicated that the miscibility of A-B block copolymer/C homopolymer blends depended on the chain structure of the A-B copolymer. Compared with alternating or random copolymers, the block copolymer, especially the diblock copolymer, could lead to poor miscibility of A-B copolymer/C homopolymer blends. Moreover, for diblock A-B copolymer/C homopolymer blends, an obvious self-organized core-shell structure was observed in the segment B composition region from 20% to 60%. However, if the diblock copolymer composition in the blends is less than 40%, an obvious self-organized core-shell structure could be formed in the B-segment component region from 10% to 90%. Furthermore, computer statistical analysis of the simulation results showed that the core sizes tended to increase continuously and their distribution became wider with decreasing B-segment component.

  18. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups and is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicists with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.

  19. Testing of cross section libraries on zirconium benchmarks

    International Nuclear Information System (INIS)

    Highlights: ► Calculations with ENDF/B-VII.0 nuclear data overpredict keff of Zr benchmarks. ► TRIGA criticality benchmark sensitive to Zr data. ► Zr scattering cross section responsible for differences in keff. ► Need for new experimental data on Zr cross sections. - Abstract: In this paper we investigate the influence of various up-to-date nuclear data libraries, such as ENDF/B-VI.6, ENDF/B-VII.0 and JEFF 3.1, on the multiplication factor of the TRIGA benchmark with fuel made of enriched uranium and zirconium hydride and SB light-water reactor benchmarks with fuel made of fissile material in zirconium matrix. The calculations are performed with the Monte Carlo computer code MCNP. Differences of ∼600 pcm in keff are observed for the benchmark model of the TRIGA reactor, while there are practically no differences in the kinf of the fuel. Therefore, an investigation is performed also for hypothetical homogeneous and heterogeneous systems with different leakage. The uncertainty analysis shows that the most important contributors to the difference in keff are the Zr isotopes (especially 90Zr and 91Zr) and thermal scattering data for H and Zr in ZrH. As the differences in keff due to the use of different cross section libraries are relatively large, there is certainly a need for a review of the evaluated cross section data of the zirconium isotopes.

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  1. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  2. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran programming. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 Compton-scattered spectra measurements for 20-40 keV, linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam to hit the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth dose distribution using a small solid-state detector. (author)

  3. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  4. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  5. Numerical and computational aspects of the coupled three-dimensional core/ plant simulations: organization for economic cooperation and development/ U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-II. 1. Significance of Refined Core Thermal- Hydraulics Nodalization in the MSLB Analysis

    International Nuclear Information System (INIS)

    In three-dimensional kinetics coupled system calculations, coarse thermal-hydraulics (T-H) nodes are frequently employed in the core region for modeling and computational efficiency. It is obvious, however, that a refined core T-H nodalization would result in a better solution. In this paper, improvements achievable by such refinement are evaluated for the OECD Main-Steam-Line-Break (MSLB) Benchmark Exercise III problem using the MARS/MASTER and RELAP5/PARCS codes (Refs. 2 and 3, respectively). The MARS/MASTER code can be run in two modes: with and without the internal COBRA III-CP T-H module of MASTER turned on. In the coarse T-H mode, the core T-H calculation is performed entirely by the MARS T-H module with a relatively coarse core T-H nodalization. In the refined T-H mode, both MARS and COBRA modules perform the core T-H calculation. The COBRA module takes flow boundary conditions from the MARS results and performs the core T-H calculation employing refined T-H nodes. The first MARS/MASTER MSLB model is for the coarse T-H mode, and it consists of 18 flow channels and heat structures and 6 axial levels in the active core region. The second model is for the refined T-H mode and assigns a channel to each assembly; it consists of 177 radial and 24 axial nodes. The other two models are for RELAP5/PARCS. One RELAP5/PARCS model employs the same core T-H nodalization as the first MARS/MASTER model. The other one still has 18 flow channels, but it has 192 heat structures constructed by assigning one heat structure to each fuel assembly; multiple heat structures are immersed in a channel in this model. The keff values obtained with the MARS/MASTER coarse and refined models are 1.00550 and 1.00317, respectively, while the RELAP5/PARCS values are 1.00641 and 1.00511. The keff is lowered by the refined model because of the lowered importance of reactive nodes experiencing stronger feedback with the refined model. This trend is consistent with the results obtained by others
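
    For reference, the differences between the coarse- and refined-nodalization multiplication factors quoted above can be expressed as reactivity differences in pcm; the short sketch below only performs that arithmetic on the keff values reported in the record.

```python
def reactivity_diff_pcm(k_a, k_b):
    """Reactivity difference (1/k_a - 1/k_b) between two keff values, in pcm."""
    return (1.0 / k_a - 1.0 / k_b) * 1e5

# keff values quoted in the record (refined vs. coarse core T-H nodalization)
print(f"MARS/MASTER:  {reactivity_diff_pcm(1.00317, 1.00550):.0f} pcm")
print(f"RELAP5/PARCS: {reactivity_diff_pcm(1.00511, 1.00641):.0f} pcm")
```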

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and for addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  9. Validation of CITVAP Power Distribution via the IAEA 3-D PWR Benchmark Problem

    International Nuclear Information System (INIS)

    Calculation of the effective multiplication factor (keff) and the power distribution is a very important task for fuel assembly design and whole core safety analysis. In this work, a deterministic method was used with the CITVAP v3.1 code, which is based on diffusion theory with the finite difference method, to assess the accuracy of the keff calculation and the power distribution of the IAEA 3-D PWR benchmark problem. The power maps are studied at the fuel assembly-by-assembly level. The code results for the keff value and the power map distribution are compared with reference values. The results demonstrate that the CITVAP v3.1 code gives good estimates of both the keff value and the power distribution for the critical core benchmark, provided an adequate number of meshes is chosen per assembly
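
    A hedged sketch of how such an assembly-by-assembly power comparison can be scored (this is not part of CITVAP or the benchmark specification): relative deviations of a calculated map from a reference map are reduced to maximum and RMS errors. The small maps below are hypothetical.

```python
import numpy as np

def power_map_errors(calc, ref):
    """Maximum and RMS relative deviation (%) of a calculated assembly power map
    from a reference map of the same shape."""
    rel = (np.asarray(calc) - np.asarray(ref)) / np.asarray(ref) * 100.0
    return float(np.abs(rel).max()), float(np.sqrt(np.mean(rel ** 2)))

# Hypothetical normalized assembly powers (calculated vs. reference)
calc = [[1.012, 0.987], [0.995, 1.006]]
ref  = [[1.000, 1.000], [1.000, 1.000]]
max_err, rms_err = power_map_errors(calc, ref)
print(f"max deviation: {max_err:.2f} %, RMS deviation: {rms_err:.2f} %")
```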

  10. Conclusion of the I.C.T. benchmark exercise

    International Nuclear Information System (INIS)

    The ICT Benchmark exercise, conducted within the RIV working group of ESARDA on reprocessing data supplied by COGEMA for 53 routine reprocessing input batches made of 110 irradiated fuel assemblies from the KWO Nuclear Power Plant, was finally evaluated. The conclusions are: all seven different ICT methods applied verified the operator data on plutonium to within about one percent; anomalies intentionally introduced into the operator data were detected in 90% of the cases; the nature of the introduced anomalies, which was unknown to the participants, was completely resolved for the safeguards relevant cases; the false alarm rate was in the few percent range. The ICT Benchmark results show that this technique is capable of detecting and resolving anomalies in the reprocessing input data to the order of a percent

  11. Thermal Analysis of a TREAT Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Papadias, Dionissios [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, Arthur E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-07-09

    The objective of this study was to explore options for reducing peak cladding temperatures despite an increase in peak fuel temperatures. A 3D thermal-hydraulic model of a single TREAT fuel assembly was benchmarked to reproduce results obtained with previous thermal models developed for a TREAT HEU fuel assembly. In exercising this model, and variants thereof depending on the scope of analysis, various options were explored to reduce the peak cladding temperatures.

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  13. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  14. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files.

  15. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity higher than that of a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
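
    The byte/flop figures above can be read through the roofline model, under which a kernel with arithmetic intensity I (flop per byte of memory traffic) on hardware with peak compute P and bandwidth B attains at most min(P, I·B). The sketch below uses invented machine numbers (not measurements from the cited study) to show why a machine balanced at roughly 0.2 byte/flop leaves most algorithms memory-bound while an FMM-like kernel needing only about 0.01 byte/flop stays compute-bound.

```python
def attainable_gflops(intensity_flop_per_byte, peak_gflops, bandwidth_gb_s):
    """Roofline model: performance is capped by either compute peak or memory traffic."""
    return min(peak_gflops, intensity_flop_per_byte * bandwidth_gb_s)

# Illustrative accelerator figures (assumed, not from the paper): 1 Tflop/s peak,
# 200 GB/s memory bandwidth, i.e. a machine balance of 0.2 byte/flop.
peak_gflops = 1000.0
bandwidth_gb_s = 200.0

for name, byte_per_flop in [("dense matrix-vector", 4.0),
                            ("FFT-like", 1.0),
                            ("FMM-like", 0.01)]:
    intensity = 1.0 / byte_per_flop  # flop performed per byte moved
    perf = attainable_gflops(intensity, peak_gflops, bandwidth_gb_s)
    bound = "compute-bound" if perf >= peak_gflops else "memory-bound"
    print(f"{name:20s} intensity={intensity:7.1f} flop/byte -> {perf:7.1f} Gflop/s ({bound})")
```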

  16. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  17. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Henning, Theuns; Essakali, Mohammed Dalil; Oh, Jung Eun

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  18. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  19. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  1. Computational efficiency analysis of fuel pin damage registration and fuel assembly damage location by means of a sector fuel failure detection and location system

    International Nuclear Information System (INIS)

    Detection of fuel pin clad integrity loss at an early accident stage is of major importance, and detection should therefore be carried out in a timely manner, since this makes it possible to minimize the consequences of the initial failure event. Another problem important for mitigating the accident consequences is the rapid location of a fuel assembly with a damaged pin within the reactor core configuration. Both of the above problems have been solved by means of a computational transport analysis of the fission products that enter the reactor coolant flow after a loss of fuel pin clad integrity and migrate along the primary circuit. The calculation runs have been carried out for the BN-600 fast reactor, which is provided with a specially designed fuel pin clad integrity loss registration system (FPCILRS). Six delayed neutron detectors are mounted on the outer surface of the reactor safety closure, facing the intermediate heat exchangers (IHX) at the level of their inlet windows. When the clad integrity of a fuel pin is lost, fission products that generate delayed neutrons enter the coolant volume, are captured by the coolant flow and are transported toward the IHX inlet windows, where their presence is detected by the delayed neutron detectors. The computational analysis has been carried out in three steps. First step: detailed calculation of the coolant flow velocity field in the upper reactor region, including the part of the core located above the integrity loss point, the upper reactor chamber and the intermediate heat exchangers. Second step: transport calculation for the fission products entering the coolant flow path. Third step: estimate of the FPCILRS detector signal from the established unsteady-state in-core delayed neutron source concentration fields. The reactor design has been described under a 3-D cylindrical geometry approximation. Unsteady-state in-core delayed neutron source concentration fields have been computed under two approximations (Euler
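
    As a very rough illustration of the second and third steps described above (and in no way a model of the BN-600 itself), the sketch below advects a single group of delayed-neutron precursors with plug flow from the leak point to an IHX inlet window and estimates the delayed-neutron emission rate seen by a detector there. The decay constant, release rate, velocity, path length and residence time are all invented placeholders.

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not BN-600 data.
lam = 0.08            # delayed-neutron precursor decay constant, 1/s (single group)
release_rate = 1.0e9  # precursors released per second at the leak point
velocity = 2.0        # coolant plug-flow velocity, m/s
path_length = 6.0     # distance from leak point to the IHX inlet window, m
residence_time = 0.5  # time a precursor spends in the detector's field of view, s

transit_time = path_length / velocity
# Fraction of precursors still undecayed on arrival (pure advection, no mixing or dilution).
survival = np.exp(-lam * transit_time)
arriving_rate = release_rate * survival   # precursors per second passing the inlet window
# Delayed neutrons emitted per second while precursors cross the detector's field of view.
emissions_per_second = arriving_rate * (1.0 - np.exp(-lam * residence_time))

print(f"transit time = {transit_time:.1f} s, precursor survival = {survival:.2f}")
print(f"delayed-neutron emission rate near detector ~ {emissions_per_second:.2e} 1/s")
```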

  2. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer, the ubiquitous portal of work and personal lives. At this point, the computer is so common that we hardly notice it in our view. It is difficult to envision that not that long ago it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  3. Benchmark Specification for HTGR Fuel Element Depletion

    International Nuclear Information System (INIS)

    explicitly represent the dynamics of neutron slowing down in a heterogeneous environment with randomised grain distributions, but traditional tracking simulations can be extremely slow, and the large number of grains in a fuel element may often represent an extreme burden on computational resources. A number of approximations or simplifying assumptions have been developed to simplify the computational process and reduce the effort. Multi-group (MG) methods, on the other hand, require special treatment of DH fuels in order to properly capture resonance effects, and generally cannot explicitly represent a random distribution of grains due to the excessive computational burden resulting from the spatial grain distribution. The effect of such approximations may be important and has the potential to misrepresent the spectrum within a fuel grain. Depletion in lattice calculations typically relies on point depletion methods, based on the isotopic inventory of the depleted fuel, assuming a single localised neutron flux. This flux is generally determined using either a CE or MG transport solver. Hence, in application to DH fuels, the primary factor influencing the accuracy of a depletion calculation will be the accuracy of the local flux calculated within the transport solution and the cross-sections. The current lack of well-qualified experimental measurements for spent HTGR fuel elements limits the validation of advanced DH depletion methods. Because of this shortage of data, this benchmark has been developed as the first, simplest phase in a planned series of increasingly complex code-to-code benchmarks. The intent of this benchmark is to encourage submission of a wide range of computational results for depletion calculations in a set of basic fuel cell models. Comparison of results using independent methods and data should provide insight into potential limitations in various modelling approximations. The benchmark seeks to provide the simplest possible models, in
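
    Point depletion, as referred to above, amounts to solving the Bateman equations for the isotopic inventory under a single localised flux. A minimal sketch for a hypothetical two-nuclide chain (a parent depleting by neutron capture into a daughter that decays) is given below; the cross section, flux and half-life are illustrative placeholders, not data from the benchmark.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-nuclide chain: parent (neutron capture) -> daughter (beta decay).
sigma_c = 10.0e-24                    # parent capture cross section, cm^2 (10 barns, assumed)
phi = 1.0e14                          # single localised scalar flux, n/cm^2/s (assumed)
lam_d = np.log(2) / (5 * 24 * 3600)   # daughter decay constant for an assumed 5-day half-life

# Depletion matrix A in dN/dt = A N, with N = [parent, daughter] number densities.
A = np.array([[-sigma_c * phi, 0.0],
              [ sigma_c * phi, -lam_d]])

N = np.array([1.0e22, 0.0])           # initial number densities, atoms/cm^3
dt = 24 * 3600.0                      # one-day substeps with the flux held constant
step = expm(A * dt)                   # exact solution operator for a constant-flux substep
for _ in range(30):                   # deplete for 30 days
    N = step @ N

print(f"parent remaining: {N[0]:.4e} /cm^3, daughter built up: {N[1]:.4e} /cm^3")
```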

  4. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal;

    2014-01-01

    assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data such...

  5. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload. It explains possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may be dependent on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput.
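
    The hybrid step/binary search can be illustrated with the hedged sketch below: a coarse step phase finds the region where loss first appears, and a binary phase refines the largest offered load that is still forwarded without loss. The `measure_forwarding_rate` function is a hypothetical stand-in for driving a real traffic generator against a gateway; its saturation behaviour is invented, and the sketch is a generic search, not necessarily the authors' exact algorithm.

```python
def measure_forwarding_rate(offered_load_pps):
    """Hypothetical stand-in for one test trial: returns the packet rate actually forwarded.
    Fakes a gateway that saturates at 80 kpps and degrades by 10% when overloaded."""
    peak = 80_000
    return offered_load_pps if offered_load_pps <= peak else 0.9 * peak

def find_zero_loss_throughput(max_load=200_000, coarse_step=20_000, tol=500):
    """Coarse step search, then binary search, for the largest load forwarded without loss."""
    lo, load = 0, coarse_step
    while load <= max_load and measure_forwarding_rate(load) >= load:
        lo, load = load, load + coarse_step          # step up until loss first appears
    hi = min(load, max_load)
    while hi - lo > tol:                             # refine between loss-free and lossy loads
        mid = (lo + hi) // 2
        if measure_forwarding_rate(mid) >= mid:
            lo = mid
        else:
            hi = mid
    return lo

print(f"estimated zero-loss throughput ~ {find_zero_loss_throughput()} packets/s")
```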

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  8. Gaming in a benchmarking environment. A non-parametric analysis of benchmarking in the water sector

    OpenAIRE

    De Witte, Kristof; Marques, Rui

    2009-01-01

    This paper discusses the use of benchmarking in general and its application to the drinking water sector. It systematizes the various classifications on performance measurement, discusses some of the pitfalls of benchmark studies and provides some examples of benchmarking in the water sector. After presenting in detail the institutional framework of the water sector of the Belgian region of Flanders (without benchmarking experiences), Wallonia (recently started a public benchmark) and the Net...

  9. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp of “Benchmarking: The search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  10. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as the operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  11. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  12. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
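
    As a hedged illustration of the kind of metric the guideline describes (the report's exact normalization may differ), the sketch below derives a simple energy use intensity per store from monthly utility data and flags stores well above the portfolio median. All store data are invented placeholders.

```python
import statistics

# Hypothetical monthly utility data per store: (store id, kWh used, square feet, transactions).
stores = [
    ("A", 42_000, 2_800, 15_000),
    ("B", 55_000, 3_000, 14_500),
    ("C", 38_000, 2_600, 16_200),
    ("D", 71_000, 2_900, 15_800),
]

# Energy use intensity normalized by floor area (kWh per square foot per month).
eui = {sid: kwh / sqft for sid, kwh, sqft, _ in stores}
median_eui = statistics.median(eui.values())

for sid, value in sorted(eui.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- above 120% of portfolio median" if value > 1.2 * median_eui else ""
    print(f"store {sid}: {value:5.1f} kWh/sqft{flag}")
```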

  13. Local Innovation Systems and Benchmarking

    OpenAIRE

    Cantner, Uwe

    2008-01-01

    This paper reviews approaches used for evaluating the performance of local or regional innovation systems. This evaluation is performed by a benchmarking approach in which a frontier production function can be determined, based on a knowledge production function relating innovation inputs and innovation outputs. In analyses on the regional level and especially when acknowledging regional innovation systems those approaches have to take into account cooperative invention and innovation - the c...

  14. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  15. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A&M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity, modeled using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level; however, the blocks which construct the core remain strongly heterogeneous. A six-group multigroup (discrete energy) cross-section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross-section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross-section data, so that method developers may use these problems as a standard reference point.

  16. Prismatic VHTR neutronic benchmark problems

    International Nuclear Information System (INIS)

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity, modeled using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level; however, the blocks which construct the core remain strongly heterogeneous. A six-group multigroup (discrete energy) cross-section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross-section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross-section data, so that method developers may use these problems as a standard reference point
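
    The core eigenvalue results mentioned above come from solving a k-eigenvalue problem. The sketch below is a drastically simplified, hedged illustration of that idea only: a two-group, infinite-medium balance solved by power iteration with invented cross sections. The benchmark itself uses a six-group set and full spatial transport, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Hypothetical two-group, infinite-medium cross sections (1/cm); not benchmark data.
sig_a = np.array([0.010, 0.080])      # absorption
sig_s12 = 0.020                       # downscatter from group 1 to group 2
nu_sig_f = np.array([0.005, 0.125])   # nu * fission
chi = np.array([1.0, 0.0])            # all fission neutrons born in group 1

# Loss matrix A and production matrix F so that A*phi = (1/k) * F*phi.
A = np.array([[sig_a[0] + sig_s12, 0.0],
              [-sig_s12,           sig_a[1]]])
F = np.outer(chi, nu_sig_f)

phi, k = np.ones(2), 1.0
for _ in range(200):                  # power (source) iteration on the fission source
    phi_new = np.linalg.solve(A, (F @ phi) / k)
    k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
    if abs(k_new - k) < 1e-8:
        k, phi = k_new, phi_new
        break
    k, phi = k_new, phi_new

print(f"k-infinity ~ {k:.5f}")
```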

  17. Uranium-fuel thermal reactor benchmark testing of CENDL-3

    International Nuclear Information System (INIS)

    CENDL-3, the new version of the China Evaluated Nuclear Data Library, has recently been processed and distributed for thermal reactor benchmark analysis. The processing was carried out using the NJOY nuclear data processing system. The calculations and analyses of the uranium-fuel thermal assemblies TRX-1,2, BAPL-1,2,3, and ZEEP-1,2,3 were done with the lattice code WIMSD5A. The results were compared with the experimental results, the results of the '1986' WIMS library, and the results based on ENDF/B-VI. (author)

  18. Benchmark analysis of the DeCART MOC code with the VENUS-2 critical experiment

    International Nuclear Information System (INIS)

    Computational benchmarks based on well-defined problems with a complete set of input and a unique solution are often used as a means of verifying the reliability of numerical solutions. VENUS is a widely used MOX benchmark problem for the validation of numerical methods and nuclear data sets. In this paper, the results of benchmarking the DeCART (Deterministic Core Analysis based on Ray Tracing) integral transport code are reported using the OECD/NEA VENUS-2 MOX benchmark problem. Both 2-D and 3-D DeCART calculations were performed and comparisons are reported with measured data, as well as with the results of other benchmark participants. In general the DeCART results agree well with both the experimental data and the results of other participants. (authors)

  19. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, the science of measuring and recording rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of the most important hydrologic variable sets.

  20. BENCHMARKING OF CT FOR PATIENT EXPOSURE OPTIMISATION.

    Science.gov (United States)

    Racine, Damien; Ryckx, Nick; Ba, Alexandre; Ott, Julien G; Bochud, François O; Verdun, Francis R

    2016-06-01

    Patient dose optimisation in computed tomography (CT) should be done using clinically relevant tasks when dealing with image quality assessments. In the present work, low-contrast detectability for an average patient morphology was assessed on 56 CT units, using a model observer applied to images acquired with two specific protocols of an anthropomorphic phantom containing spheres. Images were assessed using the channelised Hotelling observer (CHO) with dense difference of Gaussian channels. The results were computed by performing receiver operating characteristic (ROC) analysis and using the area under the ROC curve (AUC) as a figure of merit. The results showed a small disparity at a volume computed tomography dose index (CTDIvol) of 15 mGy depending on the CT units for the chosen image quality criterion. For 8-mm targets, AUCs were 0.999 ± 0.018 at 20 Hounsfield units (HU) and 0.927 ± 0.054 at 10 HU. For 5-mm targets, AUCs were 0.947 ± 0.059 and 0.702 ± 0.068 at 20 and 10 HU, respectively. The robustness of the CHO opens the way for CT protocol benchmarking and optimisation processes. PMID:26940439
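
    The AUC figure of merit used above can be computed directly from model-observer decision scores with the Mann-Whitney estimator: the AUC is the probability that a randomly chosen signal-present score exceeds a signal-absent one. The scores below are random placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical channelised-observer decision scores (arbitrary units).
scores_absent = rng.normal(loc=0.0, scale=1.0, size=200)    # signal-absent images
scores_present = rng.normal(loc=1.5, scale=1.0, size=200)   # signal-present images

# Mann-Whitney estimate of the area under the ROC curve:
# probability that a signal-present score exceeds a signal-absent score (ties count half).
wins = (scores_present[:, None] > scores_absent[None, :]).sum()
ties = (scores_present[:, None] == scores_absent[None, :]).sum()
auc = (wins + 0.5 * ties) / (scores_present.size * scores_absent.size)

print(f"AUC ~ {auc:.3f}")
```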

  1. Statistical benchmark for BosonSampling

    Science.gov (United States)

    Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas

    2016-03-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state makes it possible to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go well beyond mere bunching or anti-bunching effects.

  2. Application of a nodal collocation approximation for the multidimensional PL equations to the 3D Takeda benchmark problems

    International Nuclear Information System (INIS)

    Highlights: ► The multidimensional PL approximation to the nuclear transport equation is reviewed. ► A nodal collocation method is developed for the spatial discretization of PL equations. ► Advantages of the method are lower dimension and good characteristics of the associated algebraic eigenvalue problem. ► The PL nodal collocation method is implemented into the computer code SHNC. ► The SHNC code is verified with 2D and 3D benchmark eigenvalue problems from Takeda and Ikeda, giving satisfactory results. - Abstract: PL equations are classical approximations to the neutron transport equations, which are obtained by expanding the angular neutron flux in terms of spherical harmonics. These approximations are useful to study the behavior of reactor cores with complex fuel assemblies, for the homogenization of nuclear cross-sections, etc., and most of these applications are in three-dimensional (3D) geometries. In this work, we review the multi-dimensional PL equations and describe a nodal collocation method for the spatial discretization of these equations for arbitrary odd order L, which is based on the expansion of the spatial dependence of the fields in terms of orthonormal Legendre polynomials. The performance of the nodal collocation method is studied by means of obtaining the keff and the stationary power distribution of several 3D benchmark problems. The solutions obtained are compared with a finite element method and a Monte Carlo method.
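
    The key ingredient of the nodal collocation method described above is the expansion of the spatial dependence of the fields in orthonormal Legendre polynomials. The hedged sketch below illustrates only that ingredient, projecting a smooth stand-in flux shape on [-1, 1] onto the first few orthonormal Legendre polynomials and checking the truncation error; it does not implement the PL nodal solver or the SHNC code.

```python
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre_coeffs(f, order):
    """Project f on [-1, 1] onto orthonormal Legendre polynomials P~_n = sqrt((2n+1)/2) P_n."""
    x, w = legendre.leggauss(order + 8)        # Gauss-Legendre quadrature nodes/weights
    coeffs = []
    for n in range(order + 1):
        basis = np.sqrt((2 * n + 1) / 2) * legendre.legval(x, [0] * n + [1])
        coeffs.append(np.sum(w * f(x) * basis))
    return np.array(coeffs)

def reconstruct(coeffs, x):
    """Sum the truncated orthonormal Legendre expansion at the points x."""
    total = np.zeros_like(x)
    for n, c in enumerate(coeffs):
        total += c * np.sqrt((2 * n + 1) / 2) * legendre.legval(x, [0] * n + [1])
    return total

f = lambda x: np.cos(np.pi * x / 2)            # stand-in for a smooth nodal flux shape
xx = np.linspace(-1.0, 1.0, 201)
for order in (1, 3, 5):
    c = orthonormal_legendre_coeffs(f, order)
    err = np.max(np.abs(reconstruct(c, xx) - f(xx)))
    print(f"expansion order {order}: max reconstruction error = {err:.2e}")
```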

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  8. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  10. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    The OECD/NEA Main Steam Line Break (MSLB) Benchmark allows the comparison of state-of-the-art and best-estimate models used to compute reactivity accidents. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in a return to power (RTP) after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. The objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed
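
    Exercise 1 of the benchmark uses point kinetics. As a hedged illustration of what such a model looks like, the sketch below solves the one-delayed-group point kinetics equations for an invented scram; it includes no thermal-hydraulic feedback, so it cannot reproduce the return-to-power behaviour discussed above, and none of the parameters are from the benchmark.

```python
import numpy as np

# One-delayed-group point kinetics with invented parameters (no feedback modelled).
beta, lam, Lambda = 0.0065, 0.08, 2.0e-5   # delayed fraction, decay const (1/s), generation time (s)

def rho(t):
    # Hypothetical reactivity history: a -4 dollar scram inserted at t = 1 s.
    return -4.0 * beta if t >= 1.0 else 0.0

def derivs(t, y):
    n, C = y                               # relative power and precursor concentration
    dn = (rho(t) - beta) / Lambda * n + lam * C
    dC = beta / Lambda * n - lam * C
    return np.array([dn, dC])

# Explicit RK4 with a small step (the small generation time makes the system stiff).
y = np.array([1.0, beta / (lam * Lambda)])  # equilibrium initial condition at n = 1
t, dt = 0.0, 1.0e-4
for _ in range(int(5.0 / dt)):
    k1 = derivs(t, y)
    k2 = derivs(t + dt / 2, y + dt / 2 * k1)
    k3 = derivs(t + dt / 2, y + dt / 2 * k2)
    k4 = derivs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(f"relative power 4 s after scram ~ {y[0]:.4f}")
```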

  11. Benchmarking of neutron production of heavy-ion transport codes

    International Nuclear Information System (INIS)

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  12. RESRAD benchmarking against six radiation exposure pathway models

    International Nuclear Information System (INIS)

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors

  13. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, these correlations can arise due to different cases of a benchmark experiment sharing the same components like fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (keff). The analysis presented here is based on this proposal. (orig.)
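
    A hedged sketch of the sampling idea described above: two hypothetical benchmark cases share the same fuel pellet lot, so the sampled pellet diameter is common to both cases within each Monte Carlo sample, while a second parameter is sampled independently per case. Propagating the samples through a toy linear keff surrogate (standing in for a real transport calculation; the sensitivities and tolerances are invented) shows how shared tolerances induce correlations between the cases.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 5000

# Shared manufacturing tolerance: pellet diameter (cm), same lot used in both cases.
pellet_d = rng.normal(0.80, 0.001, n_samples)
# Case-specific parameters, sampled independently for each benchmark case.
conc_case1 = rng.normal(30.0, 0.3, n_samples)
conc_case2 = rng.normal(30.0, 0.3, n_samples)

def toy_keff(diameter, concentration):
    """Toy linear surrogate for a transport calculation (invented sensitivities)."""
    return 1.000 + 0.5 * (diameter - 0.80) + 0.002 * (concentration - 30.0)

k1 = toy_keff(pellet_d, conc_case1)
k2 = toy_keff(pellet_d, conc_case2)

corr = np.corrcoef(k1, k2)[0, 1]
print(f"sampled keff std: {k1.std():.5f}, correlation between the two cases: {corr:.2f}")
```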

  14. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  15. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    International Nuclear Information System (INIS)

    Highlights: • A set of advanced system thermal hydraulic codes are benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculation effort and presents new findings on the TH of RR that have not been reported before. • LOFA result discrepancies range from 7% to 20%; coolant and peak clad temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes being widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, validation of the applied codes and application of the validated codes to perform comprehensive safety analysis of RR. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  16. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes are benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculation effort and presents new findings on the TH of RR that have not been reported before. • LOFA result discrepancies range from 7% to 20%; coolant and peak clad temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes being widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, validation of the applied codes and application of the validated codes to perform comprehensive safety analysis of RR. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  17. Comparative Analysis of CTF and Trace Thermal-Hydraulic Codes Using OECD/NRC PSBT Benchmark Void Distribution Database

    OpenAIRE

    Avramova, M.; A. Velazquez-Lozada; Rubin, A.

    2013-01-01

    The international OECD/NRC PSBT benchmark has been established to provide a test bed for assessing the capabilities of thermal-hydraulic codes and to encourage advancement in the analysis of fluid flow in rod bundles. The benchmark was based on one of the most valuable databases identified for the thermal-hydraulics modeling developed by NUPEC, Japan. The database includes void fraction and departure from nucleate boiling measurements in a representative PWR fuel assembly. On behalf of the be...

  18. Assessment of a Subchannel Code MATRA for OECD/NRC PSBT Benchmark Exercises

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Hyun; Kim, Seong Jin; Seo, Kyong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    The OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark was organized on the basis of the NUPEC database. The purposes of the benchmark are the encouragement to develop a theoretically based microscopic approach as well as the comparison of currently available computational approaches. The benchmark consists of two separate phases: a void distribution benchmark and a DNB benchmark. Subchannel-grade void distribution data was employed for validation of a subchannel analysis code under steady-state and transient conditions. The DNB benchmark provided subchannel fluid temperature data which can be used to determine the turbulent mixing parameter for a subchannel code. The NUPEC PWR test facility consists of a high-pressure and high-temperature recirculation loop, a cooling loop, and a data recording system. The void fraction was measured by two different methods: a gamma-ray beam CT scanner system was used to determine the distribution of density/void fraction over the subchannel at steady-state flow and to define the subchannel averaged void fraction with an accuracy of ±3%. A multi-beam system was used to measure the chordal averaged subchannel void fraction in the rod bundle with accuracies of ±4% and ±5% for steady-state and transient, respectively. The purpose of this study is to provide analysis results for PSBT benchmark problems for void distribution, subchannel mixing, and DNB, as well as to evaluate the applicability of some mechanistic DNB models to PSBT benchmark data with the aid of subchannel analysis results calculated by the MATRA code.

  19. Assessment of a Subchannel Code MATRA for OECD/NRC PSBT Benchmark Exercises

    International Nuclear Information System (INIS)

    The OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark was organized on the basis of the NUPEC database. The purposes of the benchmark are the encouragement to develop a theoretically based microscopic approach as well as the comparison of currently available computational approaches. The benchmark consists of two separate phases: a void distribution benchmark and a DNB benchmark. Subchannel-grade void distribution data was employed for validation of a subchannel analysis code under steady-state and transient conditions. The DNB benchmark provided subchannel fluid temperature data which can be used to determine the turbulent mixing parameter for a subchannel code. The NUPEC PWR test facility consists of a high-pressure and high-temperature recirculation loop, a cooling loop, and a data recording system. The void fraction was measured by two different methods: a gamma-ray beam CT scanner system was used to determine the distribution of density/void fraction over the subchannel at steady-state flow and to define the subchannel averaged void fraction with an accuracy of ±3%. A multi-beam system was used to measure the chordal averaged subchannel void fraction in the rod bundle with accuracies of ±4% and ±5% for steady-state and transient, respectively. The purpose of this study is to provide analysis results for PSBT benchmark problems for void distribution, subchannel mixing, and DNB, as well as to evaluate the applicability of some mechanistic DNB models to PSBT benchmark data with the aid of subchannel analysis results calculated by the MATRA code.

  20. BENCHMARKING THE ACCURACY OF INERTIAL SENSORS IN CELL PHONES

    OpenAIRE

    An, Bin

    2012-01-01

    Many ubiquitous computing applications rely on data from a cell phone's inertial sensors. Unfortunately, the accuracy of this data is often unknown, which impedes predictive analysis of applications that require high sensor accuracy (e.g., dead reckoning). This work focuses on benchmarking the accuracy of the accelerometers and gyroscopes on a cell phone. The cell phones are attached to a robotic arm, which provides ground truth measurements. The misalignment between the cell phone's and the ...
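
    A hedged sketch of the evaluation step implied above: once the robotic-arm ground truth and the phone's sensor log are time-aligned, accuracy can be summarised per axis by bias and root-mean-square error. The aligned samples below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical time-aligned acceleration samples along one axis (m/s^2):
# ground truth from the robotic arm vs. the phone accelerometer.
truth = np.sin(np.linspace(0, 10, 500))
phone = truth + 0.05 + rng.normal(0, 0.02, truth.size)   # assumed 0.05 m/s^2 bias + noise

error = phone - truth
bias = error.mean()
rmse = np.sqrt(np.mean(error ** 2))
print(f"accelerometer bias = {bias:.3f} m/s^2, RMSE = {rmse:.3f} m/s^2")
```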

  1. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    Energy Technology Data Exchange (ETDEWEB)

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to use, either to transport information more easily throughout the corporation or as a conduit for

  2. Research Reactor Benchmarking Database: Facility Specification and Experimental Data

    International Nuclear Information System (INIS)

    This web publication contains the facility specifications, experiment descriptions, and corresponding experimental data for nine different research reactors covering a wide range of research reactor types, power levels and experimental configurations. Each data set was prepared in order to serve as a stand-alone resource of well documented experimental data, which can subsequently be used in benchmarking and validation of the neutronic and thermal-hydraulic computational methods and tools employed for improved utilization, operation and safety analysis of research reactors

  3. A Generic Environment for Full Automation of Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Kalibera, T.; Bulej, Lubomír; Tůma, P.

    Ilmenau: tranSIT GmbH, 2004, pp. 35-41. ISBN 3-9808628-3-6. [SOQUA 2004. International Workshop on Software Quality /1./. Erfurt (DE), 27.09.2004-30.09.2004] R&D Projects: GA ČR GA201/03/0911 Institutional research plan: CEZ:AV0Z1030915 Keywords: regression benchmarking * CORBA Subject RIV: JD - Computer Applications, Robotics

  4. Energy-efficient Benchmarking for Energy-efficient Software

    OpenAIRE

    Pukhkaiev, Dmytro

    2016-01-01

    With the continuous growth of computing systems, the energy efficiency of their processes becomes ever more important. Different configurations, each implying a different energy efficiency of the system, can be used to perform a given process. A configuration denotes the choice among different hardware and software settings (e.g., CPU frequency, number of threads, the concrete algorithm, etc.). Identifying the most energy-efficient configuration demands benchmarking all ...
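
    A minimal sketch of the exhaustive-benchmarking idea described above. It assumes energy can be read from the Linux RAPL powercap counter, which is an assumption about the measurement setup rather than the thesis' method; the workload and configuration space are hypothetical placeholders:

      # Sketch of exhaustively benchmarking configurations for energy efficiency.
      # Reading the Linux RAPL powercap counter is an assumed measurement setup;
      # the workload and the configuration space are hypothetical placeholders.
      import itertools
      import time

      RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package energy in microjoules

      def read_energy_uj():
          with open(RAPL) as f:
              return int(f.read())

      def run_workload(threads, algorithm):
          """Placeholder for the process under test."""
          time.sleep(0.1)

      def benchmark(configurations):
          results = {}
          for threads, algorithm in configurations:
              e0, t0 = read_energy_uj(), time.time()
              run_workload(threads, algorithm)
              e1, t1 = read_energy_uj(), time.time()
              # Counter wrap-around is ignored for brevity.
              results[(threads, algorithm)] = {"energy_J": (e1 - e0) / 1e6, "time_s": t1 - t0}
          return results

      if __name__ == "__main__":
          configs = list(itertools.product([1, 2, 4, 8], ["quicksort", "mergesort"]))
          best = min(benchmark(configs).items(), key=lambda kv: kv[1]["energy_J"])
          print("most energy-efficient configuration:", best)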

  5. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  6. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  8. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  11. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  12. Pre-evaluation of fusion shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, K. [Hitachi Engineering Co. Ltd., Ibaraki (Japan); Handa, H. [Hitachi Engineering Co. Ltd., Ibaraki (Japan); Konno, C. [Japan Atomic Energy Research Institute, Tokai-mura, Ibaraki, 319-11 (Japan); Maekawa, F. [Japan Atomic Energy Research Institute, Tokai-mura, Ibaraki, 319-11 (Japan); Maekawa, H. [Japan Atomic Energy Research Institute, Tokai-mura, Ibaraki, 319-11 (Japan); Maki, K. [Energy Research Laboratory, Hitachi Ltd., Omika-cho, Hitachi, Ibaraki, 316 (Japan); Yamada, K. [Business Automation Co. Ltd., Toranomon, 1-24-10, Minato-ku, Tokyo, 105 (Japan); Abe, T. [Business Automation Co. Ltd., Toranomon, 1-24-10, Minato-ku, Tokyo, 105 (Japan)

    1995-03-01

    A shielding benchmark experiment is very useful to test the design code and nuclear data for fusion devices. There are many types of benchmark experiment that should be done for fusion shielding problems, but time and budget are limited. Therefore it is important to select and determine the effective experimental configurations by precalculation before the experiment. We did pre-evaluations of three types of shielding benchmark experiment to determine the experimental assembly configurations. The types of experiment discussed are the void effect experiment, the auxiliary shield experiment, and the SCM (superconductive magnet) nuclear heating experiment. These calculations were performed using the two-dimensional discrete ordinates transport code DOT3.5 with a first-collision source prepared by the GRTUNCL code. The group constants used were FUSION-40 (42 neutron groups, 21 photon groups, P5 Legendre expansion), processed from the Japanese Evaluated Nuclear Data Library JENDL-3. All three types of configuration were finally determined taking into consideration detector efficiencies and measurement times. (orig.).

  13. Pre-evaluation of fusion shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, K.; Handa, H. [Hitachi Engineering Company, Ltd., Ibaraki (Japan)]; Konno, C. [Japan Atomic Energy Research Inst., Ibaraki (Japan)] [and others]

    1994-12-31

    A shielding benchmark experiment is very useful to test the design code and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. Therefore it is important to select and determine the effective experimental configurations by precalculation before the experiment. The authors did three types of pre-evaluation to determine the experimental assembly configurations of shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows: dose and nuclear heating were calculated both with and without the void(s), and the minimum size of the void was determined so that the ratio of these two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B₄C, Pb, and W, and the dose around a superconducting magnet (SCM). The thicknesses of B₄C, Pb, and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and the dose distribution in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up material, material compositions of the SCM mock-up were surveyed so as to have nuclear heating properties similar to those of the real SCM composition.
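
    A minimal sketch of the void-size selection criterion described above, with a hypothetical dose function standing in for the DOT3.5/GRTUNCL transport calculations; the numbers are illustrative only:

      # Sketch of the void-size selection criterion: pick the smallest void for which
      # the with-void / without-void dose ratio exceeds the measurement uncertainty.
      # dose_with_void() is a hypothetical stand-in for the DOT3.5 transport results.
      def dose_with_void(void_size_cm):
          """Hypothetical monotone response: dose grows with void size."""
          return 1.0 + 0.04 * void_size_cm

      def minimum_void_size(candidate_sizes_cm, measurement_error=0.10):
          dose_no_void = dose_with_void(0.0)
          for size in sorted(candidate_sizes_cm):
              ratio = dose_with_void(size) / dose_no_void
              if ratio - 1.0 > measurement_error:   # the effect must exceed the measurement error
                  return size
          return None                               # no candidate gives a measurable effect

      if __name__ == "__main__":
          print(minimum_void_size([1.0, 2.0, 3.0, 5.0, 10.0]))   # -> 3.0 with these toy numbers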

  14. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  15. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    The demand for best-estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best-estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactor (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and of the reactor system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the upcoming challenges in defining the Phase III exercises. The main challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled-physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)
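
    As a purely illustrative aside (not the benchmark's prescribed methodology), the basic idea of propagating input uncertainties through a calculation can be sketched as a Monte Carlo loop; the model function and input distributions below are hypothetical stand-ins for a real neutronics or coupled-physics code:

      # Minimal sketch of Monte Carlo uncertainty propagation: sample uncertain inputs,
      # evaluate the model for each sample, and summarize the spread of the output.
      # reactor_model() is a hypothetical stand-in for a coupled multi-physics code.
      import numpy as np

      def reactor_model(xs_perturbation, coolant_temp):
          """Toy response standing in for a real neutronics/thermal-hydraulics calculation."""
          return 1.0 + 0.05 * xs_perturbation - 0.001 * (coolant_temp - 300.0)

      def propagate(n_samples=10_000, rng=None):
          rng = rng or np.random.default_rng(0)
          # Assumed (illustrative) input uncertainties.
          xs = rng.normal(loc=0.0, scale=0.02, size=n_samples)     # cross-section perturbation
          temp = rng.uniform(290.0, 310.0, size=n_samples)         # coolant temperature [K]
          output = reactor_model(xs, temp)
          return output.mean(), output.std()

      if __name__ == "__main__":
          mean, sd = propagate()
          print(f"output = {mean:.5f} +/- {sd:.5f}")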

  16. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
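
    A minimal sketch of how break-type inhomogeneities of the kind described above can be inserted into a synthetic monthly temperature series; the parameters are illustrative and are not the HOME benchmark settings:

      # Sketch: insert random break-type inhomogeneities (with normally distributed
      # sizes) into a synthetic monthly temperature series, mimicking the kind of
      # validation data described above. Parameters are illustrative, not HOME values.
      import numpy as np

      def insert_breaks(series, n_breaks=3, break_sd=0.8, rng=None):
          """Return a copy of `series` with step changes at random breakpoints."""
          rng = rng or np.random.default_rng()
          broken = np.array(series, dtype=float, copy=True)
          breakpoints = rng.choice(len(series), size=n_breaks, replace=False)
          for bp in breakpoints:
              broken[bp:] += rng.normal(0.0, break_sd)   # step change from month bp onwards
          return broken, np.sort(breakpoints)

      if __name__ == "__main__":
          rng = np.random.default_rng(42)
          months = np.arange(600)                                    # 50 years of monthly data
          homogeneous = 10 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0.0, 0.5, 600)
          inhomogeneous, bps = insert_breaks(homogeneous, rng=rng)
          print("breakpoints at months:", bps)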

  17. Benchmarking and testing the "Sea Level Equation

    Science.gov (United States)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and

  18. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolutions and high signal-to-noise ratios. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  19. Coded nanoscale self-assembly

    Indian Academy of Sciences (India)

    Prathyush Samineni; Debabrata Goswami

    2008-12-01

    We demonstrate coded self-assembly in nanostructures using a code seeded at the component level through computer simulations. Defects or cavities occur in all natural assembly processes including crystallization, and our simulations capture this essential aspect under surface minimization constraints for self-assembly. Our bottom-up approach to nanostructures would provide a new dimension towards nanofabrication and a better understanding of defects and the crystallization process.

  20. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...
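
    A toy sketch of a stride-detecting read-ahead policy in the spirit of what the paper describes; it is not the FreeBSD NFS server code, and the class and thresholds are invented for illustration:

      # Toy sketch of a stride-detecting read-ahead policy: if the last few request
      # offsets differ by a constant stride, prefetch the next blocks along that
      # stride instead of assuming purely sequential access. This is illustrative
      # only and is not the FreeBSD NFS server implementation.
      from collections import deque

      class StrideReadAhead:
          def __init__(self, history=4, max_prefetch=2):
              self.offsets = deque(maxlen=history)
              self.max_prefetch = max_prefetch

          def on_read(self, offset):
              """Record a read request offset and return offsets worth prefetching."""
              self.offsets.append(offset)
              if len(self.offsets) < 3:
                  return []
              recent = list(self.offsets)
              diffs = [b - a for a, b in zip(recent, recent[1:])]
              stride = diffs[-1]
              # Prefetch only when recent requests show a stable, non-zero stride.
              if stride != 0 and all(d == stride for d in diffs):
                  return [offset + stride * (i + 1) for i in range(self.max_prefetch)]
              return []

      if __name__ == "__main__":
          ra = StrideReadAhead()
          for off in (0, 8192, 16384, 24576):        # a stride of 8 KiB
              print(off, "->", ra.on_read(off))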