WorldWideScience

Sample records for benchmark parameter scan

  1. BENCHMARKING MOBILE LASER SCANNING SYSTEMS USING A PERMANENT TEST FIELD

    Directory of Open Access Journals (Sweden)

    H. Kaartinen

    2012-07-01

    The objective of the study was to benchmark the geometric accuracy of mobile laser scanning (MLS) systems using a permanent test field under good GNSS coverage. Mobile laser scanning, also called mobile terrestrial laser scanning, is a rapidly developing area of laser scanning in which laser scanners, GNSS and an IMU are mounted onboard a moving vehicle. MLS can be considered to fill the gap between airborne and terrestrial laser scanning. Data provided by MLS systems can be characterized by the following technical parameters: (a) point density in the range of 100-1000 points per m2 at 10 m distance, (b) distance measurement accuracy of 2-5 cm, and (c) operational scanning range from 1 to 100 m. Several commercial systems (e.g. from Riegl, Optech and others) and some research mobile laser scanning systems surveyed the test field using predefined driving speeds and directions. The acquired georeferenced point clouds were delivered for analysis. The geometric accuracy of the point clouds was determined using the reference targets that could be identified and measured from the point cloud. Results show that in good GNSS conditions most systems can reach an accuracy of 2 cm both in plan and elevation. The accuracy of a low-cost system, whose price is less than a tenth of that of the other systems, appears to be within a few centimetres, at least in ground elevation determination. Inaccuracies in the relative orientation of the instruments lead to systematic errors and, when several scanners are used, to multiple reproductions of the objects. Mobile laser scanning systems can collect high-density point cloud data with high accuracy. A permanent test field is well suited to verifying and comparing the performance of different mobile laser scanning systems. The accuracy of the relative orientation between the mapping instruments needs more attention: for example, the object may appear double in the point cloud due to imperfect boresight calibration between two scanners.

  2. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  3. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  4. A benchmark test of computer codes for calculating average resonance parameters

    International Nuclear Information System (INIS)

    Ribon, P.; Thompson, A.

    1983-01-01

    A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)

  5. DRAGON solutions to the 3D transport benchmark over a range in parameter space

    International Nuclear Information System (INIS)

    Martin, Nicolas; Hebert, Alain; Marleau, Guy

    2010-01-01

    DRAGON solutions to the 'NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space' are discussed in this paper. A description of the benchmark is first provided, followed by a detailed review of the different computational models used in the lattice code DRAGON. Two numerical methods were selected for generating the required quantities for the 729 configurations of this benchmark. First, S_N calculations were performed using fully symmetric angular quadratures and high-order diamond differencing for spatial discretization. To compare the S_N results with those of another deterministic method, the method of characteristics (MoC) was also considered for this benchmark. Comparisons between the reference solutions, S_N and MoC results illustrate the advantages and drawbacks of each method for this 3-D transport problem.

  6. Benchmarking Parameter-free AMaLGaM on Functions With and Without Noise

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J. Grahl; D. Thierens (Dirk)

    2013-01-01

    We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black-box optimization benchmarking (BBOB) framework.

  7. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best-fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model, and we propose an approach for multi-parameter scanning in TI media.
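
    As a rough illustration of the scanning idea (not the authors' code), the following Python sketch scans a misfit surface over (η, ϕ) using traveltime coefficients that are computed once and reused for every trial pair. The coefficient arrays and the form of the expansion are simplified, hypothetical stand-ins for the perturbation coefficients described above.

        # Hypothetical coefficient-based (eta, phi) scan; all inputs are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_obs = 200                               # number of traveltime picks
        t0 = rng.uniform(1.0, 2.0, n_obs)         # background (elliptic) traveltimes
        c_eta = rng.uniform(-0.10, 0.10, n_obs)   # precomputed coefficient for eta
        c_phi = rng.uniform(-0.05, 0.05, n_obs)   # precomputed azimuthal coefficient

        def predicted(eta, phi):
            # The coefficients are independent of (eta, phi), so the scan below
            # needs no repeated traveltime modelling.
            return t0 + c_eta * eta + c_phi * np.cos(2.0 * phi)

        t_obs = predicted(0.15, np.radians(30.0)) + rng.normal(0.0, 1e-3, n_obs)

        etas = np.linspace(0.0, 0.3, 61)
        phis = np.radians(np.linspace(0.0, 180.0, 91))
        misfit = np.array([[np.sum((predicted(e, p) - t_obs) ** 2) for p in phis]
                           for e in etas])
        ie, ip = np.unravel_index(np.argmin(misfit), misfit.shape)
        print(f"best eta = {etas[ie]:.3f}, best phi = {np.degrees(phis[ip]):.1f} deg")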

  8. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil; Alkhalifah, Tariq Ali

    2014-01-01

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best-fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model, and we propose an approach for multi-parameter scanning in TI media.

  9. Codification of scan path parameters and development of perimeter scan strategies for 3D bowl-shaped laser forming

    Science.gov (United States)

    Tavakoli, A.; Naeini, H. Moslemi; Roohi, Amir H.; Gollo, M. Hoseinpour; Shahabad, Sh. Imani

    2018-01-01

    In the 3D laser forming process, developing an appropriate laser scan pattern for producing specimens with high quality and uniformity is critical. This study presents principles for developing scan paths. Seven scan path parameters are considered: (1) combined linear or curved path; (2) type of combined linear path; (3) order of scan sequences; (4) position of the start point in each scan; (5) continuous or discontinuous scan path; (6) direction of scan path; and (7) angular arrangement of combined linear scan paths. Based on these path parameters, ten combined linear scan patterns are presented. Numerical simulations show that the continuous hexagonal scan pattern, scanning from the outer to the inner path, is optimal. In addition, the position of the start point and the angular arrangement of scan paths are observed to be the most influential path parameters. Further experiments show that four scan sequences, by creating symmetric conditions, enhance the height and uniformity of the bowl-shaped products. Finally, the optimized hexagonal pattern was compared with a similar circular one. In the hexagonal scan path, the distortion and the standard deviation relative to the edge height of the formed specimen are very low, and the edge height, despite the shorter scan path, increases significantly compared to the circular scan path. As a result, the four-sequence hexagonal scan pattern is proposed as the optimal perimeter scan path for producing bowl-shaped products.
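
    A perimeter scan path of the hexagonal type discussed above can be generated as a continuous, outer-to-inner sequence of laser spot positions. The sketch below is purely illustrative; the radii, point spacing and units are assumed, not taken from the paper.

        # Generate concentric hexagonal scan paths, ordered outer-to-inner as a
        # single continuous path (illustrative parameters only).
        import numpy as np

        def hexagon(radius, n_pts_per_side=10):
            angles = np.radians(np.arange(0, 360, 60))
            corners = np.c_[radius * np.cos(angles), radius * np.sin(angles)]
            corners = np.vstack([corners, corners[0]])   # close the loop
            pts = []
            for a, b in zip(corners[:-1], corners[1:]):
                t = np.linspace(0, 1, n_pts_per_side, endpoint=False)[:, None]
                pts.append(a + t * (b - a))              # points along one side
            return np.vstack(pts)

        # Continuous outer-to-inner perimeter scan over three concentric hexagons.
        path = np.vstack([hexagon(r) for r in (30.0, 20.0, 10.0)])  # radii in mm
        print(path.shape)  # (x, y) laser spot positions in scan order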

  10. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-11-19

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
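
    The following is a generic Metropolis-Hastings sketch of the kind of Markov Chain Monte Carlo parameter scan described above, with a toy Gaussian likelihood standing in for the combined experimental and theoretical constraints used by the authors; parameter names and values are invented for illustration.

        # Generic Metropolis-Hastings scan over a two-dimensional toy
        # parameter space (stand-in for flavour-violating mass-matrix entries).
        import numpy as np

        rng = np.random.default_rng(1)

        def log_like(theta):
            # Hypothetical constraints pulling two parameters toward small
            # but non-zero favoured values.
            return -0.5 * np.sum(((theta - np.array([0.05, -0.02])) / 0.03) ** 2)

        theta = np.zeros(2)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.02, size=2)   # random-walk proposal
            if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
                theta = prop                                # accept the move
            chain.append(theta)
        chain = np.array(chain)
        print("posterior means:", chain[5000:].mean(axis=0))  # discard burn-in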

  11. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially considering the inaccuracy of the real motion speed. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocity of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic concept of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least-squares criterion is then obtained by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
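
    Since the Doppler shift is linear in the platform velocity for known look directions, a least-squares estimate can be obtained directly. The sketch below uses a simplified geometry and an assumed wavelength, not the paper's actual model.

        # Least-squares recovery of platform velocity from Doppler measurements
        # at known look directions (simplified circular-scan geometry).
        import numpy as np

        rng = np.random.default_rng(2)
        wavelength = 0.03                       # m (X-band, assumed)
        v_true = np.array([120.0, 5.0, -2.0])   # m/s platform velocity

        az = np.radians(np.linspace(0, 360, 64, endpoint=False))  # scan azimuths
        dep = np.radians(35.0)                                    # depression angle
        # Unit look vectors swept out by the circular scan.
        U = np.c_[np.cos(dep) * np.cos(az), np.cos(dep) * np.sin(az),
                  -np.sin(dep) * np.ones_like(az)]

        f_d = (2.0 / wavelength) * (U @ v_true) + rng.normal(0.0, 5.0, az.size)

        # Doppler is linear in velocity: f_d = (2/lambda) U v, so solve by LSQ.
        v_est, *_ = np.linalg.lstsq((2.0 / wavelength) * U, f_d, rcond=None)
        print("estimated velocity:", v_est)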

  12. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander

    2016-01-01

    This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed and recommended options for the ...

  13. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  14. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several...... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters...... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different
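
    The central quality figure in such a benchmark is recall against exact nearest neighbors. A brute-force reference computation (illustrative only, not the ANN-Benchmarks interface itself) might look as follows.

        # Recall@k of an approximate result set against exact neighbors.
        import numpy as np

        def recall_at_k(exact_ids, approx_ids):
            # exact_ids, approx_ids: (n_queries, k) index arrays.
            hits = [len(set(e) & set(a)) for e, a in zip(exact_ids, approx_ids)]
            return np.mean(hits) / exact_ids.shape[1]

        rng = np.random.default_rng(3)
        data, queries, k = rng.normal(size=(1000, 32)), rng.normal(size=(50, 32)), 10
        d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(-1)
        exact = np.argsort(d2, axis=1)[:, :k]        # brute-force ground truth
        approx = exact.copy(); approx[:, -1] = 0     # pretend the ANN missed one
        print(f"recall@{k} = {recall_at_k(exact, approx):.2f}")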

  15. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
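
    A minimal timing harness in the same spirit, exercising a scan, an aggregation and an index access, can be sketched as follows. sqlite3 is used purely for illustration; the schema and queries are not XMarq's.

        # Time basic DBMS-style operations on a toy table.
        import sqlite3, time, random

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE lineitem (orderkey INT, qty REAL)")
        con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                        [(random.randint(1, 1000), random.random())
                         for _ in range(100_000)])
        con.execute("CREATE INDEX idx_ok ON lineitem(orderkey)")

        for label, sql in [
            ("full scan",    "SELECT COUNT(*) FROM lineitem WHERE qty > 0.5"),
            ("aggregation",  "SELECT orderkey, SUM(qty) FROM lineitem GROUP BY orderkey"),
            ("index access", "SELECT * FROM lineitem WHERE orderkey = 42"),
        ]:
            t0 = time.perf_counter()
            con.execute(sql).fetchall()
            print(f"{label}: {time.perf_counter() - t0:.4f} s")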

  16. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  17. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil

    2016-10-12

    The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor's series expansion of the travel-time solution (of the eikonal equation) as a function of the parameter η and the azimuth angle ϕ. The accuracy of the travel-time expansion is enhanced by the use of the Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best-fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel-time sensitivity to the parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained from travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method, valid for a stack of horizontal transversely isotropic layers with arbitrarily oriented symmetry axes, to convert the effective parameters to interval layer values.

  18. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria has also been proposed for the identification of influential input parameters (IP) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in the predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour, which may have numerical origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  19. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  20. Efficient cascaded parameter scan approach for studying top-off safety in storage rings

    Directory of Open Access Journals (Sweden)

    Yongjun Li

    2011-03-01

    We introduce a new algorithm, which we call the cascaded parameter scan method, to efficiently carry out the scan over magnet parameters in the safety analysis for top-off injection in synchrotron radiation storage rings. In top-off safety analysis, one must track particles populating phase space through a beam line containing magnets and apertures and clearly demonstrate that, for all possible magnet settings and errors, all particles are lost on scrapers within the properly shielded region. In the usual approach, if one considers m magnets and scans each magnet through n setpoints, then one must carry out n^{m} tracking runs. In the cascaded parameter scan method, the number of tracking runs is reduced to n×m. This reduction from exponential to linear dependence on the number of setpoints n greatly reduces the required computation time and allows one to more densely populate phase space and to increase the number n of setpoints scanned for each magnet.
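
    A schematic of the cascade on a toy one-dimensional "beam line" follows: the ensemble surviving magnet i, unioned over its n setpoints, is passed on to magnet i+1, so each magnet is scanned only once. The transfer map, aperture and binning below are invented for illustration and are not the paper's model.

        # Cascaded scan: n*m ensemble trackings instead of n^m combinations.
        import numpy as np

        rng = np.random.default_rng(4)
        particles = rng.normal(0.0, 5.0, 5000)   # toy 1-D phase-space population (mm)
        aperture = 25.0                           # scraper half-aperture, mm (assumed)
        setpoints = np.linspace(0.8, 1.2, 9)      # n = 9 settings per magnet
        n_magnets = 6                             # m = 6 magnets

        for _ in range(n_magnets):                # m passes ...
            survivors = []
            for s in setpoints:                   # ... of n ensemble trackings each
                kicked = particles * s + (s - 1.0) * 20.0   # toy transfer map
                survivors.append(kicked[np.abs(kicked) < aperture])  # scraper cut
            # Union over setpoints, coarsely binned to keep the cascaded
            # phase-space population bounded.
            particles = np.unique(np.round(np.concatenate(survivors), 2))

        print("samples surviving to the end of the line:", particles.size)
        # In the real analysis one must show that, with the actual maps and
        # apertures, no particle survives beyond the shielded region.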

  1. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.
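
    The Shanks transform used here is simple to state: from three consecutive partial sums it forms S = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n), which often converges much faster than the raw sequence. A self-contained numerical check follows, on a toy series rather than the traveltime expansion itself.

        # Shanks transform of three consecutive partial sums.
        import numpy as np

        def shanks(a_prev, a_curr, a_next):
            denom = a_next + a_prev - 2.0 * a_curr
            return (a_next * a_prev - a_curr ** 2) / denom

        # Toy check on a slowly converging series: ln(2) = 1 - 1/2 + 1/3 - ...
        partial = np.cumsum([(-1) ** k / (k + 1) for k in range(6)])
        print("3-term partial sum :", partial[2])        # 0.833...
        print("Shanks estimate    :", shanks(*partial[:3]))  # 0.700
        print("true value         :", np.log(2.0))       # 0.693...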

  2. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-03-21

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.

  3. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of the various models submitted for this benchmark study is discussed. The prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission.

  4. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  5. Optimal CT scanning parameters for commonly used tumor ablation applicators

    International Nuclear Information System (INIS)

    Eltorai, Adam E.M.; Baird, Grayson L.; Monu, Nicholas; Wolf, Farrah; Seidler, Michael; Collins, Scott; Kim, Jeomsoon; Dupuy, Damian E.

    2017-01-01

    Highlights: • This study aimed to determine optimal scanning parameters for commonly used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. This study aimed to determine optimal scanning parameters for commonly used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. CT scans were performed at 440 mA with 5 mm slice thickness while varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly and quantitatively scored the images for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently to or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm and probes at their disposal. Optimum combinations for each probe are provided.

  6. Optimal CT scanning parameters for commonly used tumor ablation applicators

    Energy Technology Data Exchange (ETDEWEB)

    Eltorai, Adam E.M. [Warren Alpert Medical School of Brown University (United States); Baird, Grayson L. [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Lifespan Biostatistics Core (United States); Rhode Island Hospital (United States); Monu, Nicholas; Wolf, Farrah; Seidler, Michael [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States); Collins, Scott [Department of Diagnostic Imaging (United States); Rhode Island Hospital (United States); Kim, Jeomsoon [Department of Medical Physics (United States); Rhode Island Hospital (United States); Dupuy, Damian E., E-mail: ddupuy@comcast.net [Department of Diagnostic Imaging (United States); Warren Alpert Medical School of Brown University (United States); Rhode Island Hospital (United States)

    2017-04-15

    Highlights: • This study aimed to determine optimal scanning parameters for commonly used tumor ablation applicators. • The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance. • Optimum combinations for each probe are provided. - Abstract: Purpose: CT beam-hardening artifact can make visualization of the tumor margin and its relationship to the ablation applicator tip challenging. This study aimed to determine optimal scanning parameters for commonly used applicators. Materials and methods: Applicators were placed in ex-vivo cow livers with implanted mock tumors, surrounded by bolus gel. CT scans were performed at 440 mA with 5 mm slice thickness while varying kVp, scan time, ASiR, scan type, pitch, and reconstruction algorithm. Four radiologists blindly and quantitatively scored the images for image quality and artifact. Results: A significant relationship between probe, kVp level, ASiR level, and reconstruction algorithm was observed for both image artifact and image quality (both p < 0.0001). Specifically, certain combinations of kVp, ASiR, and reconstruction algorithm yield better images than other combinations. In particular, one probe performed equivalently to or better than any competing probe considered here, regardless of the kVp, ASiR, and reconstruction algorithm combination. Conclusion: The findings illustrate the overall interaction of the effects of kVp, ASiR, and reconstruction algorithm within and between probes, so that radiologists may easily reference optimal imaging performance for the combinations of kVp, ASiR, reconstruction algorithm and probes at their disposal. Optimum combinations for each probe are provided.

  7. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  8. Whole-heart 3D late gadolinium-enhanced MR imaging. Investigation of optimal scan parameters and clinical usefulness

    International Nuclear Information System (INIS)

    Yorimitsu, Misako; Yokoyama, Kenichi; Nitatori, Toshiaki; Yoshino, Hideaki; Isono, Sachiko; Kuhara, Shigehide

    2012-01-01

    Whole-heart 3-dimensional (3D) late gadolinium-enhanced magnetic resonance (MR) imaging (WH-LGE) uses respiratory gating combined with acquisition of 3D data for the entire heart in a single scan, which permits reconstruction of any plane with high resolution. We investigated the optimal scan parameters and compared WH-LGE with the conventional scanning method. We employed inversion-recovery 3D fast field echo on a 1.5-tesla system with the following scan parameters: repetition time (TR), 6.6 ms; echo time (TE), 2.5 ms; number of segments, 2; parallel imaging factor, 1.8; matrix size, 128 x 256; field of view (FOV), 320 x 320 mm; and acquisition slice thickness, 3 mm (reconstruction slice thickness, 1.5 mm). Five healthy volunteers underwent scanning during free breathing with real-time motion correction, from which we determined the optimal scan parameters. We then used those parameters to scan 25 patients with myocardial infarction to compare scan time and image quality between the WH-LGE and conventional 3D breath-holding methods (slice thickness, 10 mm; matrix size, 128 x 256). Results in volunteers showed optimal scan parameters of a 12° flip angle, fat suppression turned off, and interleaved ordering. In clinical cases, scan times did not differ significantly. The sharpness of the margins of normal myocardium at the apex of the heart and the contrast between enhanced and non-enhanced myocardium improved significantly with WH-LGE. WH-LGE yields high-resolution images during free breathing and is considered useful for accurately estimating the area and transmural extent of myocardial infarction. (author)

  9. Parameter scan for the CLIC Damping rings under the influence of intrabeam scattering

    OpenAIRE

    Antoniou, F; Martini, M; Papaphilippou, Y; Vivoli, A

    2010-01-01

    Due to the high bunch density, the output emittances of the CLIC Damping Rings (DR) are strongly dominated by the effect of Intrabeam Scattering (IBS). In an attempt to optimize the ring design, the benchmarking of the multiparticle tracking code SIRE against the classical IBS formalisms and approximations is first considered. The scaling of the steady-state emittances and IBS growth rates is also studied with respect to several ring parameters including energy, bunch charge and wiggler charac...

  10. Parameter scan for the CLIC Damping rings under the influence of intrabeam scattering

    CERN Document Server

    Antoniou, F; Papaphilippou, Y; Vivoli, A

    2010-01-01

    Due to the high bunch density, the output emittances of the CLIC Damping Rings (DR) are strongly dominated by the effect of Intrabeam Scattering (IBS). In an attempt to optimize the ring design, the benchmarking of the multiparticle tracking code SIRE against the classical IBS formalisms and approximations is first considered. The scaling of the steady-state emittances and IBS growth rates is also studied with respect to several ring parameters including energy, bunch charge and wiggler characteristics.

  11. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for the automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input for the machine-learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for the functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  12. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT procedures in terms of radiation dose and acquisition parameters, paediatric cranial CT data were retrospectively collected during a 1-year period in 3 different hospitals of the same country. A dose tracking system was used to automatically gather the information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT procedures were collected. Although the median dose of each hospital was below the national and international diagnostic reference levels (DRL) for all age categories, statistically significant differences were observed between age groups and hospitals. The benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.
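
    The core of such dose benchmarking is a stratified comparison of median doses against diagnostic reference levels. A minimal sketch follows; the column names and DRL values are placeholders, not those of the study.

        # Median DLP per hospital and age group, compared against assumed DRLs.
        import pandas as pd

        drl_dlp = {"0-1 y": 300, "1-5 y": 400, "5-10 y": 500, ">10 y": 650}  # mGy*cm, assumed

        scans = pd.DataFrame({
            "hospital":  ["A", "A", "B", "B", "C", "C"],
            "age_group": ["0-1 y", "1-5 y", "0-1 y", ">10 y", "5-10 y", ">10 y"],
            "dlp":       [250, 380, 310, 600, 455, 700],
        })

        summary = scans.groupby(["hospital", "age_group"])["dlp"].median().reset_index()
        summary["drl"] = summary["age_group"].map(drl_dlp)
        summary["above_drl"] = summary["dlp"] > summary["drl"]
        print(summary)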

  13. Reliability of capturing foot parameters using digital scanning and the neutral suspension casting technique

    Science.gov (United States)

    2011-01-01

    Background: A clinical study was conducted to determine the intra- and inter-rater reliability of digital scanning and the neutral suspension casting technique for measuring six foot parameters. The neutral suspension casting technique is a commonly utilised method for obtaining a negative impression of the foot prior to orthotic fabrication. Digital scanning offers an alternative to the traditional plaster of Paris techniques. Methods: Twenty-one healthy participants volunteered to take part in the study. Six casts and six digital scans were obtained from each participant by two raters of differing clinical experience. The foot parameters chosen for investigation were cast length (mm), forefoot width (mm), rearfoot width (mm), medial arch height (mm), lateral arch height (mm) and forefoot-to-rearfoot alignment (degrees). Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were calculated to determine the intra- and inter-rater reliability. Measurement error was assessed through the calculation of the standard error of measurement (SEM) and the smallest real difference (SRD). Results: ICC values for all foot parameters using digital scanning ranged between 0.81-0.99 for both intra- and inter-rater reliability. For the neutral suspension casting technique, inter-rater reliability values ranged from 0.57-0.99, with intra-rater reliability values ranging from 0.36-0.99 for rater 1 and 0.49-0.99 for rater 2. Conclusions: The findings of this study indicate that digital scanning is a reliable technique, irrespective of clinical experience, with reduced measurement variability in all foot parameters investigated when compared to neutral suspension casting. PMID:21375757
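
    The reliability statistics used here are standard and straightforward to reproduce. The sketch below computes an ICC from a subjects-by-raters matrix, then SEM = SD·sqrt(1 - ICC) and SRD = 1.96·sqrt(2)·SEM, on synthetic data; the ICC form assumed is the two-way random-effects, absolute-agreement, single-measure variant ICC(2,1), which the abstract does not specify.

        # ICC(2,1), SEM and SRD on a synthetic subjects-by-raters matrix.
        import numpy as np

        def icc_2_1(X):
            # X: (n_subjects, k_raters) matrix of measurements.
            n, k = X.shape
            grand = X.mean()
            ms_r = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
            ms_c = n * np.sum((X.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
            sse = np.sum((X - X.mean(axis=1, keepdims=True)
                            - X.mean(axis=0, keepdims=True) + grand) ** 2)
            ms_e = sse / ((n - 1) * (k - 1))                             # error
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        rng = np.random.default_rng(5)
        true_len = rng.normal(250, 10, 21)              # 21 feet, cast length in mm
        X = np.c_[true_len + rng.normal(0, 1.5, 21),    # rater 1
                  true_len + rng.normal(0, 1.5, 21)]    # rater 2

        icc = icc_2_1(X)
        sem = X.std(ddof=1) * np.sqrt(1 - icc)   # SD of all measurements (one convention)
        srd = 1.96 * np.sqrt(2) * sem            # smallest real difference
        print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f} mm, SRD = {srd:.2f} mm")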

  14. The quality of reconstructed 3D images in multidetector-row helical CT: experimental study involving scan parameters

    International Nuclear Information System (INIS)

    Shin, Ji Hoon; Lee, Ho Kyu; Choi, Choong Gon; Suh, Dae Chul; Lim, Tae Hwan; Kang, Weechang

    2002-01-01

    To determine which multidetector-row helical CT scanning technique provides the best-quality reconstructed 3D images, and to assess differences in image quality according to the levels of the scanning parameters used. Four objects with different surfaces and contours were scanned using multidetector-row helical CT at three detector-row collimations (1.25, 2.50, 5.00 mm), two pitches (3.0, 6.0), and three degrees of overlap between the reconstructed slices (0%, 25%, 50%). Reconstructed 3D images of the resulting 72 data sets were produced using volumetric rendering. The 72 images were graded on a scale from 1 (worst) to 5 (best) for each of four rating criteria, giving a mean score for each criterion and an overall mean score. Statistical analysis was used to assess differences in image quality according to scanning parameter levels. The mean score for each rating criterion, and the overall mean score, varied significantly with the scanning parameter levels used. Significant differences were found between all levels of detector-row collimation and pitch, while for the degree of overlap between reconstructed slices, significant differences were found between 0% and 50% overlap at all parameter levels, and between 25% and 50% overlap in overall accuracy and overall mean score. Among the 18 scanning sequences, the highest score (4.94) was achieved with 1.25 mm detector-row collimation, 3.0 pitch, and 50% overlap between reconstructed slices. Comparison of the quality of reconstructed 3D images obtained using multidetector-row helical CT and various scanning techniques indicated that the 1.25 mm collimation, 3.0 pitch, 50% overlap sequence was best. Quality improved as detector-row collimation decreased, as pitch was reduced from 6.0 to 3.0, and as overlap between reconstructed slices increased.

  15. A study of metal artifacts on MR imaging. Evaluation of scanning parameters

    International Nuclear Information System (INIS)

    Yamashiro, Mitsuaki

    1999-01-01

    The purpose of this study was to evaluate scanning parameters on MR imaging for reducing metal artifacts, using a phantom study. Regarding the relationship between the shape of metal artifacts on MR images and the scanning direction, metal artifacts on sagittal images, perpendicular to the static magnetic field direction, showed a round shape, while metal artifacts on both axial and coronal images, parallel to the static magnetic field direction, showed an oval shape in the direction of the X-axis. In spin echo sequences, metal artifacts were largest on coronal images, followed by axial and then sagittal images. In gradient echo sequences, metal artifacts were largest on axial images, followed by coronal and then sagittal images. The best scanning plane for reducing metal artifacts was perpendicular to the static magnetic field direction. Among scanning sequences, metal artifacts were largest in gradient echo sequences, followed by the T2-weighted spin echo sequence and then the proton-density-weighted and T1-weighted spin echo sequences. A large flip angle substantially increased metal artifacts on both axial and coronal images in gradient echo sequences, so a small flip angle was useful for reducing metal artifacts on these images. The influence of flip angle on metal artifacts was smaller for sagittal images, perpendicular to the static magnetic field direction, than for coronal and axial images in gradient echo sequences. These results suggest that evaluating scanning parameters for metal artifacts on MR imaging is useful for reducing metal artifacts on MR images. (K.H.)

  16. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

    Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor (K_eff), natural boron concentration (C_B) and relative power distribution (K_q) obtained by use of the code package are presented. (authors). 5 refs., 3 tabs

  17. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we ...

  18. Key performance indicators for stroke from the Ministry of Health of Brazil: benchmarking and indicator parameters.

    Science.gov (United States)

    Lange, Marcos C; Braga, Gabriel Pereira; Nóvak, Edison M; Harger, Rodrigo; Felippe, Maria Justina Dalla Bernardina; Canever, Mariana; Dall'Asta, Isabella; Rauen, Jordana; Bazan, Rodrigo; Zetola, Viviane

    2017-06-01

    All 16 KPIs were analyzed, including the percentage of patients admitted to the stroke unit, venous thromboembolism prophylaxis in the first 48 hours after admission, pneumonia and in-hospital mortality due to stroke, and hospital discharge on antithrombotic therapy in patients without a cardioembolic mechanism. Both centers admitted over 80% of their patients to the stroke unit. The incidence of venous thromboembolism prophylaxis was > 85%; that of in-hospital pneumonia was 70%. Our results suggest using the parameters of all 16 KPIs required by the Ministry of Health of Brazil, together with the present results for the two stroke units, for future benchmarking.

  19. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. Many computer codes and modeling techniques exist today. However, there are many unknowns affecting prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracies of the parameters of interest are discussed in this report.

  20. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  1. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  2. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  3. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The automated matching procedure proposed herein shows an overall accuracy of 97%. Method-based analyses, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to the results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
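
    A minimal version of such a restricted nearest-neighbour matching, with a one-to-one constraint and the derived matching, omission and commission rates, can be sketched as follows; the distance threshold and positions are synthetic, not the study's data.

        # Restricted nearest-neighbour matching of detections to inventory trees.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)
        ref = rng.uniform(0, 100, (60, 2))                   # inventory positions (m)
        det = ref[:40] + rng.normal(0, 0.5, (40, 2))         # detections near true trees
        det = np.vstack([det, rng.uniform(0, 100, (8, 2))])  # plus false detections

        tree = cKDTree(ref)
        dist, idx = tree.query(det, distance_upper_bound=2.0)  # restricted NN search
        valid = np.isfinite(dist)
        # Enforce one-to-one matching: closest detections claim reference trees first.
        order = np.argsort(dist[valid])
        matched_ref, matched_det = set(), set()
        for d_i in np.where(valid)[0][order]:
            if idx[d_i] not in matched_ref:
                matched_ref.add(idx[d_i]); matched_det.add(d_i)

        n_match = len(matched_ref)
        print(f"matched: {n_match}/{len(ref)}  "
              f"omission: {1 - n_match/len(ref):.0%}  "
              f"commission: {1 - n_match/len(det):.0%}")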

  4. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  5. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    ......sparsity-regularized (SR) reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  6. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing for the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out by using a huge amount of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure, which has been developed for fast reactor analyses in Japan. Through the present benchmark testing using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction for neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  7. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  8. Benchmarking of copper(II) LFMM parameters for studying amyloid-β peptides.

    Science.gov (United States)

    Mutter, Shaun T; Deeth, Robert J; Turner, Matthew; Platts, James A

    2018-04-01

    Ligand field molecular mechanics (LFMM) parameters have been benchmarked for copper(II) bound to the amyloid-β 1-16 peptide fragment. Several density functional theory (DFT) optimised small test models, representative of different possible copper coordination modes, have been used to test the accuracy of the LFMM copper bond lengths and angles, resulting in errors typically less than 0.1 Å and 5°. Ligand field molecular dynamics (LFMD) simulations have been carried out on the copper-bound amyloid-β 1-16 peptide and snapshots extracted from the subsequent trajectory. Snapshots have been optimised using DFT and the semi-empirical PM7 method, resulting in good agreement with the LFMM-calculated geometry. Analysis of substructures within snapshots shows that the larger contribution to the geometrical difference, as measured by RMSD, lies within the peptide backbone, arising from differences between DFT and AMBER, while the copper coordination sphere is reproduced well by LFMM. PM7 performs excellently against LFMM, with an average RMSD of 0.2 Å over 21 tested snapshots. Further analysis of the LFMD trajectory shows that copper bond lengths and angles have only small deviations from average values, with the exception of a carbonyl moiety from the N-terminus, which can act as a weakly bound fifth ligand.
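    The RMSD comparison described above can be expressed compactly; this is a generic sketch, not the study's analysis script, and it assumes the two (N, 3) coordinate arrays have already been superimposed (e.g. by a Kabsch alignment).

    ```python
    import numpy as np

    def rmsd(coords_a, coords_b):
        """Root-mean-square deviation between two superimposed (N, 3)
        coordinate arrays, e.g. an LFMM snapshot vs. its DFT re-optimization."""
        diff = np.asarray(coords_a) - np.asarray(coords_b)
        return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
    ```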

  9. Use of the Benchmarking System for Operational Waste from WWER Reactors

    International Nuclear Information System (INIS)

    2017-06-01

    The focus of this publication is on benchmarking low and intermediate level waste generated and managed during the normal operating life of a WWER, and it identifies and defines the benchmarking parameters selected for WWER type reactors. It includes a brief discussion on why those parameters were selected and their intended benchmarking benefits, and provides a description of the database and graphical user interface selected, designed and developed, including how to use it for data input and data analysis. The CD-ROM accompanying this publication provides an overview of practices at WWER sites, which were to a large extent prepared using the WWER BMS.

  10. Dose responses in a normoxic polymethacrylic acid gel dosimeter using optimal CT scanning parameters

    Energy Technology Data Exchange (ETDEWEB)

    Cho, K.H. [Department of Radiation Oncology, College of Medicine, Soonchunhyang University, Bucheon 420-767 (Korea, Republic of); Department of Medical Physics, Kyonggi University, Suwon 443-760 (Korea, Republic of); Cho, S.J. [Department of Radiation Oncology, College of Medicine, Eulji University, Seongnam 461-713 (Korea, Republic of); Lee, S. [Department of Radiation Oncology, College of Medicine, Korea University, Seoul 130-701 (Korea, Republic of); Lee, S.H. [Cheil General Hospital and Women's Healthcare Center, Kwandong University College of Medicine, Seoul 100-380 (Korea, Republic of); Min, C.K.; Kim, Y.H.; Moon, S.K.; Kim, E.S.; Chang, A.R. [Department of Radiation Oncology, College of Medicine, Soonchunhyang University, Bucheon 420-767 (Korea, Republic of); Kwon, S.I., E-mail: sikwon@kyonggi.ac.kr [Department of Medical Physics, Kyonggi University, Suwon 443-760 (Korea, Republic of)

    2012-05-21

    The dosimetric characteristics of normoxic polymethacrylic acid gels are investigated using optimal CT scanning parameters and the possibility of their clinical application is also considered. The effects of CT scanning parameters (tube voltage, tube current, scan time, slice thickness, field of view, and reconstruction algorithm) are experimentally investigated to determine the optimal parameters for minimizing the amount of noise in images obtained using normoxic polymethacrylic acid gel. In addition, the dose sensitivity, dose response, accuracy, and reproducibility of the normoxic polymethacrylic acid gel are evaluated. CT images are obtained using a head phantom that is fabricated for clinical applications. In addition, IMRT treatment planning is performed using a Tomotherapy radiation treatment planning system. A program for analyzing the results is produced using Visual C. A comparison between the treatment planning and the CT images of irradiated gels is performed. The dose sensitivity is found to be 2.41±0.04 HGy⁻¹. The accuracies of dose evaluation at doses of 2 Gy and 4 Gy are 3.0% and 2.6%, respectively, and their reproducibilities are 2.0% and 2.1%, respectively. In the comparison of gel and Tomotherapy planning, the pass rate of the γ-index, based on the reference values of a dose error of 3% and a DTA of 3 mm, is 93.7%.

  11. Dose responses in a normoxic polymethacrylic acid gel dosimeter using optimal CT scanning parameters

    Science.gov (United States)

    Cho, K. H.; Cho, S. J.; Lee, S.; Lee, S. H.; Min, C. K.; Kim, Y. H.; Moon, S. K.; Kim, E. S.; Chang, A. R.; Kwon, S. I.

    2012-05-01

    The dosimetric characteristics of normoxic polymethacrylic acid gels are investigated using optimal CT scanning parameters and the possibility of their clinical application is also considered. The effects of CT scanning parameters (tube voltage, tube current, scan time, slice thickness, field of view, and reconstruction algorithm) are experimentally investigated to determine the optimal parameters for minimizing the amount of noise in images obtained using normoxic polymethacrylic acid gel. In addition, the dose sensitivity, dose response, accuracy, and reproducibility of the normoxic polymethacrylic acid gel are evaluated. CT images are obtained using a head phantom that is fabricated for clinical applications. In addition, IMRT treatment planning is performed using a Tomotherapy radiation treatment planning system. A program for analyzing the results is produced using Visual C. A comparison between the treatment planning and the CT images of irradiated gels is performed. The dose sensitivity is found to be 2.41±0.04 HGy⁻¹. The accuracies of dose evaluation at doses of 2 Gy and 4 Gy are 3.0% and 2.6%, respectively, and their reproducibilities are 2.0% and 2.1%, respectively. In the comparison of gel and Tomotherapy planning, the pass rate of the γ-index, based on the reference values of a dose error of 3% and a DTA of 3 mm, is 93.7%.
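    The γ-index evaluation quoted above (3% dose difference, 3 mm distance-to-agreement) can be illustrated with a minimal one-dimensional, globally normalized implementation; the profile names and the brute-force search are assumptions, not details of the study.

    ```python
    import numpy as np

    def gamma_1d(ref_dose, eval_dose, positions, dd=0.03, dta=3.0):
        """Global 1D gamma index: for each reference point, minimize the
        combined dose-difference / distance-to-agreement metric over the
        evaluated profile.  dd is fractional (3%), dta in mm."""
        ref_dose = np.asarray(ref_dose, dtype=float)
        eval_dose = np.asarray(eval_dose, dtype=float)
        positions = np.asarray(positions, dtype=float)
        d_norm = dd * ref_dose.max()            # global normalization
        gamma = np.empty_like(ref_dose)
        for i, (x, d) in enumerate(zip(positions, ref_dose)):
            dose_term = ((eval_dose - d) / d_norm) ** 2
            dist_term = ((positions - x) / dta) ** 2
            gamma[i] = np.sqrt((dose_term + dist_term).min())
        return gamma

    # pass rate in percent: (gamma_1d(ref, eva, z) <= 1).mean() * 100
    ```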

  12. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using the codes of the participating countries. This benchmark is made on the basis of assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system which was prepared for the HTGR development of the individual country. The values of the most important parameter, k_eff, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.).

  13. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
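    The benchmark-selection idea can be sketched as follows: cluster the kinematic feature vectors of many parameter points and keep, per cluster, the point closest to the centroid. K-means and Euclidean distance stand in for the paper's multi-dimensional test statistic, and the feature matrix is a placeholder.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def choose_benchmarks(features, n_benchmarks=12, seed=0):
        """features: (n_points, n_observables) kinematic summaries per
        parameter point, e.g. moments of m_hh and angular distributions.
        Returns indices of one representative point per cluster."""
        km = KMeans(n_clusters=n_benchmarks, n_init=10,
                    random_state=seed).fit(features)
        reps = []
        for c in range(n_benchmarks):
            members = np.where(km.labels_ == c)[0]
            d = np.linalg.norm(features[members] - km.cluster_centers_[c],
                               axis=1)
            reps.append(int(members[np.argmin(d)]))
        return reps
    ```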

  14. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    International Nuclear Information System (INIS)

    Uddin, M.N.; Sarker, M.M.; Khan, M.J.H.; Islam, S.M.A.

    2009-01-01

    The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, for the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated using the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 served as standard benchmarks for testing nuclear data files and have also been selected for this analysis. The integral parameters of the said lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. It was found that in most cases the values of the integral parameters show good agreement with the experimental and MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE, and it was found that the group constants are identical, with very insignificant differences. Therefore, this analysis reflects the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking the integral parameters of TRX and BAPL lattices, and can also support further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  15. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  16. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with the experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  17. Key performance indicators for stroke from the Ministry of Health of Brazil: benchmarking and indicator parameters

    Directory of Open Access Journals (Sweden)

    Marcos C Lange

    Full Text Available ABSTRACT The present study aimed to analyze the stroke units in two centers for the key performance indicators (KPIs) required by the Ministry of Health in Brazil. Methods All 16 KPIs were analyzed, including the percentage of patients admitted to the stroke unit, venous thromboembolism prophylaxis in the first 48 hours after admission, pneumonia and hospital mortality due to stroke, and hospital discharge on antithrombotic therapy in patients without a cardioembolic mechanism. Results Both centers admitted over 80% of the patients in their stroke unit. The rate of venous thromboembolism prophylaxis was > 85%, that of in-hospital pneumonia was 70%. Conclusion Our results suggest using the parameters of all of the 16 KPIs required by the Ministry of Health of Brazil, and the present results for the two stroke units, for future benchmarking.

  18. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  19. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
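    Of the risk-adjustment methods listed above, indirect standardization is the simplest to illustrate: a standardized infection ratio compares observed infections with the count expected if benchmark rates applied to the local exposure. The stratum names and rates below are invented for illustration.

    ```python
    def standardized_infection_ratio(observed, device_days, benchmark_rates):
        """SIR = observed / expected, where the expected count applies
        benchmark infection rates (per 1000 device-days) to the local
        device-day exposure in each stratum (e.g. ICU type)."""
        expected = sum(benchmark_rates[s] * device_days[s] / 1000.0
                       for s in device_days)
        return observed / expected

    # Illustrative numbers only:
    # sir = standardized_infection_ratio(
    #     observed=6,
    #     device_days={"medical_icu": 1800, "surgical_icu": 1200},
    #     benchmark_rates={"medical_icu": 2.0, "surgical_icu": 1.5})
    ```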

  20. Optimization of imaging parameters for SPECT scans of [99mTc]TRODAT-1 using Taguchi analysis.

    Directory of Open Access Journals (Sweden)

    Cheng-Kai Huang

    Full Text Available Parkinson's disease (PD) is a neurodegenerative disease characterized by progressive loss of dopaminergic neurons in the basal ganglia. Single photon emission computed tomography (SPECT) scans using [99mTc]TRODAT-1 can image dopamine transporters and provide valuable diagnostic information for PD. In this study, we optimized the scanning parameters for [99mTc]TRODAT-1/SPECT using the Taguchi analysis to improve image quality. SPECT scans were performed on forty-five healthy volunteers according to an L9 orthogonal array. Three parameters were considered, including the injection activity, uptake duration, and acquisition time per projection. The signal-to-noise ratio (SNR) was calculated from the striatum/occipital activity ratio as an image quality index. Ten healthy subjects and fifteen PD patients were used to verify the optimal parameters. The estimated optimal parameters were 962 MBq for [99mTc]TRODAT-1 injection, 260 min for uptake duration, and 60 s/projection for data acquisition. The uptake duration and time per projection were the two dominant factors, with F-values of 18.638 (38%) and 25.933 (53%), respectively. Strong cross interactions existed between the injection activity/uptake duration and injection activity/time per projection. Therefore, under the consideration of as low as reasonably achievable (ALARA) for radiation protection, we can decrease the injection activity to 740 MBq. The image quality remains almost the same for clinical applications.
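    In Taguchi analysis, the S/N ratio for a larger-the-better response (such as the striatum/occipital ratio used here) is conventionally S/N = −10·log10(mean(1/y²)); a minimal sketch follows, with illustrative inputs only.

    ```python
    import numpy as np

    def taguchi_sn_larger_is_better(y):
        """Larger-the-better Taguchi signal-to-noise ratio in dB:
        S/N = -10 * log10( mean( 1 / y_i^2 ) ), where y_i are the
        responses (e.g. striatum/occipital ratios) for one L9 run."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # e.g. taguchi_sn_larger_is_better([2.1, 2.3, 2.2, 2.4])
    ```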

  1. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  2. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  3. In-core fuel management benchmarks for PHWRs

    International Nuclear Information System (INIS)

    1996-06-01

    Under its in-core fuel management activities, the IAEA set up two co-ordinated research programmes (CRPs) on complete in-core fuel management code packages. At a consultant meeting in November 1988 the outline of the CRP on in-core fuel management benchmarks for PHWRs was prepared; three benchmarks were specified and the corresponding parameters were defined. At the first research co-ordination meeting in December 1990, seven more benchmarks were specified. The objective of this TECDOC is to provide reference cases for the verification of code packages used for reactor physics and fuel management of PHWRs. 91 refs, figs, tabs

  4. Scanning the parameter space of collapsing rotating thin shells

    Science.gov (United States)

    Rocha, Jorge V.; Santarelli, Raphael

    2018-06-01

    We present results of a comprehensive study of collapsing and bouncing thin shells with rotation, framing it in the context of the weak cosmic censorship conjecture. The analysis is based on a formalism developed specifically for higher odd dimensions that is able to describe the dynamics of collapsing rotating shells exactly. We analyse and classify a plethora of shell trajectories in asymptotically flat spacetimes. The parameters varied include the shell’s mass and angular momentum, its radial velocity at infinity, the (linear) equation-of-state parameter and the spacetime dimensionality. We find that plunges of rotating shells into black holes never produce naked singularities, as long as the matter shell obeys the weak energy condition, and so respects cosmic censorship. This applies to collapses of dust shells starting from rest or with a finite velocity at infinity. Not even shells with a negative isotropic pressure component (i.e. tension) lead to the formation of naked singularities, as long as the weak energy condition is satisfied. Endowing the shells with a positive isotropic pressure component allows for the existence of bouncing trajectories satisfying the dominant energy condition and fully contained outside rotating black holes. Otherwise any turning point occurs always inside the horizon. These results are based on strong numerical evidence from scans of numerous sections in the large parameter space available to these collapsing shells. The generalisation of the radial equation of motion to a polytropic equation-of-state for the matter shell is also included in an appendix.

  5. Ultracool dwarf benchmarks with Gaia primaries

    Science.gov (United States)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  6. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    Science.gov (United States)

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): Δt = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in x-direction and 19.3 m/s in y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration is 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (Δt = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (Δt = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
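    The structure of such a BDT model — summing layer-switch, spot-switch and spot-delivery times — can be sketched as below, using the measured values above as defaults. Spill refills, the maximum charge per spill, and extraction-time limits are deliberately omitted, so this is a simplified illustration rather than the published model.

    ```python
    def beam_delivery_time(layers, layer_switch=1.91, magnet_prep=1.93e-3,
                           v_x=5.9, v_y=19.3, spill_rate=8.7):
        """layers: list of layers, each a list of spots; a spot is a dict
        with 'x', 'y' (m) and 'mu' (MU).  Spot-switch time is set by the
        slower scanning-magnet axis, floored by the magnet preparation and
        verification time; delivery time is MU over the spill rate (MU/s)."""
        t = 0.0
        for layer in layers:
            t += layer_switch
            prev = None
            for spot in layer:
                if prev is not None:
                    travel = max(abs(spot["x"] - prev["x"]) / v_x,
                                 abs(spot["y"] - prev["y"]) / v_y)
                    t += max(travel, magnet_prep)
                t += spot["mu"] / spill_rate
                prev = spot
        return t
    ```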

  7. Theory-motivated benchmark models and superpartners at the Fermilab Tevatron

    International Nuclear Information System (INIS)

    Kane, G.L.; Nelson, Brent D.; Wang Liantao; Wang, Ting T.; Lykken, J.; Mrenna, Stephen

    2003-01-01

    Recently published benchmark models have contained rather heavy superpartners. To test the robustness of this result, several benchmark models have been constructed based on theoretically well-motivated approaches, particularly string-based ones. These include variations on anomaly- and gauge-mediated models, as well as gravity mediation. The resulting spectra often have light gauginos that are produced in significant quantities at the Fermilab Tevatron collider, or will be at a 500 GeV linear collider. The signatures also provide interesting challenges for the CERN LHC. In addition, these models are capable of accounting for electroweak symmetry breaking with less severe cancellations among soft supersymmetry breaking parameters than previous benchmark models

  8. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA that has been observed for uniform scanning proton beams needs to be evaluated. This means that the relationship between the effect of input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique

  9. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA that has been observed for uniform scanning proton beams needs to be evaluated. This means that the relationship between the effect of input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique
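    A simple way to extract the "proton range" compared above is the depth at which the distal PDD falls to a fixed fraction of its maximum (R80 is a common choice, assumed here; the study's exact range definition is not stated in the abstract).

    ```python
    import numpy as np

    def distal_r80(depths, pdd):
        """Depth at which the PDD drops to 80% of its maximum on the
        distal side of the Bragg peak, by linear interpolation."""
        depths = np.asarray(depths, dtype=float)
        pdd = np.asarray(pdd, dtype=float)
        i_peak = int(np.argmax(pdd))
        distal_d, distal_p = depths[i_peak:], pdd[i_peak:]
        level = 0.8 * pdd.max()
        # distal dose decreases with depth; reverse so xp is increasing
        return float(np.interp(level, distal_p[::-1], distal_d[::-1]))

    # range shift: distal_r80(z, pdd_gate) - distal_r80(z, pdd_fluka)
    ```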

  10. Benchmark calculation of nuclear design code for HCLWR

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Saji, Etsuro; Gakuhari, Kazuhiko; Akie, Hiroshi; Takano, Hideki; Ishiguro, Yukio.

    1986-01-01

    In the calculation of the lattice cell for High Conversion Light Water Reactors, large differences in nuclear design parameters appear between the results obtained by various methods and nuclear data libraries. The validity of the calculation can be verified by critical experiments. Benchmark calculation is also an efficient way to estimate the validity over a wide range of lattice parameters and burnup, since not many measured data are available. The benchmark calculations were done by JAERI and MAPI, using SRAC and WIMS-E, respectively. The problem covered a wide range of lattice parameters, i.e., from the tight lattice to the current PWR lattice. The comparison was made on the effective multiplication factor, conversion ratio, and reaction rate of each nuclide, including burnup and void effects. The difference between the results is largest at the tightest lattice, but even for that lattice the difference in the effective multiplication factor is only 1.4%. The main cause of the difference is the neutron absorption rate of U-238 in the resonance energy region. The differences in other nuclear design parameters and their causes were also identified. (author)

  11. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    Science.gov (United States)

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.

  12. The Medical Library Association Benchmarking Network: development and implementation.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.

  13. Evaluation of neutron thermalization parameters and benchmark reactor calculations using a synthetic scattering function for molecular gases

    International Nuclear Information System (INIS)

    Gillete, V.H.; Patino, N.E.; Granada, J.E.; Mayer, R.E.

    1988-01-01

    Using a synthetic scattering function which describes the interaction of neutrons with molecular gases, we provide analytical expressions for the zero- and first-order scattering kernels, σ₀(E₀→E) and σ₁(E₀→E), and for the total cross section σ₀(E₀). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H₂O, D₂O, C₆H₆ and (CH₂)ₙ at room temperature. Comparison of these values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H₂O with 47 thermal groups at 300 K and performed some benchmark calculations (²³⁵U, ²³⁹Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations. (author) [pt]

  14. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises for using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  15. (Invited) Microreactors for Characterization and Benchmarking of Photocatalysts

    DEFF Research Database (Denmark)

    Vesborg, Peter Christian Kjærgaard; Dionigi, Fabio; Trimarco, Daniel Bøndergaard

    2015-01-01

    In the field of photocatalysis the batch-nature of the typical benchmarking experiment makes it very laborious to obtain good kinetic data as a function of parameters such as illumination wavelength, irradiance, catalyst temperature, reactant composition, etc. Microreactors with on-line mass...

  16. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)
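    With exam-level exports from a dose-tracking system, the benchmarking step reduces to comparing per-hospital medians against age-stratified DRLs. The column names and DRL table below are hypothetical.

    ```python
    import pandas as pd

    def benchmark_dlp(df, drl_by_age):
        """df: one row per exam with columns 'hospital', 'age_group', 'dlp'.
        Returns the median DLP per hospital and age group, flagged against
        the diagnostic reference level for that age group."""
        med = df.groupby(["hospital", "age_group"])["dlp"].median().reset_index()
        med["drl"] = med["age_group"].map(drl_by_age)
        med["above_drl"] = med["dlp"] > med["drl"]
        return med

    # e.g. benchmark_dlp(exams, {"0-1y": 300, "1-5y": 470, "5-10y": 620})
    ```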

  17. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM, termed holistic quality management in Indonesian), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes against others in order to obtain information that can help the organization improve its performance.

  18. Multislice helical CT (MSCT) for mid-facial trauma: optimization of parameters for scanning and reconstruction

    International Nuclear Information System (INIS)

    Dammert, S.; Funke, M.; Obernauer, S.; Grabbe, E.; Merten, H.A.

    2002-01-01

    Purpose: To determine the optimal scan parameters in multislice helical CT (MSCT) of the facial bone complex for both axial scanning and multiplanar reconstructions. Material and Methods: An anthropomorphic skull phantom was examined with a MSCT. Axial scans were performed with continuously increasing collimations (4 x 1.25 - 4 x 2.5 mm), tube currents (20 - 200 mA) and table speeds (3.75 mm/rot. and 7.5 mm/rot.). Multiplanar reconstructions in coronal and parasagittal planes with different reconstruction increments and slice thicknesses were evaluated in terms of image noise, contour artifacts and visualisation of anatomical structures. Results: The best image quality was obtained with a collimation of 4 x 1.25 mm and a table speed of 3.75 mm/rot. A reconstruction increment of 0.6 mm achieved the best time-to-image-quality relation. With these parameters the bone structures were depicted in an optimal way without artifacts. The tube current could be reduced to 50 mA without significant loss of image quality. The optimized protocol was used for regular routine examinations in patients with facial trauma (n = 66). Conclusions: Low-dose MSCT using thin collimation, low table speed and small reconstruction increments provides excellent data for both axial images and multiplanar reconstructions in patients with facial trauma. An additional examination in coronal orientation is therefore no longer necessary. (orig.) [de]

  19. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  20. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. Polynomial friction pendulum isolators (PFPIs) for seismic performance control of benchmark highway bridge

    Science.gov (United States)

    Saha, Arijit; Saha, Purnachandra; Patro, Sanjaya Kumar

    2017-10-01

    The seismic response of a benchmark highway bridge isolated with passive polynomial friction pendulum isolators (PFPIs) is investigated under six bidirectional ground motion records. The benchmark study is based on a lumped mass finite-element model of the 91/5 highway overcrossing located in Southern California. The PFPI system possesses two important parameters: one is horizontal flexibility and the other is energy-absorbing capacity through friction. The evaluation criteria of the benchmark bridge are analyzed considering two parameters, the time period of the isolator and the coefficient of friction of the isolation surface. The results of the numerical study are compared with those obtained from the traditional friction pendulum system (FPS). Dual design performance of the PFPI system suppressed the displacement and acceleration response of the benchmark highway bridge. The dual design hysteresis loop of the PFPI system is the main advantage over the linear hysteresis loop of the FPS. The numerical result indicates that the seismic performance of the PFPI system is better than that of the traditional FPS isolated system. Further, it is observed that variations of the isolation time period and coefficient of friction of the FPS and PFPI systems have a significant effect on the peak responses of the benchmark highway bridge.

  2. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking operational processes of different competitive businesses are compared to improve your own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth of the number of benchmark surveys in the energy intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  3. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  4. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease of the management response time through daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take into consideration the pollutant load in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption but the energy consumption per pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
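    The estimation step can be sketched as follows: interpolate the pollutant load between the roughly fortnightly lab measurements to obtain a daily specific-energy KPI. This is a simplified illustration; the EOS methodology additionally manages the uncertainty of the estimated parameters.

    ```python
    import numpy as np

    def daily_energy_kpi(days, kwh_per_day, sample_days, cod_load_samples):
        """Daily energy KPI (kWh per kg COD), with the COD load linearly
        interpolated between the ~14-day laboratory measurements."""
        cod_est = np.interp(days, sample_days, cod_load_samples)
        return np.asarray(kwh_per_day, dtype=float) / cod_est

    # e.g. daily_energy_kpi(np.arange(29), kwh, [0, 14, 28],
    #                       [950.0, 1100.0, 1020.0])
    ```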

  5. Full parameter scan of the Zee model: exploring Higgs lepton flavor violation

    Energy Technology Data Exchange (ETDEWEB)

    Herrero-García, Juan [ARC Center of Excellence for Particle Physics at the Terascale, University of Adelaide,Adelaide, SA 5005 (Australia); Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden); Ohlsson, Tommy; Riad, Stella; Wirén, Jens [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden)

    2017-04-21

    We study the general Zee model, which includes an extra Higgs scalar doublet and a new singly-charged scalar singlet. Neutrino masses are generated at one-loop level, and in order to describe leptonic mixing, both the Standard Model and the extra Higgs scalar doublets need to couple to leptons (in a type-III two-Higgs-doublet model), which necessarily generates large lepton flavor violating signals, also in Higgs decays. Imposing all relevant phenomenological constraints and performing a full numerical scan of the parameter space, we find that both normal and inverted neutrino mass orderings can be fitted, although the latter is disfavored with respect to the former. In fact, inverted ordering can only be accommodated if θ₂₃ turns out to be in the first octant. A branching ratio for h→τμ of up to 10⁻² is allowed, but it could be as low as 10⁻⁶. In addition, if future expected sensitivities of τ→μγ are achieved, normal ordering can be almost completely tested. Also, μe conversion is expected to probe large parts of the parameter space, excluding inverted ordering completely if no signal is observed. Furthermore, non-standard neutrino interactions are found to be smaller than 10⁻⁶, which is well below future experimental sensitivity. Finally, the results of our scan indicate that the masses of the additional scalars have to be below 2.5 TeV, and typically they are lower than that and therefore within the reach of the LHC and future colliders.
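
    The scan itself follows a standard recipe: sample the free parameters, apply the constraints, and keep the surviving points. A schematic Python sketch (the ranges and the constraint function are placeholders, not the actual Zee-model expressions):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        # Toy 2-D slice of the parameter space: scalar mass and one
        # Yukawa-like coupling (log-uniform magnitude, random sign).
        mass_tev = rng.uniform(0.1, 5.0, n)
        coupling = 10.0 ** rng.uniform(-6.0, 0.0, n) * rng.choice([-1.0, 1.0], n)

        def passes_constraints(m, y):
            """Placeholder cuts standing in for the real phenomenological ones
            (neutrino fit, LFV limits, collider bounds, ...)."""
            return (m < 2.5) & (np.abs(y) < 0.5)

        keep = passes_constraints(mass_tev, coupling)
        print(f"{keep.sum()} of {n} sampled points survive the cuts")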

  6. Cross sections, benchmarks, etc.: What is data testing all about

    International Nuclear Information System (INIS)

    Wagschal, J.; Yeivin, Y.

    1985-01-01

    In order to determine the consistency of two distinct measurements of a physical quantity, the discrepancy d between the two should be compared with its own standard deviation, σ = √(σ₁² + σ₂²). To properly test a given cross-section library by a set of benchmark (integral) measurements, the quantity corresponding to (d/σ)² is the quadratic form d†C⁻¹d. Here d is the vector whose components are the discrepancies between the calculated values of the integral parameters and their corresponding measured values, and C is the uncertainty matrix of these discrepancies. This quadratic form is the only true measure of the joint consistency of the library and benchmarks. On the other hand, the very matrix C is essentially all one needs to adjust the library by the benchmarks. Therefore, any argument against adjustment simultaneously disqualifies all serious attempts to test cross-section libraries against integral benchmarks
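
    The quadratic form translates directly into code; a short numpy sketch with an invented three-benchmark example (C must be the full covariance of the discrepancies, correlations included):

        import numpy as np
        from scipy import stats

        def consistency_chi2(calculated, measured, cov):
            """chi2 = d†C⁻¹d, with d the calculated-minus-measured vector and
            C the uncertainty matrix of d; p-value with len(d) degrees of freedom."""
            d = np.asarray(calculated, float) - np.asarray(measured, float)
            chi2 = float(d @ np.linalg.solve(np.asarray(cov, float), d))
            return chi2, stats.chi2.sf(chi2, df=len(d))

        # Toy example: three integral benchmarks with correlated uncertainties.
        cov = [[0.04, 0.01, 0.00],
               [0.01, 0.09, 0.00],
               [0.00, 0.00, 0.01]]
        print(consistency_chi2([1.002, 0.995, 1.010], [1.0, 1.0, 1.0], cov))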

  7. Influence of scan duration on the estimation of pharmacokinetic parameters for breast lesions: a study based on CAIPIRINHA-Dixon-TWIST-VIBE technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Wen; Zhao, Bin; Wang, Guangbin; Wang, Cuiyan [Shandong University, Department of MR Imaging, Shandong Medical Imaging Research Institute, Jinan, Shandong (China); Liu, Hui [Siemens Healthcare, MR Collaborations NE Asia, Shanghai (China)

    2015-04-01

    To evaluate the influence of scan duration on pharmacokinetic parameters and their performance in differentiating benign from malignant breast lesions. Dynamic breast imaging was performed on a 3.0-T MR system using a prototype CAIPIRINHA-Dixon-TWIST-VIBE (CDT-VIBE) sequence with a temporal resolution of 11.9 s. Enrolled in the study were 53 women with 55 lesions (26 benign and 29 malignant). Pharmacokinetic parameters (Ktrans, ve, kep and iAUC) were calculated for various scan durations from 1 to 7 min after injection of contrast medium using the Tofts model. Ktrans, kep and ve calculated from the 1-min dataset were significantly different from those calculated from the other datasets. In benign lesions, Ktrans, kep and ve were significantly different only between 1 min and 2 min (corrected P < 0.05), but in malignant lesions there were significant differences for any of the comparisons up to 6 min vs. 7 min (corrected P < 0.05). There were no significant differences in AUCs for any of the parameters (P > 0.05). In breast dynamic contrast-enhanced MRI the scan duration has a significant impact on pharmacokinetic parameters, but the diagnostic ability may not be significantly affected. A scan duration of 5 min after injection of contrast medium may be sufficient for calculation of Tofts model pharmacokinetic parameters. (orig.)
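
    The duration effect can be mimicked numerically: fit the standard Tofts model to a tissue curve truncated at increasing scan durations and watch the fitted Ktrans stabilise. A hedged Python sketch with a synthetic arterial input function and noise (none of the values are from the study):

        import numpy as np
        from scipy.optimize import curve_fit

        dt = 11.9                                     # temporal resolution [s]
        t = np.arange(0.0, 7 * 60.0, dt)              # full 7-min acquisition grid
        aif = 5.0 * (t / 60.0) * np.exp(-t / 80.0)    # toy arterial input function

        def tofts_ct(ktrans, kep):
            """C_t(t) = Ktrans * conv(C_p, exp(-kep t)) on the grid (per-second rates)."""
            return ktrans * np.convolve(aif, np.exp(-kep * t))[: len(t)] * dt

        rng = np.random.default_rng(1)
        data = tofts_ct(0.12 / 60.0, 0.50 / 60.0) + rng.normal(0.0, 0.02, len(t))

        for minutes in (1, 2, 5, 7):                  # refit on truncated durations
            n = int(minutes * 60.0 / dt)
            (kt, _), _ = curve_fit(lambda _x, a, b: tofts_ct(a, b)[:n],
                                   t[:n], data[:n], p0=(1e-3, 5e-3),
                                   bounds=(0.0, 0.1))
            print(f"{minutes} min scan: Ktrans = {kt * 60.0:.3f} /min")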

  8. Influence of scan duration on the estimation of pharmacokinetic parameters for breast lesions: a study based on CAIPIRINHA-Dixon-TWIST-VIBE technique

    International Nuclear Information System (INIS)

    Hao, Wen; Zhao, Bin; Wang, Guangbin; Wang, Cuiyan; Liu, Hui

    2015-01-01

    To evaluate the influence of scan duration on pharmacokinetic parameters and their performance in differentiating benign from malignant breast lesions. Dynamic breast imaging was performed on a 3.0-T MR system using a prototype CAIPIRINHA-Dixon-TWIST-VIBE (CDT-VIBE) sequence with a temporal resolution of 11.9 s. Enrolled in the study were 53 women with 55 lesions (26 benign and 29 malignant). Pharmacokinetic parameters (Ktrans, ve, kep and iAUC) were calculated for various scan durations from 1 to 7 min after injection of contrast medium using the Tofts model. Ktrans, kep and ve calculated from the 1-min dataset were significantly different from those calculated from the other datasets. In benign lesions, Ktrans, kep and ve were significantly different only between 1 min and 2 min (corrected P < 0.05), but in malignant lesions there were significant differences for any of the comparisons up to 6 min vs. 7 min (corrected P < 0.05). There were no significant differences in AUCs for any of the parameters (P > 0.05). In breast dynamic contrast-enhanced MRI the scan duration has a significant impact on pharmacokinetic parameters, but the diagnostic ability may not be significantly affected. A scan duration of 5 min after injection of contrast medium may be sufficient for calculation of Tofts model pharmacokinetic parameters. (orig.)

  9. The Medical Library Association Benchmarking Network: development and implementation*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702

  10. Extraction of electron beam dose parameters from EBT2 film data scored in a mini phantom.

    Science.gov (United States)

    O'Reilly, Dedri; Smit, Cobus J L; du Plessis, Freek C P

    2013-09-01

    Quality assurance of medical linear accelerators includes dosimetric parameter measurement of therapeutic electron beams, e.g. the depth of the 80% relative dose (R₈₀). This parameter must be within a tolerance of 0.2 cm of the declared value. Cumbersome water tank measurements can be regarded as a benchmark for measuring electron depth dose curves. A mini-phantom was designed and built in which a strip of GAFCHROMIC® EBT2 film could be encased tightly for electron beam depth dose measurement. Depth dose data were measured for ELEKTA SL25 MLC, ELEKTA Precise, and ELEKTA Synergy (Elekta Oncology Systems, Crawley, UK) machines. The electron beam energy range was between 4 and 22 MeV among the machines. A 10 × 10 cm² electron applicator with 95 cm source-surface distance was used on all the machines. 24 h after irradiation, the EBT2 film strips were scanned on a Canon CanoScan N670U scanner. Afterwards, the data were analysed with in-house developed software that performed optical density to dose conversion and optimal fitting of the PDD data to de-noise the raw data. From the PDD data, R₈₀ values were solved for and compared with acceptance values. A series of tests was also carried out to validate the use of the scanner for film dosimetry. These tests are presented in this study. It was found that this method of R₈₀ evaluation was reliable, with good agreement with benchmark water tank measurements using a commercial parallel-plate ionization chamber as the radiation detector. The EBT2 film data yielded R₈₀ values that were on average 0.06 cm different from benchmark water-tank-measured R₈₀ values.
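
    The R₈₀ extraction step reduces to interpolating the distal edge of the normalised depth dose curve. A minimal Python sketch with a toy PDD (the published work used its own de-noising fit):

        import numpy as np

        def r80_from_pdd(depth_cm, dose):
            """Normalise the PDD, then interpolate its distal falling edge to 80%."""
            dose = np.asarray(dose, float) / np.max(dose)
            i_max = int(np.argmax(dose))
            distal_d = dose[i_max:]
            distal_z = np.asarray(depth_cm, float)[i_max:]
            # np.interp needs increasing x, so flip the (falling) distal curve.
            return float(np.interp(0.8, distal_d[::-1], distal_z[::-1]))

        depth = np.linspace(0.0, 6.0, 61)             # cm
        pdd = np.exp(-((depth - 1.8) ** 2) / 2.2)     # toy 'measured' curve
        print(f"R80 = {r80_from_pdd(depth, pdd):.2f} cm")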

  11. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  12. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
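
    The correlation screening in such a study is easy to reproduce in outline. A Python sketch with the report's sample count and input dimension but an invented response (Sobol' indices need a dedicated sampling design, so only Pearson/Spearman are shown):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        X = rng.uniform(size=(300, 17))               # 300 samples, 17 inputs
        y = 2.0 * X[:, 0] - 0.5 * X[:, 3] ** 2 + rng.normal(0.0, 0.05, 300)

        for j in range(X.shape[1]):
            pear, _ = stats.pearsonr(X[:, j], y)
            spear, _ = stats.spearmanr(X[:, j], y)
            if max(abs(pear), abs(spear)) > 0.3:      # flag influential inputs
                print(f"input {j:2d}: Pearson {pear:+.2f}, Spearman {spear:+.2f}")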

  13. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  14. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  15. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  16. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    Perkey, D.N.

    1992-01-01

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  17. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

    Full Text Available Integrated Circuit design has entered the era of IP-based SoC (System on Chip), which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design the wrapper scan chains. The design of the test wrapper scan chains is achieved through mutation, crossover and selection operations on the population. Experimental verification was carried out on the international standard benchmark ITC'02. The results show that the algorithm obtains a shorter longest wrapper scan chain than other algorithms.
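
    A hedged sketch of the underlying optimisation (the core scan-chain lengths are made up, not taken from ITC'02): each core is assigned to one of W wrapper chains by decoding a continuous DE vector, and the objective is the length of the longest resulting chain, which limits test time.

        import numpy as np
        from scipy.optimize import differential_evolution

        core_lengths = np.array([120, 80, 200, 60, 150, 90, 40, 170])  # scan cells
        W = 3                                                          # wrapper chains

        def longest_chain(x):
            """Decode a continuous vector into a core->chain assignment and
            return the length of the longest wrapper chain."""
            assign = np.minimum(x.astype(int), W - 1)
            return max(core_lengths[assign == c].sum() for c in range(W))

        result = differential_evolution(longest_chain,
                                        bounds=[(0, W)] * len(core_lengths),
                                        seed=1)
        print("longest wrapper chain:", int(result.fun))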

  18. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    Williams, M.L.; Stallmann, F.W.; Maerker, R.E.; Kam, F.B.K.

    1983-01-01

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed

  19. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  20. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  1. SU-F-T-152: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil Beam Scanning Proton Therapy Treatment Planning System in Heterogeneous Media

    Energy Technology Data Exchange (ETDEWEB)

    Lin, L; Huang, S; Kang, M; Ainsley, C; Simone, C; McDonough, J; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Eclipse AcurosPT 13.7, the first commercial Monte Carlo pencil beam scanning (PBS) proton therapy treatment planning system (TPS), was experimentally validated for an IBA dedicated PBS nozzle in the CIRS 002LFC thoracic phantom. Methods: A two-stage procedure involving the use of TOPAS 1.3 simulations was performed. First, Geant4-based TOPAS simulations in this phantom were experimentally validated for single and multi-spot profiles at several depths for 100, 115, 150, 180, 210 and 225 MeV proton beams, using the combination of a Lynx scintillation detector and a MatriXXPT ionization chamber array. Second, benchmark calculations were performed with both AcurosPT and TOPAS in a phantom identical to the CIRS 002LFC, with the exception that the CIRS bone/mediastinum/lung tissues were replaced with similar tissues that are predefined in AcurosPT (a limitation of this system which necessitates the two-stage procedure). Results: Spot sigmas measured in tissue agreed with TOPAS simulations within 0.2 mm for all six energies, while AcurosPT was consistently found to have larger spot sigmas (<0.7 mm) than TOPAS. Using absolute dose calibration by MatriXXPT, the agreement between profile measurements and TOPAS simulations, and between the calculation benchmarks, exceeds 97% except near the end of range, using 2 mm/2% gamma criteria. Overdosing and underdosing were observed at the low- and high-density sides of tissue interfaces, respectively, and these increased with increasing depth and decreasing energy. Near the mediastinum/lung interface, the magnitude can exceed 5 mm/10%. Furthermore, we observed a >5% quenching effect in the conversion of Lynx measurements to dose. Conclusion: We recommend the use of an ionization chamber array in combination with the scintillation detector to measure absolute dose and relative PBS spot characteristics. We also recommend the use of an independent Monte Carlo calculation benchmark for the commissioning of a commercial TPS. Partially

  2. Investigation of the influence of image reconstruction filter and scan parameters on operation of automatic tube current modulation systems for different CT scanners

    International Nuclear Information System (INIS)

    Sookpeng, Supawitoo; Martin, Colin J.; Gentle, David J.

    2015-01-01

    Variation in the user-selected CT scanning parameters under automatic tube current modulation (ATCM) between hospitals has a substantial influence on the radiation doses and image quality for patients. The aim of this study was to investigate the effect of changing image reconstruction filter and scan parameter settings on tube current, dose and image quality for various CT scanners operating under ATCM. The scan parameters varied were pitch factor, rotation time, collimator configuration, kVp, image thickness and the filter convolution (FC) used for reconstruction. The Toshiba scanner varies the tube current to achieve a set target noise. Changes in the FC setting and image thickness for the first reconstruction were the major factors affecting patient dose. A two-step change in FC from smoother to sharper filters doubles the dose, but is counterbalanced by an improvement in spatial resolution. In contrast, Philips and Siemens scanners maintained tube current values similar to those for a reference image and patient, and the tube current only varied slightly with changes in individual CT scan parameters. The selection of a sharp filter increased the image noise, while use of iDose iterative reconstruction reduced the noise. Since the principles used by CT manufacturers for ATCM vary, it is important that the parameters which affect patient dose and image quality for each scanner are made clear to the operator to aid optimisation. (authors)

  3. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)
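
    When k∞ values from such codes are compared, differences are usually quoted in pcm of reactivity rather than as raw multiplication factors. A small helper of that kind (the example values are invented):

        def reactivity_difference_pcm(k_ref, k_test):
            """rho_diff = (1/k_ref - 1/k_test) * 1e5  [pcm]."""
            return (1.0 / k_ref - 1.0 / k_test) * 1e5

        # e.g. a stochastic vs. a deterministic result (made-up values):
        print(f"{reactivity_difference_pcm(1.25432, 1.25210):+.0f} pcm")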

  4. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    International Nuclear Information System (INIS)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A.; Nowak, M.; Pencer, J.; Novog, D.; Buijs, A.

    2015-01-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)

  5. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2005-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts as well as for current applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for coupling core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present report is the second in a series of four and summarises the results of the first benchmark exercise, which identifies the key parameters and important issues concerning the thermalhydraulic system modelling of the transient, with specified core average axial power distribution and fission power time transient history. The transient addressed is a turbine trip in a boiling water reactor, involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the Peach Bottom 2 reactor (a GE-designed BWR/4) make the present benchmark particularly valuable. (author)

  6. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Hoehne, N.E.; Schaeffer, M.; Cantzler, Jasmin; Hare, William; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher-Ur-Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

  9. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Höhne, Niklas; Schaeffer, Michiel; Cantzler, Jasmin; Hare, Bill; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher Ur Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

  10. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  11. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    Science.gov (United States)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  12. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  13. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to improve, and it proposes establishing a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative evaluation among natural gas transportation companies is becoming an essential management instrument to help with decision-making. (author)

  14. Boiling water reactor turbine trip (TT) benchmark. Volume II: Summary Results of Exercise 1

    International Nuclear Information System (INIS)

    Akdeniz, Bedirhan; Ivanov, Kostadin N.; Olson, Andy M.

    2005-06-01

    The OECD Nuclear Energy Agency (NEA) completed under US Nuclear Regulatory Commission (NRC) sponsorship a PWR main steam line break (MSLB) benchmark against coupled system three-dimensional (3-D) neutron kinetics and thermal-hydraulic codes. Another OECD/NRC coupled-code benchmark was recently completed for a BWR turbine trip (TT) transient and is the object of the present report. Turbine trip transients in a BWR are pressurisation events in which the coupling between core space-dependent neutronic phenomena and system dynamics plays an important role. The data made available from actual experiments carried out at the Peach Bottom 2 plant make the present benchmark particularly valuable. While defining and coordinating the BWR TT benchmark, a systematic approach and multi-level methodology not only allowed for a consistent and comprehensive validation process, but also contributed to the study of key parameters of pressurisation transients. The benchmark consists of three separate exercises, two initial states and five transient scenarios. The BWR TT Benchmark will be published in four volumes as NEA reports. CD-ROMs will also be prepared and will include the four reports and the transient boundary conditions, decay heat values as a function of time, cross-section libraries and supplementary tables and graphs not published in the paper version. BWR TT Benchmark - Volume I: Final Specifications was issued in 2001 [NEA/NSC/DOC(2001)]. The benchmark team [Pennsylvania State University (PSU) in co-operation with Exelon Nuclear and the NEA] has been responsible for coordinating benchmark activities, answering participant questions and assisting them in developing their models, as well as analysing submitted solutions and providing reports summarising the results for each phase. The benchmark team has also been involved in the technical aspects of the benchmark, including sensitivity studies for the different exercises. Volume II summarises the results for Exercise 1 of the

  15. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

    Khan, M.J.H.; Sarker, M.M.; Islam, S.M.A.

    2013-01-01

    Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experimental as well as MCNP values in this study. ► The study demonstrates good agreement with the experimental and MCNP results. ► Thus, this analysis constitutes a validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation study of the SRAC2006 code system based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. This study is achieved through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO₂-1, BAPL-UO₂-2 and BAPL-UO₂-3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature (20 °C) based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases the integral parameters demonstrate good agreement with the experimental and MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices in the fast and thermal energy ranges, respectively, are compared between the above libraries, and they were found to be nearly identical, with insignificant differences. Therefore, this analysis constitutes a validation study of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also be essential for implementing further neutronics calculations

  16. Design of a pre-collimator system for neutronics benchmark experiment

    International Nuclear Information System (INIS)

    Cai Xinggang; Liu Jiantao; Nie Yangbo; Bao Jie; Ruan Xichao; Lu Yanxia

    2013-01-01

    Benchmark experiments are an important means of checking the reliability and accuracy of evaluated nuclear data, and effect/background ratios are important parameters for weighing the quality of experimental data. In order to obtain higher effect/background ratios, a pre-collimator system was designed for the benchmark experiment. This system mainly consists of a pre-collimator and a shadow cone. The MCNP-4C code was used to simulate the background spectra under various conditions; the results showed that the pre-collimator system yields a very marked improvement in the effect/background ratios. (authors)

  17. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  18. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data library JENDL-2. Furthermore, the group constants for ²³⁵U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results with JENDL-2 are considerably improved in comparison with those from ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only ²³⁵U) data. (2) Lattice cell parameters: For ρ28 (the ratio of epithermal to thermal ²³⁸U captures) and C* (the ratio of ²³⁸U captures to ²³⁵U fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The δ28 values (the ratio of ²³⁸U to ²³⁵U fissions) are overestimated, as was also found for the fast reactor benchmarks. The ρ02 values (the ratio of epithermal to thermal ²³²Th captures) calculated with JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description is given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  19. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  20. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  1. Higgs pair production: choosing benchmarks with cluster analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Alexandra; Dall’Osso, Martino; Dorigo, Tommaso [Dipartimento di Fisica e Astronomia and INFN, Sezione di Padova,Via Marzolo 8, I-35131 Padova (Italy); Goertz, Florian [CERN,1211 Geneva 23 (Switzerland); Gottardo, Carlo A. [Physikalisches Institut, Universität Bonn,Nussallee 12, 53115 Bonn (Germany); Tosi, Mia [CERN,1211 Geneva 23 (Switzerland)

    2016-04-20

    New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space. In this document we show a practical implementation of the above strategy for the study of non-resonant production of Higgs boson pairs in the context of extensions of the standard model with anomalous couplings of the Higgs bosons. A non-standard value of those couplings may significantly enhance the Higgs boson pair-production cross section, such that the process could be detectable with the data that the LHC will collect in Run 2.
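
    A schematic version of the strategy in Python (toy histogram shapes stand in for simulated m_hh distributions, and plain Euclidean distance replaces the paper's multi-dimensional test statistic):

        import numpy as np
        from scipy.cluster.vq import kmeans2

        rng = np.random.default_rng(7)
        n_points, n_bins, n_clusters = 200, 30, 6

        # One toy normalised histogram per sampled coupling point.
        shapes = rng.gamma(rng.uniform(2.0, 8.0, (n_points, 1)), 1.0,
                           (n_points, n_bins))
        shapes /= shapes.sum(axis=1, keepdims=True)

        centroids, labels = kmeans2(shapes, n_clusters, minit="++", seed=7)

        for c in range(n_clusters):
            members = np.flatnonzero(labels == c)
            if members.size == 0:
                continue
            # Benchmark = medoid: the member closest to the cluster centroid.
            dists = np.linalg.norm(shapes[members] - centroids[c], axis=1)
            medoid = members[np.argmin(dists)]
            print(f"cluster {c}: {members.size:3d} points, benchmark = #{medoid}")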

  2. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  3. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  4. Parameter scans and accuracy estimates of the dynamical aperture of the CERN LHC

    CERN Document Server

    Giovannozzi, Massimo

    2006-01-01

    Techniques to make use of large distributed computing facilities allow for denser parameter scans of the dynamic aperture, i.e., the domain in phase space where bounded single-particle motion prevails. Moreover, one can also increase the number of 'seeds', each of which represents a possible realization of the multipolar components around the machine. In this paper the dependence of the dynamic aperture on the step size of the grid of initial conditions and on the number of seeds is studied. Estimates of the accuracy of the dynamic aperture are derived and the definition of an improved protocol for numerical simulations is presented.
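
    Treating the dynamic aperture computed for each seed as an independent sample gives the simplest accuracy estimate of the kind discussed above (values invented):

        import numpy as np

        da_per_seed = np.array([10.2, 9.8, 11.0, 10.5, 9.6, 10.9, 10.1])  # beam sigmas
        mean = da_per_seed.mean()
        sem = da_per_seed.std(ddof=1) / np.sqrt(len(da_per_seed))
        print(f"DA = {mean:.2f} +/- {sem:.2f} sigma over {len(da_per_seed)} seeds")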

  5. PEBBLES Simulation of Static Friction and New Static Friction Benchmark

    International Nuclear Information System (INIS)

    Cogliati, Joshua J.; Ougouag, Abderrafi M.

    2010-01-01

    Pebble bed reactors contain large numbers of spherical fuel elements arranged randomly. Determining the motion and location of these fuel elements is required for calculating certain parameters of pebble bed reactor operation. This paper documents the PEBBLES static friction model. This model uses a three-dimensional differential static friction approximation extended from the two-dimensional Cundall and Strack model. A derivation of the rotational transformation of the pebble-to-pebble static friction force is provided. A new implementation of a differential rotation method for the pebble-to-container static friction force has been created, since previously published methods are insufficient for pebble bed reactor geometries. A new analytical static friction benchmark is documented that can be used to verify key static friction simulation parameters. This benchmark is based on determining the exact pebble-to-pebble and pebble-to-container static friction coefficients required to maintain a stable five-sphere pyramid.
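
    The core of a Cundall-Strack-style friction law is a tangential spring capped by the Coulomb limit; the PEBBLES model extends this differentially to three dimensions. A minimal single-contact sketch (parameter values invented):

        import numpy as np

        def tangential_force(slip, k_t, normal_force, mu):
            """slip: accumulated tangential displacement at the contact.
            The spring force -k_t*slip is capped at mu*|F_n| (Coulomb cone)."""
            f = -k_t * np.asarray(slip, float)
            f_max = mu * np.linalg.norm(normal_force)
            f_norm = np.linalg.norm(f)
            if f_norm > f_max:                        # sliding: saturate the spring
                f *= f_max / f_norm
            return f

        print(tangential_force([1e-4, 0.0, 0.0], k_t=1e5,
                               normal_force=[0.0, 0.0, 50.0], mu=0.4))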

  6. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  7. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  10. Effect of CT scanning parameters on volumetric measurements of pulmonary nodules by 3D active contour segmentation: a phantom study

    International Nuclear Information System (INIS)

    Way, Ted W; Chan, H-P; Goodsitt, Mitchell M; Sahiner, Berkman; Hadjiiski, Lubomir M; Zhou Chuan; Chughtai, Aamer

    2008-01-01

    The purpose of this study is to investigate the effects of CT scanning and reconstruction parameters on automated segmentation and volumetric measurements of nodules in CT images. Phantom nodules of known sizes were used so that segmentation accuracy could be quantified in comparison to ground-truth volumes. Spherical nodules having 4.8, 9.5 and 16 mm diameters and 50 and 100 mg cc -1 calcium contents were embedded in lung-tissue-simulating foam which was inserted in the thoracic cavity of a chest section phantom. CT scans of the phantom were acquired with a 16-slice scanner at various tube currents, pitches, fields-of-view and slice thicknesses. Scans were also taken using identical techniques either within the same day or five months apart for study of reproducibility. The phantom nodules were segmented with a three-dimensional active contour (3DAC) model that we previously developed for use on patient nodules. The percentage volume errors relative to the ground-truth volumes were estimated under the various imaging conditions. There was no statistically significant difference in volume error for repeated CT scans or scans taken with techniques where only pitch, field of view, or tube current (mA) were changed. However, the slice thickness significantly (p < 0.05) affected the volume error. Therefore, to evaluate nodule growth, consistent imaging conditions and high resolution should be used for acquisition of the serial CT scans, especially for smaller nodules. Understanding the effects of scanning and reconstruction parameters on volume measurements by 3DAC allows better interpretation of data and assessment of growth. Tracking nodule growth with computerized segmentation methods would reduce inter- and intraobserver variabilities
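
    The volume error reported here is a simple relative difference against the known phantom volume; a minimal sketch follows, in which the segmented volume is a hypothetical value:

```python
import math

def percent_volume_error(segmented_volume_mm3, true_volume_mm3):
    """Percentage volume error of a segmentation relative to ground truth."""
    return 100.0 * (segmented_volume_mm3 - true_volume_mm3) / true_volume_mm3

# A 9.5 mm diameter sphere has a ground-truth volume of pi*d^3/6 ~ 448.9 mm^3.
v_true = math.pi * 9.5**3 / 6.0
print(percent_volume_error(470.0, v_true))  # ~4.7% for this hypothetical result
```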

  11. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  12. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  13. Benchmark of the CASMO-3G/MICROBURN-B codes for Commonwealth Edison boiling water reactors

    International Nuclear Information System (INIS)

    Wheeler, J.K.; Pallotta, A.S.

    1992-01-01

    The Commonwealth Edison Company has performed an extensive benchmark against measured data from three boiling water reactors using the Studsvik lattice physics code CASMO-3G and the Siemens Nuclear Power three-dimensional simulator code MICROBURN-B. The measured data of interest for this benchmark are the hot and cold reactivity, and the core power distributions as measured by the traversing incore probe system and gamma scan data for fuel pins and assemblies. A total of nineteen unit-cycles were evaluated. The database included fuel product lines manufactured by General Electric and Siemens Nuclear Power, with assemblies containing 7 x 7 to 9 x 9 pin configurations, several water rod designs, various enrichments and gadolinia loadings, and axially varying lattice designs throughout the enriched portion of the bundle. The results of the benchmark present evidence that the CASMO-3G/MICROBURN-B code package can adequately model the range of fuel and core types in the benchmark, and the codes are acceptable for performing neutronic analyses of Commonwealth Edison's boiling water reactors

  14. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  15. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  16. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    Schaefer, R. W.; McKnight, R. D.

    2000-01-01

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of k_eff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for k_eff, f28/f25, c28/f25, and β_eff. These limited results demonstrate the importance of studying other integral parameters in addition to k_eff in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files
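
    To make the use of such correction factors concrete, here is a minimal sketch of applying a multiplicative methods/modeling bias, inferred from one detailed-versus-simplified comparison, to a new simplified-model result; the function and all k_eff values are hypothetical, and the paper does not give this exact recipe.

```python
def bias_corrected(simplified_ref, detailed_ref, simplified_new):
    """Infer a multiplicative methods/modeling bias from a reference pair of
    detailed vs. simplified calculations and apply it to a new result from
    the simplified model (illustrative sketch only)."""
    correction = detailed_ref / simplified_ref
    return simplified_new * correction

# Hypothetical k_eff values for a data testing benchmark.
print(bias_corrected(simplified_ref=0.9950, detailed_ref=1.0005,
                     simplified_new=0.9980))  # ~1.0035
```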

  17. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
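
    A minimal sketch of the first-tier screening step described above, retaining as COPCs only the chemicals whose environmental concentration exceeds the corresponding NOAEL-based benchmark; all chemical names and values are hypothetical:

```python
def screen_copcs(concentrations, benchmarks):
    """First-tier screening: retain as contaminants of potential concern
    (COPCs) the chemicals whose environmental concentration exceeds the
    NOAEL-based toxicological benchmark. Illustrative sketch only."""
    return {chem: conc for chem, conc in concentrations.items()
            if conc > benchmarks.get(chem, float("inf"))}

conc = {"cadmium": 0.8, "zinc": 12.0}   # mg/L in water, hypothetical
bench = {"cadmium": 1.0, "zinc": 5.0}   # NOAEL-based benchmarks, hypothetical
print(screen_copcs(conc, bench))        # {'zinc': 12.0}
```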

  18. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    Directory of Open Access Journals (Sweden)

    Domen Novak

    2018-01-01

    This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public.

  19. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  20. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  1. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k_eff and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well

  3. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  4. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    International Nuclear Information System (INIS)

    Badea, Aurelian F.; Cacuci, Dan G.

    2017-01-01

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  5. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. It also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined as one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentium, i486 and DEC Alpha machines. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated against an industry-standard benchmark, namely SPECmark. (author)
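
    The EGS4 Unit defined in the paper normalizes a system's speed to the HP9000/735 reference; a minimal sketch of that normalization, with hypothetical runtimes:

```python
def egs4_units(runtime_seconds, reference_runtime_seconds):
    """Performance in EGS4 Units, i.e. speed relative to the HP9000/735
    (99 MHz) reference defined in the paper: higher is faster. Both
    runtime values here are hypothetical."""
    return reference_runtime_seconds / runtime_seconds

print(egs4_units(runtime_seconds=120.0, reference_runtime_seconds=300.0))  # 2.5
```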

  6. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    Science.gov (United States)

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline. Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  7. Benchmark problem suite for reactor physics study of LWR next generation fuels

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Ikehara, Tadashi; Ito, Takuya; Saji, Etsuro

    2002-01-01

    This paper proposes a benchmark problem suite for studying the physics of next-generation fuels of light water reactors. The target discharge burnup of the next-generation fuel was set to 70 GWd/t considering the increasing trend in discharge burnup of light water reactor fuels. UO2 and MOX fuels are included in the benchmark specifications. The benchmark problem consists of three different geometries: fuel pin cell, PWR fuel assembly and BWR fuel assembly. In the pin cell problem, detailed nuclear characteristics such as burnup dependence of nuclide-wise reactivity were included in the required calculation results to facilitate the study of reactor physics. In the assembly benchmark problems, important parameters for in-core fuel management such as local peaking factors and reactivity coefficients were included in the required results. The benchmark problems provide comprehensive test problems for next-generation light water reactor fuels with extended high burnup. Furthermore, since the pin cell, the PWR assembly and the BWR assembly problems are independent, analysis of the entire benchmark suite is not necessary: e.g., the set of pin cell and PWR fuel assembly problems will be suitable for those in charge of PWR in-core fuel management, and the set of pin cell and BWR fuel assembly problems for those in charge of BWR in-core fuel management. (author)

  8. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Sher, R.; Fiarman, S.

    1976-10-01

    Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT (D2O) and TRX benchmark lattices, have been re-examined and revised. The revisions correct several systematic errors that have been previously ignored or considered insignificant. These systematic errors are discussed in detail. The final corrected values are presented

  9. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257
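
    Benchmark testing of this kind is commonly summarized by calculated-over-experiment (C/E) ratios for k_eff aggregated over many cases; a minimal sketch under that assumption, with hypothetical case values not taken from the paper:

```python
import statistics

def c_over_e(calculated_keff, benchmark_keff):
    """Calculated-over-experiment (C/E) ratio for one criticality benchmark."""
    return calculated_keff / benchmark_keff

# Hypothetical results for a handful of LEU-COMP-THERM-style cases.
cases = [(0.9978, 1.0000), (1.0012, 1.0000), (0.9991, 0.9998)]
ratios = [c_over_e(c, e) for c, e in cases]
print(f"mean C/E = {statistics.mean(ratios):.5f}")
```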

  10. Benchmark analyses for EBR-II shutdown heat removal tests SHRT-17 and SHRT-45R

    Energy Technology Data Exchange (ETDEWEB)

    Mochizuki, Hiroyasu, E-mail: mochizki@u-fukui.ac.jp [Research Institute of Nuclear Engineering, University of Fukui (Japan); Muranaka, Kohmei; Asai, Takayuki [Graduate School of Engineering, University of Fukui (Japan); Rooijen, W.F.G. van, E-mail: rooijen@u-fukui.ac.jp [Research Institute of Nuclear Engineering, University of Fukui (Japan)

    2014-08-15

    Highlights: • The IAEA EBR-II benchmarks SHRT-17 and SHRT-45R are analyzed with a 1D system code. • The calculated result of SHRT-17 corresponds well to the measured results. • For SHRT-45R ERANOS is used for various core parameters and reactivity coefficients. • SHRT-45R peak temperature is overestimated with the ERANOS feedback coefficients. • The peak temperature is well predicted when the feedback coefficient is reduced. - Abstract: Benchmark problems of several experiments in EBR-II, proposed by ANL and coordinated by the IAEA, are analyzed using the plant system code NETFLOW++ and the neutronics code ERANOS. The SHRT-17 test, conducted as a loss-of-flow test, is calculated using only the NETFLOW++ code because it is a purely thermal-hydraulic problem. The measured data were made available to the benchmark participants after the results of the blind benchmark calculations were submitted. Our work shows that major parameters of the plant are predicted with good accuracy. The SHRT-45R test, an unprotected loss-of-flow test, is calculated using the NETFLOW++ code with the aid of delayed neutron data and reactivity coefficients calculated by the ERANOS code. These parameters are used in the NETFLOW++ code to perform a semi-coupled analysis of the neutronics and thermal-hydraulics problem. The measured data are compared with our calculated results. In our work, the peak temperature is underestimated, indicating that the reactivity feedback coefficients are too strong. When the reactivity feedback coefficient for thermal expansion is adjusted, good agreement is obtained in general for the calculated plant parameters, with a few exceptions.

  11. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
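
    The core comparison behind a benchmarking tool such as Cal-Arch is a building's energy use intensity (EUI) ranked against a peer distribution; a minimal sketch, with hypothetical peer data:

```python
import numpy as np

def eui_percentile(annual_kwh, floor_area_m2, peer_euis_kwh_per_m2):
    """Energy use intensity (EUI) of a building and its percentile rank
    within a peer group; peer data here are hypothetical."""
    eui = annual_kwh / floor_area_m2
    rank = np.mean(np.asarray(peer_euis_kwh_per_m2) < eui) * 100.0
    return eui, rank

eui, pct = eui_percentile(450_000, 2_000, [150, 180, 210, 240, 300])
print(f"EUI = {eui:.0f} kWh/m2, higher than {pct:.0f}% of peers")
```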

  12. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  13. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  14. Integral benchmark test of JENDL-4.0 for U-233 systems with ICSBEP handbook

    International Nuclear Information System (INIS)

    Kuwagaki, Kazuki; Nagaya, Yasunobu

    2017-03-01

    The integral benchmark test of JENDL-4.0 for U-233 systems using the continuous-energy Monte Carlo code MVP was conducted. The previous benchmark test was performed only for U-233 thermal solution and fast metallic systems in the ICSBEP handbook. In this study, MVP input files were prepared for uninvestigated benchmark problems in the handbook, including compound thermal systems (mainly lattice systems), and an integral benchmark test was performed. The prediction accuracy of JENDL-4.0 was evaluated for effective multiplication factors (k_eff's) of the U-233 systems. As a result, a trend of underestimation was observed for all the categories of U-233 systems. In the benchmark test of ENDF/B-VII.1 for U-233 systems with the ICSBEP handbook, it is reported that a decreasing trend of calculated k_eff values in association with a parameter ATFF (Above-Thermal Fission Fraction) is observed. The ATFF values were also calculated in this benchmark test of JENDL-4.0 and the same trend as for ENDF/B-VII.1 was observed. A CD-ROM is attached as an appendix. (J.P.N.)

  15. Benchmarking in the public sector

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as the starting point. The regulation of utility companies is discussed, after which...

  16. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  17. Deriving structural forest parameters using airborne laser scanning

    International Nuclear Information System (INIS)

    Morsdorf, F.

    2011-01-01

    Airborne laser scanning is a relatively young and precise technology to directly measure surface elevations. With today's high scanning rates, dense 3-D point clouds of coordinate triplets (x, y, z) can be provided, in which many structural aspects of the vegetation are contained. The challenge now is to transform this data, as far as possible automatically, into manageable information relevant to the user. In this paper we present two such methods: the first automatically extracts the geometry of individual trees, with a recognition rate of over 70% and a systematic underestimation of tree height of only 0.6 metres. The second method derives a pixel map of the canopy density from the point cloud, in which the spatial patterns of vegetation cover are represented. These patterns are relevant for habitat analysis and ecosystem studies. The values derived by this method correlate well with field measurements, giving a measure of certainty (R^2) of 0.8. The greatest advantage of airborne laser scanning is that it provides spatially extensive, direct measurements of vegetation structure which show none of the extrapolation errors of spot measurements. A large challenge remains in integrating these new products into the user's processing chains and workflows, be it in the realm of forestry or in that of ecosystem research. (author)
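
    A minimal sketch of the second method's idea, rasterizing the point cloud into a per-cell canopy density (the fraction of returns above a height cutoff); the flat ground model, cell size and cutoff are simplifying assumptions of ours:

```python
import numpy as np

def canopy_density_map(points, ground_height, cell_size=1.0, cutoff=2.0):
    """Rasterize an airborne laser scanning point cloud (N x 3 array of
    x, y, z) into a canopy-density map: per grid cell, the fraction of
    returns more than `cutoff` metres above ground. Simplified sketch
    with a flat ground model."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell_size).astype(int)
    shape = ij.max(axis=0) + 1
    total = np.zeros(shape)
    canopy = np.zeros(shape)
    above = points[:, 2] > ground_height + cutoff
    for (i, j), is_canopy in zip(ij, above):
        total[i, j] += 1
        canopy[i, j] += is_canopy
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(total > 0, canopy / total, np.nan)

pts = np.random.default_rng(0).uniform([0, 0, 0], [10, 10, 20], (1000, 3))
print(canopy_density_map(pts, ground_height=0.0).shape)
```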

  18. Certain methods for improving the accuracy of lattice parameter determination in sloping scanning with the use of single-crystal spectrometer

    International Nuclear Information System (INIS)

    Pinegin, V.I.; Koz'ma, A.A.; Fuks, M.Ya.

    1985-01-01

    Errors are analyzed for sloping scanning, which is the basis of X-ray tensometry and requires highly accurate determination of lattice parameters, including the stress-free state parameter. A technique is suggested for eliminating systematic alignment errors due to deviation of the primary beam and eccentricity of the irradiated sample surface relative to the main axis of the goniometer. The technique allows a reduction of the relative error in interplanar distance determination from 10^-4 to 10^-5 using the DRON-20 diffractometer. Analytical expressions are obtained for calculating the corrections. The technique can be used in sloping scanning of imperfect single crystals and polycrystals. An experimental test of the technique has been carried out with epitaxial single-crystal nickel films 500 to 2000 Å thick with the use of copper radiation
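
    The sensitivity driving these corrections can be read off from the standard differential of the Bragg condition (a textbook relation, not taken from the paper itself):

```latex
% Differentiating the Bragg condition at fixed wavelength:
\[
  n\lambda = 2d\sin\theta
  \quad\Longrightarrow\quad
  \frac{\Delta d}{d} = -\cot\theta\,\Delta\theta ,
\]
% so at a high diffraction angle, e.g. \theta = 80^\circ
% (\cot\theta \approx 0.18), an angular error of
% \Delta\theta = 10^{-4}\,\mathrm{rad} gives
% |\Delta d/d| \approx 1.8\times10^{-5}; reaching the 10^{-5} regime
% quoted above therefore demands exactly this kind of control over
% alignment errors.
```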

  19. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  20. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  1. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting suitable computers for HEP application systems. Industry-standard benchmark programs cannot be used for this kind of particular selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we find that the results from these two suites are not consistent with each other, and that the result from the industry benchmark agrees with neither. In addition, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with those from the two HEP benchmark suites, and find that the EGS4 result is not consistent with either of them. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. An EGS4 benchmark suite should also be developed for users of applications in fields such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  2. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  3. Lecithin-coated gold nanoflowers (GNFs) for CT scan imaging applications and biochemical parameters; in vitro and in vivo studies.

    Science.gov (United States)

    Aziz, Farooq; Bano, Khizra; Siddique, Ahmad Hassan; Bajwa, Sadia Zafar; Nazir, Aalia; Munawar, Anam; Shaheen, Ayesha; Saeed, Madiha; Afzal, Muhammad; Iqbal, M Zubair; Wu, Aiguo; Khan, Waheed S

    2018-01-09

    We report a novel strategy for the fabrication of lecithin-coated gold nanoflowers (GNFs) via a single-step design for CT imaging applications. Field-emission electron microscopy confirmed the flower-like morphology of the as-synthesized nanostructures. Furthermore, they show an absorption peak in the near-infrared (NIR) region at λmax = 690 nm. Different concentrations of GNFs were tested as a contrast agent in CT scans at a tube voltage of 135 kV and a tube current of 350 mA. These results were compared with the same amount of iodine at the same CT scan parameters. The results of the in vitro CT study show that GNFs have good contrast-enhancement properties, whereas the in vivo CT study in rabbits shows that GNFs enhance the CT image more clearly at 135 kV than iodine does. Cytotoxicity was studied; the blood profile shows a minor increase in white blood cells and haemoglobin, and a decrease in red blood cells and platelets.

  4. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  5. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from an analysis of performance that highlights the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  6. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H → AA → 4τ in vector boson fusion with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rottlaender, Iris

    2008-08-15

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H1 → A1A1 → 4τ → 4μ + 8ν decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment, which is of crucial relevance for vector boson fusion searches, is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)
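
    A benchmark plane of this kind is, in essence, a two-dimensional grid scan over Higgs-sector parameters in which each point is kept only if its spectrum is acceptable; a minimal sketch in which the spectrum calculator, parameter names and acceptance cut are hypothetical stand-ins (a real scan would call a spectrum code such as NMHDECAY and apply experimental constraints):

```python
import itertools

def scan_benchmark_plane(grid_x, grid_y, spectrum):
    """Walk a two-dimensional benchmark plane of a Higgs-sector parameter
    space, keeping points whose spectrum passes a basic check. `spectrum`
    stands in for a real spectrum calculator; all names and cuts here are
    hypothetical."""
    accepted = []
    for lam, kappa in itertools.product(grid_x, grid_y):
        masses = spectrum(lam, kappa)
        if masses and min(masses) > 0:  # e.g. no tachyonic states
            accepted.append((lam, kappa, masses))
    return accepted

# Toy stand-in for a spectrum code: returns fake Higgs masses in GeV.
fake = lambda lam, kappa: [90 + 100 * lam, 10 + 50 * kappa]
pts = scan_benchmark_plane([0.1 * i for i in range(1, 6)],
                           [0.1 * j for j in range(1, 6)], fake)
print(len(pts), "accepted points")
```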

  7. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  8. Breast dose reduction for chest CT by modifying the scanning parameters based on the pre-scan size-specific dose estimate (SSDE)

    Energy Technology Data Exchange (ETDEWEB)

    Kidoh, Masafumi; Utsunomiya, Daisuke; Oda, Seitaro; Nakaura, Takeshi; Yuki, Hideaki; Hirata, Kenichiro; Namimoto, Tomohiro; Sakabe, Daisuke; Hatemura, Masahiro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Honjo, Kumamoto (Japan); Funama, Yoshinori [Kumamoto University, Department of Medical Physics, Faculty of Life Sciences, Honjo, Kumamoto (Japan)

    2017-06-15

    To investigate the usefulness of modifying scanning parameters based on the size-specific dose estimate (SSDE) for breast-dose reduction in chest CT. We scanned 26 women with a fixed volume CT dose index (CTDIvol) protocol (15 mGy) and another 26 with a fixed SSDE protocol (15 mGy) (protocols 1 and 2, respectively). In protocol 2, tube current was calculated based on the patient habitus obtained on scout images. We compared the mean breast dose and the inter-patient breast dose variability and performed linear regression analysis of the breast dose and the body mass index (BMI) for the two protocols. The mean breast dose was about 35 % lower under protocol 2 than protocol 1 (10.9 mGy vs. 16.8 mGy, p < 0.01). The inter-patient breast dose variability was significantly lower under protocol 2 than 1 (1.2 mGy vs. 2.5 mGy, p < 0.01). We observed a moderate negative correlation between the breast dose and the BMI under protocol 1 (r = 0.43, p < 0.01); there was no significant correlation (r = 0.06, p = 0.35) under protocol 2. The SSDE-based protocol achieved a reduction in breast dose and in inter-patient breast dose variability. (orig.)
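
    Using the AAPM Report 204 relation SSDE = f_size x CTDIvol together with the roughly linear scaling of CTDIvol with tube current at fixed kV and pitch, the scout-based tube current selection can be sketched as follows; the reference values are hypothetical and the paper's exact recipe may differ:

```python
def tube_current_for_target_ssde(target_ssde_mgy, f_size, reference_ma,
                                 reference_ctdivol_mgy):
    """Tube current needed to reach a target SSDE, assuming the AAPM
    Report 204 relation SSDE = f_size * CTDIvol and linear scaling of
    CTDIvol with mA at fixed kV/pitch. Reference values are hypothetical."""
    target_ctdivol = target_ssde_mgy / f_size
    return reference_ma * target_ctdivol / reference_ctdivol_mgy

# A smaller patient (larger f_size) needs a lower CTDIvol for the same SSDE.
print(tube_current_for_target_ssde(15.0, f_size=1.45, reference_ma=200,
                                   reference_ctdivol_mgy=15.0))  # ~138 mA
```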

  9. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  10. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections

  11. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  12. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expressed need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a collaborative benchmarking model for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (a point also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  13. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology proposed is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI
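
    The thesis code is not included in the record; as a hedged illustration of the Random Forest step described above, the sketch below ranks invented CBECS-style predictors of EUI by importance (the column names and data are stand-ins, not the actual survey variables).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Invented stand-ins for CBECS variables; a real analysis would load the survey data.
df = pd.DataFrame({
    "floor_area":   rng.uniform(1e3, 1e5, n),
    "num_workers":  rng.integers(5, 500, n),
    "num_pcs":      rng.integers(5, 600, n),
    "weekly_hours": rng.uniform(40, 168, n),
})
# Synthetic EUI driven mostly by workers and PCs, plus noise.
eui = 20 + 0.05 * df["num_workers"] + 0.03 * df["num_pcs"] + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(df, eui)
for name, imp in sorted(zip(df.columns, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:>12s}: {imp:.3f}")
```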

  14. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented that are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  15. Analysis of the OECD main steam line break benchmark using ANC-K/MIDAC code

    International Nuclear Information System (INIS)

    Aoki, Shigeaki; Tahara, Yoshihisa; Suemura, Takayuki; Ogawa, Junto

    2004-01-01

    A three-dimensional (3D) neutronics and thermal-hydraulics (T/H) coupled code, ANC-K/MIDAC, has been developed. It is the combination of the 3D nodal kinetics code ANC-K and the 3D drift-flux thermal-hydraulics code MIDAC. In order to verify the adequacy of this code, we have performed several international benchmark problems. In this paper, we show the calculation results for the "OECD Main Steam Line Break Benchmark (MSLB benchmark)", which poses a typical local power peaking problem. We calculated the return-to-power scenario of the Phase II problem. The comparison of the results shows very good agreement on important core parameters between ANC-K/MIDAC and the other participants' codes. (author)

  16. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. The aim was to analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery) and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed.

  17. Benchmarking of the municipalities' case processing

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  18. Optimization of input parameters of supra-threshold stochastic resonance image processing algorithm for the detection of abdomino-pelvic tumors on PET/CT scan

    International Nuclear Information System (INIS)

    Pandey, Anil Kumar; Saroha, Kartik; Patel, C.D.; Bal, C.S.; Kumar, Rakesh

    2016-01-01

    Administration of diuretics increases urine output to clear radioactive urine from the kidneys and bladder. Hence, a post-diuretic pelvic PET/CT scan enhances the probability of detection of abdomino-pelvic tumors. However, it causes discomfort to patients and also has some side effects. Application of a supra-threshold stochastic resonance (SSR) image processing algorithm on a pre-diuretic PET/CT scan may also increase the probability of detection of these tumors. The amount of noise and the threshold are two variable parameters that affect the final image quality. This study was conducted to investigate the effect of these two parameters on the detection of abdomino-pelvic tumors.
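
    No implementation accompanies the record; the sketch below shows the generic supra-threshold stochastic resonance idea (not necessarily the authors' exact algorithm): several independently noise-perturbed copies of an image are thresholded and averaged, with the noise level and the threshold as the two tunable parameters the study refers to.

```python
import numpy as np

def ssr_enhance(image, threshold, noise_sigma, n_copies=32, rng=None):
    """Suprathreshold stochastic resonance: average of thresholded noisy copies."""
    rng = rng if rng is not None else np.random.default_rng(0)
    acc = np.zeros_like(image, dtype=float)
    for _ in range(n_copies):
        noisy = image + rng.normal(0.0, noise_sigma, image.shape)
        acc += (noisy > threshold).astype(float)
    return acc / n_copies   # fraction of copies exceeding the threshold

# Toy example: a faint "lesion" on a noisy background (values are invented).
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (64, 64))
img[28:36, 28:36] += 0.08   # weak signal hard to see directly

out = ssr_enhance(img, threshold=0.25, noise_sigma=0.05, rng=rng)
print("background:", out[:10, :10].mean(), " lesion:", out[28:36, 28:36].mean())
```

    Sweeping `noise_sigma` and `threshold` over a grid and scoring the lesion-to-background contrast would mirror the parameter investigation the study describes.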

  19. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    the visible human CT scans from the National Library of Medicine, are essential for producing realistic images. Sets of test cases with systematic and random errors in selected setup parameters and anatomic volumes are suitable for use as standard benchmarks by the radiotherapy community. In addition to serving as an aid to research and development, benchmark images may also be useful for evaluation of commercial systems and as part of a quality assurance program for clinical systems. Test cases and software are available upon request

  20. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Harada, Kuniaki; Shirase, Ryuji; Kumagai, Akiko; Ogasawara, Masashi

    2005-01-01

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), matrix, field of view (FOV), and coil selection in terms of CNR (contrast-to-noise ratio) and SNR (signal-to-noise ratio) by comparing axial images and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time=12.08 min, TR=5000 msec, TE=300 msec, number of excitations (NEX)=3, FOV=12 cm, matrix=256 x 256, slice thickness=0.5 mm/0.0 sp, echo train=64, bandwidth=±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as the scala tympani and scala vestibuli), making it possible to determine inner ear anomalies and the patency of cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images of the labyrinth, high CNR, SNR, and spatial resolution are significant factors at the present time. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)

  1. Benchmark simulations of ICRF antenna coupling

    International Nuclear Information System (INIS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-01-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular, we study the validity of approximating a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved.

  2. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base.

  3. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  4. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  5. Effect of cataract surgery on retinal nerve fiber layer thickness parameters using scanning laser polarimetry (GDxVCC).

    Science.gov (United States)

    Dada, Tanuj; Behera, Geeta; Agarwal, Anand; Kumar, Sanjeev; Sihota, Ramanjit; Panda, Anita

    2010-01-01

    To study the effect of cataract extraction on retinal nerve fiber layer (RNFL) thickness assessment by scanning laser polarimetry (SLP) with variable corneal compensation (GDx VCC), at the glaucoma service of a tertiary care center in North India. Thirty-two eyes of 32 subjects were enrolled in the study. The subjects underwent RNFL analysis by SLP (GDx VCC) before phacoemulsification cataract extraction with intraocular lens (IOL) implantation (Acrysof SA 60 AT) and again four weeks following cataract surgery. The RNFL thickness parameters evaluated both before and after surgery included temporal, superior, nasal, inferior, temporal (TSNIT) average, superior average, inferior average, and nerve fiber index (NFI). The mean age of subjects was 57.6 ± 11.7 years (18 males, 14 females). Mean TSNIT average thickness (μm) pre- and post-cataract surgery was 49.2 ± 14.1 and 56.5 ± 7.6 (P = 0.001). There was a statistically significant increase in RNFL thickness parameters (TSNIT average, superior average, and inferior average) and a decrease in NFI post-cataract surgery as compared to the baseline values. Mean NFI pre- and post-cataract surgery was 41.3 ± 15.3 and 21.6 ± 11.8 (P = 0.001). Measurement of RNFL thickness parameters by scanning laser polarimetry is significantly altered following cataract surgery. After cataract surgery, a new baseline needs to be established for assessing the longitudinal follow-up of a glaucoma patient. The presence of cataract may lead to an underestimation of the RNFL thickness, and this should be taken into account when analyzing progression in a glaucoma patient.

  6. Benchmarking Simulation of Long Term Station Blackout Events

    International Nuclear Information System (INIS)

    Kim, Sung Kyum; Lee, John C.; Fynan, Douglas A.; Lee, John C.

    2013-01-01

    The importance of passive cooling systems has been highlighted since the SBO events. The turbine-driven auxiliary feedwater (TD-AFW) system is the only passive cooling system for steam generators (SGs) in current PWRs. During SBO events, all alternating current (AC) and direct current (DC) power is lost, and the water levels of the steam generators become high. In this case, the turbine blades could be degraded and could no longer cool down the SGs. To prevent this kind of degradation, an improved TD-AFW system should be installed in current PWRs, especially OPR 1000 plants. A long-term station blackout (LTSBO) scenario based on the improved TD-AFW system has been benchmarked as a reference input file. The following task is a safety analysis to find important parameters that cause the peak cladding temperature (PCT) to vary. This task has been initiated with the benchmarked input deck based on the State-of-the-Art Reactor Consequence Analyses (SOARCA) report. The point of the improved TD-AFW is to control the water level of the SG by using an auxiliary battery charged by a generator connected to the auxiliary turbine. However, this battery could also be disconnected from the generator. To analyze the uncertainties of the failure of the auxiliary battery, a simulation of the time-dependent failure of the TD-AFW has been performed. In addition to the cases simulated in the paper, some valves (e.g., the pressurizer safety valve) that remain available during SBO events could be important parameters for assessing uncertainties in the estimated PCTs. The results for these parameters will be included in a future study, in addition to the results for the leakage of the RCP seals. After the simulation of several transient cases, the alternating conditional expectation (ACE) algorithm will be used to derive functional relationships between the PCT and several system parameters.

  7. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to ... It is also a contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  8. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. Two perceptions of benchmarking will be presented: public benchmarking and best-practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine the effects, possibilities and challenges that follow in the wake of using this kind of benchmarking. This paper addresses...

  9. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB Fusion-io parallel NAND Flash disk array. The Fusion system specs are as follows

  10. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  11. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  12. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
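
    The regression-plus-percentile procedure lends itself to a compact sketch. The following hedged Python example (factor names and data invented, not taken from the paper) fits a linear model of EUI on one explanatory factor, removes the estimated effect, and reads a benchmark score off the empirical distribution of normalized EUIs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
hours = rng.uniform(60, 168, n)            # weekly operating hours (assumed factor)
eui = 150 + 1.2 * hours + rng.normal(0, 25, n)

# Fit EUI = a + b*hours by least squares, then strip out the hours effect.
X = np.column_stack([np.ones(n), hours])
coef, *_ = np.linalg.lstsq(X, eui, rcond=None)
normalized = eui - coef[1] * (hours - hours.mean())

# Benchmark table: percentile rank of an observed (normalized) EUI.
def percentile_rank(value, sample):
    return 100.0 * np.mean(sample <= value)

print("score of a 260 kWh/m2 building:", percentile_rank(260.0, normalized))
```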

  13. VALIDATION OF FULL CORE GEOMETRY MODEL OF THE NODAL3 CODE IN THE PWR TRANSIENT BENCHMARK PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2015-10-01

    Full Text Available ABSTRACT The coupled neutronic and thermal-hydraulic (T/H) code, NODAL3, has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code has not yet been validated against the transient benchmark cases of a control rod assembly (CR) ejection at the core periphery using a full core geometry model, the C1 and C2 cases. This work validates the accuracy of the NODAL3 code for ejection of one CR or of an unsymmetrical group of CRs. The calculations with the NODAL3 code were carried out using the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with reference results from the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results of the NODAL3 code show no systematic differences, which means that the neutronic and T/H modules adopted in the code can be considered correct. Therefore, all calculation results obtained with the NODAL3 code are in very good agreement with the reference results. Keywords: nodal method, coupled neutronic and thermal-hydraulic code, PWR, transient case, control rod ejection.

  14. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
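
    SKaMPI itself is written in C and covers a large fraction of MPI; purely as an illustrative analogue of one core measurement (point-to-point ping-pong time), here is a minimal mpi4py sketch, to be run with something like `mpirun -np 2 python pingpong.py`:

```python
# Minimal MPI ping-pong timing sketch (an analogue of one SKaMPI-style
# measurement), not part of SKaMPI itself.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps, nbytes = 1000, 1 << 20
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    rtt = (t1 - t0) / reps   # mean round-trip time
    print(f"round trip: {rtt*1e6:.1f} us, bandwidth ~ {2*nbytes/rtt/1e9:.2f} GB/s")
```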

  15. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs of the original series. We show that the widely used variants of the Denton (1971) method and the growth

  16. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  17. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSOs) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRAs), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost ... weight restrictions and a correction method for opening balances.

  18. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems principally investigate the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  19. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed to facilitate schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to draw increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after-school activities into account. Benchmarks can be used to produce green accounts and serve as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  20. MIMIC: An Innovative Methodology for Determining Mobile Laser Scanning System Point Density

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2014-08-01

    Full Text Available Understanding how various Mobile Mapping System (MMS laser hardware configurations and operating parameters exercise different influence on point density is important for assessing system performance, which in turn facilitates system design and MMS benchmarking. Point density also influences data processing, as objects that can be recognised using automated algorithms generally require a minimum point density. Although obtaining the necessary point density impacts on hardware costs, survey time and data storage requirements, a method for accurately and rapidly assessing MMS performance is lacking for generic MMSs. We have developed a method for quantifying point clouds collected by an MMS with respect to known objects at specified distances using 3D surface normals, 2D geometric formulae and line drawing algorithms. These algorithms were combined in a system called the Mobile Mapping Point Density Calculator (MIMIC and were validated using point clouds captured by both a single scanner and a dual scanner MMS. Results from MIMIC were promising: when considering the number of scan profiles striking the target, the average error equated to less than 1 point per scan profile. These tests highlight that MIMIC is capable of accurately calculating point density for both single and dual scanner MMSs.
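
    MIMIC's algorithms are only summarized in the record; the sketch below illustrates the simplest ingredients of such a point-density calculation (how many scan profiles strike a planar target, and how many points each profile contributes) under assumed scanner and trajectory parameters. The function names and numbers are invented for the example.

```python
import math

def profiles_on_target(target_width_m, vehicle_speed_ms, scan_rate_hz):
    """Number of scan profiles that strike a target as the vehicle drives past."""
    time_in_view = target_width_m / vehicle_speed_ms
    return time_in_view * scan_rate_hz

def points_per_profile(target_height_m, scan_range_m, pulse_rate_hz, scan_rate_hz):
    """Points per profile on a vertical target: angular extent / angular step."""
    angular_extent = 2.0 * math.atan(target_height_m / (2.0 * scan_range_m))
    angular_step = 2.0 * math.pi * scan_rate_hz / pulse_rate_hz  # rad per pulse
    return angular_extent / angular_step

profiles = profiles_on_target(2.0, 13.9, 100.0)        # 2 m target passed at 50 km/h
pts = points_per_profile(2.0, 10.0, 300_000.0, 100.0)  # 300 kHz scanner at 10 m range
print(f"profiles: {profiles:.0f}, points/profile: {pts:.0f}, total ~ {profiles*pts:.0f}")
```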

  1. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  2. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions under which the market mechanism performs within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  4. Interplay effects in proton scanning for lung: a 4D Monte Carlo study assessing the impact of tumor and beam delivery parameters

    International Nuclear Information System (INIS)

    Dowdell, S; Grassberger, C; Sharp, G C; Paganetti, H

    2013-01-01

    Relative motion between a tumor and a scanning proton beam results in a degradation of the dose distribution (interplay effect). This study investigates the relationship between beam scanning parameters and the interplay effect, with the goal of finding parameters that minimize interplay. 4D Monte Carlo simulations of pencil beam scanning proton therapy treatments were performed using the 4DCT geometry of five lung cancer patients of varying tumor size (50.4–167.1 cc) and motion amplitude (2.9–30.1 mm). Treatments were planned assuming delivery in 35 × 2.5 Gy(RBE) fractions. The spot size, time to change the beam energy (τes), time required for magnet settling (τss), initial breathing phase, spot spacing, scanning direction, scanning speed, beam current and patient breathing period were varied for each of the five patients. Simulations were performed for a single fraction and an approximation of conventional fractionation. For the patients considered, the interplay effect could not be predicted using the superior–inferior motion amplitude alone. Larger spot sizes (σ ∼ 9–16 mm) were less susceptible to interplay, giving an equivalent uniform dose (EUD) of 99.0 ± 4.4% (1 standard deviation) in a single fraction compared to 86.1 ± 13.1% for smaller spots (σ ∼ 2–4 mm). The smaller spot sizes gave EUD values as low as 65.3% of the prescription dose in a single fraction. Reducing the spot spacing improved the target dose homogeneity. The initial breathing phase can have a significant effect on the interplay, particularly for shorter delivery times. No clear benefit was evident when scanning either parallel or perpendicular to the predominant axis of motion. Longer breathing periods decreased the EUD. In general, longer delivery times led to lower interplay effects. Conventional fractionation showed significant improvement in terms of interplay, giving an EUD of at least 84.7% and 100.0% of the prescription dose for the small and larger spot sizes, respectively.

  5. Interplay effects in proton scanning for lung: a 4D Monte Carlo study assessing the impact of tumor and beam delivery parameters.

    Science.gov (United States)

    Dowdell, S; Grassberger, C; Sharp, G C; Paganetti, H

    2013-06-21

    Relative motion between a tumor and a scanning proton beam results in a degradation of the dose distribution (interplay effect). This study investigates the relationship between beam scanning parameters and the interplay effect, with the goal of finding parameters that minimize interplay. 4D Monte Carlo simulations of pencil beam scanning proton therapy treatments were performed using the 4DCT geometry of five lung cancer patients of varying tumor size (50.4-167.1 cc) and motion amplitude (2.9-30.1 mm). Treatments were planned assuming delivery in 35 × 2.5 Gy(RBE) fractions. The spot size, time to change the beam energy (τes), time required for magnet settling (τss), initial breathing phase, spot spacing, scanning direction, scanning speed, beam current and patient breathing period were varied for each of the five patients. Simulations were performed for a single fraction and an approximation of conventional fractionation. For the patients considered, the interplay effect could not be predicted using the superior-inferior motion amplitude alone. Larger spot sizes (σ ~ 9-16 mm) were less susceptible to interplay, giving an equivalent uniform dose (EUD) of 99.0 ± 4.4% (1 standard deviation) in a single fraction compared to 86.1 ± 13.1% for smaller spots (σ ~ 2-4 mm). The smaller spot sizes gave EUD values as low as 65.3% of the prescription dose in a single fraction. Reducing the spot spacing improved the target dose homogeneity. The initial breathing phase can have a significant effect on the interplay, particularly for shorter delivery times. No clear benefit was evident when scanning either parallel or perpendicular to the predominant axis of motion. Longer breathing periods decreased the EUD. In general, longer delivery times led to lower interplay effects. Conventional fractionation showed significant improvement in terms of interplay, giving an EUD of at least 84.7% and 100.0% of the prescription dose for the small and larger spot sizes, respectively.
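
    Both records report results as equivalent uniform dose (EUD); for orientation, the generalized EUD in its usual power-law form can be computed as below. The dose values and tissue parameter `a` are invented for the example (negative `a` is typical for targets, where cold spots should be penalized).

```python
import numpy as np

def generalized_eud(dose_voxels, a):
    """Generalized EUD: (mean(d_i**a))**(1/a)."""
    d = np.asarray(dose_voxels, dtype=float)
    return np.mean(d**a) ** (1.0 / a)

# Toy target dose distribution (fractions of prescription) with a cold spot.
doses = np.concatenate([np.full(950, 1.00), np.full(50, 0.80)])
print(generalized_eud(doses, a=-10))   # below the mean dose: cold spot penalized
```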

  6. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  7. Development of a set of benchmark problems to verify numerical methods for solving burnup equations

    International Nuclear Information System (INIS)

    Lago, Daniel; Rahnema, Farzad

    2017-01-01

    Highlights: • Description of transmutation chain benchmark problems. • Problems for validating numerical methods for solving burnup equations. • Analytical solutions of the burnup equations. • Numerical solutions of the burnup equations. - Abstract: A comprehensive set of transmutation chain benchmark problems for numerically validating methods of solving burnup equations was created. These benchmark problems were designed to challenge both traditional and modern numerical methods used to solve the complex set of ordinary differential equations that track the change in nuclide concentrations over time due to nuclear phenomena. Given that most burnup solvers are developed for coupling with an established transport solution method, these problems provide a useful resource for testing and validating a burnup equation solver before it is coupled for use in a lattice or core depletion code. All the relevant parameters for each benchmark problem are described. Results are also provided in the form of reference solutions generated with the Mathematica tool, as well as additional numerical results from MATLAB.
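
    As a flavor of the kind of reference solution mentioned (this does not reproduce the paper's actual problems), a linear transmutation chain can be depleted by exponentiating the burnup matrix; the chain and its decay constants below are invented for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Invented 3-nuclide decay chain A -> B -> C with decay constants [1/s].
lam = np.array([1e-3, 5e-4, 0.0])
A = np.array([
    [-lam[0],     0.0,     0.0],
    [ lam[0], -lam[1],     0.0],
    [    0.0,  lam[1], -lam[2]],
])

n0 = np.array([1.0e20, 0.0, 0.0])   # initial concentrations [atoms]
t = 3600.0                          # deplete for one hour
n_t = expm(A * t) @ n0              # exact solution of dn/dt = A n
print(n_t)
```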

  8. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  9. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  10. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  11. Modelling the benchmark spot curve for the Serbian

    Directory of Open Access Journals (Sweden)

    Drenovak Mikica

    2010-01-01

    Full Text Available The objective of this paper is to estimate Serbian benchmark spot curves using the Svensson parametric model. The main challenges that we tackle are: sparse data, different currency denominations of short and longer term maturities, and infrequent transactions in the short-term market segment vs daily traded medium and long-term market segment. We find that the model is flexible enough to account for most of the data variability. The model parameters are interpreted in economic terms.
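
    The Svensson functional form itself is standard; the following sketch (with invented parameter values, not an estimated Serbian curve) shows how the spot rate is computed from the six parameters:

```python
import numpy as np

def svensson_spot(t, b0, b1, b2, b3, tau1, tau2):
    """Svensson (1994) spot rate at maturity t (in years)."""
    x1, x2 = t / tau1, t / tau2
    f1 = (1 - np.exp(-x1)) / x1          # level-to-slope loading
    f2 = f1 - np.exp(-x1)                # first hump
    f3 = (1 - np.exp(-x2)) / x2 - np.exp(-x2)  # second hump
    return b0 + b1 * f1 + b2 * f2 + b3 * f3

# Parameter values are purely illustrative.
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
print(svensson_spot(maturities, b0=0.06, b1=-0.02, b2=0.01, b3=0.005,
                    tau1=1.5, tau2=6.0))
```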

  12. Reduction of MRI acoustic noise achieved by manipulation of scan parameters – A study using veterinary MR sequences

    International Nuclear Information System (INIS)

    Baker, Martin A.

    2013-01-01

    Sound pressure levels were measured within an MR scan room for a range of sequences employed in veterinary brain scanning, using a test phantom in an extremity coil. Variation of TR and TE, and use of a quieter gradient mode ('whisper' mode), were evaluated to determine their effect on sound pressure levels (SPLs). Use of a human head coil and a human brain sequence was also evaluated. Significant differences in SPL were achieved for T2, T1, T2* gradient echo and VIBE sequences by varying TR or TE, or by selecting the 'whisper' gradient mode. An appreciable reduction was achieved for the FLAIR sequence. Noise levels were not affected when a head coil was used in place of an extremity coil. Due to the sequence parameters employed, veterinary patients and anaesthetists may be exposed to higher sound levels than those experienced in human MR examinations. The techniques described are particularly valuable in small animal MR scanning, where ear protection is not routinely provided for the patient.

  13. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  14. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithms and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  15. Benchmarking of hospital information systems: a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information systems' and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  16. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  17. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  18. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of pushing forward the methods of quantification of physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)

  19. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
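
    To make the debate concrete, the following sketch (Python, with purely hypothetical per-query times rather than actual TPC-D results) shows how the two candidate metrics diverge when one query in the stream is much slower than the rest: the geometric mean damps the outlier, while the arithmetic mean tracks total elapsed time.

        import math

        # Hypothetical per-query elapsed times (seconds) for a single-stream run;
        # illustrative only, not actual TPC-D results.
        times = [2.0, 3.5, 1.2, 40.0, 0.8]

        arithmetic = sum(times) / len(times)
        geometric = math.exp(sum(math.log(t) for t in times) / len(times))

        # The geometric mean damps the influence of the one slow query (40 s);
        # the arithmetic mean tracks total elapsed time more directly.
        print(f"arithmetic mean: {arithmetic:.2f} s")  # 9.50 s
        print(f"geometric mean:  {geometric:.2f} s")   # ~3.06 s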

  20. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  1. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    …compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortium partners. The project has been funded by the Danish Agency for Trade and Industry…

  2. Parallel Ada benchmarks for the SVMS

    Science.gov (United States)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  3. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross-section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute, Russia, with the code BIPR8/ATHLET; VTT Energy, Finland, with HEXTRAN/SMABRE; NRI Rez, Czech Republic, with DYN3D/ATHLET; KFKI Budapest, Hungary, with KIKO3D/ATHLET; and from FZR, Germany, with the code DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the revealed differences between the solutions. (Authors)

  4. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  5. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  6. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    …in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which…

  7. Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software.

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C

    2017-11-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

  8. A ChIP-Seq benchmark shows that sequence conservation mainly improves detection of strong transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Tony Håndstad

    Full Text Available BACKGROUND: Transcription factors are important controllers of gene expression and mapping transcription factor binding sites (TFBS is key to inferring transcription factor regulatory networks. Several methods for predicting TFBS exist, but there are no standard genome-wide datasets on which to assess the performance of these prediction methods. Also, it is believed that information about sequence conservation across different genomes can generally improve accuracy of motif-based predictors, but it is not clear under what circumstances use of conservation is most beneficial. RESULTS: Here we use published ChIP-seq data and an improved peak detection method to create comprehensive benchmark datasets for prediction methods which use known descriptors or binding motifs to detect TFBS in genomic sequences. We use this benchmark to assess the performance of five different prediction methods and find that the methods that use information about sequence conservation generally perform better than simpler motif-scanning methods. The difference is greater on high-affinity peaks and when using short and information-poor motifs. However, if the motifs are specific and information-rich, we find that simple motif-scanning methods can perform better than conservation-based methods. CONCLUSIONS: Our benchmark provides a comprehensive test that can be used to rank the relative performance of transcription factor binding site prediction methods. Moreover, our results show that, contrary to previous reports, sequence conservation is better suited for predicting strong than weak transcription factor binding sites.
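
    As a rough illustration of the simple motif-scanning baseline evaluated in the benchmark, the sketch below scores every window of a DNA sequence against a toy position weight matrix (PWM) with log-odds scoring against a uniform background; the motif, sequence and scoring details are hypothetical and not taken from the study.

        import math

        # Toy position weight matrix: P(base | motif position); columns sum to 1.
        pwm = {
            "A": [0.80, 0.10, 0.10, 0.70],
            "C": [0.10, 0.10, 0.70, 0.10],
            "G": [0.05, 0.70, 0.10, 0.10],
            "T": [0.05, 0.10, 0.10, 0.10],
        }
        motif_len = 4
        background = 0.25  # uniform background base frequency

        def log_odds(window):
            """Log-odds score of a window under the motif vs. the background."""
            return sum(math.log2(pwm[b][i] / background) for i, b in enumerate(window))

        sequence = "TTAGCAAGCATT"
        hits = [(i, log_odds(sequence[i:i + motif_len]))
                for i in range(len(sequence) - motif_len + 1)]
        position, score = max(hits, key=lambda h: h[1])
        print(position, round(score, 2))  # strongest putative binding site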

  9. Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.

    Science.gov (United States)

    Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana

    2018-05-01

    Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB-approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with a biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), and the percentages of minimal cancers and axillary node-negative cancers, and were compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks between these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) Atlas, 5th edition. AIR and CDR were lower for screening indications than for diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits, while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR between screening and diagnostic indications. © 2017 Wiley Periodicals, Inc.
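
    For readers unfamiliar with audit arithmetic, the sketch below shows how performance parameters of this kind are typically derived from raw counts. All counts are hypothetical, and the definitions follow common BI-RADS audit usage, which may differ in detail from the study's exact methodology.

        # Hypothetical audit counts for one indication group (e.g., screening).
        total_exams = 1500
        abnormal_interpretations = 180   # exams given an abnormal assessment
        biopsies_performed = 120
        cancers_detected = 35

        air = abnormal_interpretations / total_exams    # abnormal interpretation rate
        ppv3 = cancers_detected / biopsies_performed    # PPV3: cancers per biopsy performed
        cdr = cancers_detected / total_exams * 1000     # cancers per 1000 exams

        print(f"AIR:  {air:.1%}")
        print(f"PPV3: {ppv3:.1%}")
        print(f"CDR:  {cdr:.1f} per 1000 examinations")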

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking enters into the management practice of both private and public enterprises in many ways. In management accounting, benchmark-based indicators (or key figures) are used, for example when setting targets in performance contracts or to specify the desired level of certain key figures in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its various facets, and describes four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking, as well as the use of benchmarking in budgeting and budget follow-up.

  12. Residual Generation for the Ship Benchmark Using Structural Approach

    DEFF Research Database (Denmark)

    Cocquempot, V.; Izadi-Zamanabadi, Roozbeh; Staroswiecki, M

    1998-01-01

    The prime objective of Fault-tolerant Control (FTC) systems is to handle faults and discrepancies using appropriate accommodation policies. The issue of obtaining information about the various parameters and signals that have to be monitored for fault detection purposes becomes a rigorous task with the growing number of subsystems. The structural approach, presented in this paper, constitutes a general framework for providing this information when the system becomes complex. The methodology of the approach is illustrated on the ship propulsion benchmark.

  13. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision…

  14. Benchmarking study of corporate research management and planning practices

    Science.gov (United States)

    McIrvine, Edward C.

    1992-05-01

    During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, the Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate 'research yield' and 'research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high-quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.

  15. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  16. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by the Working Group on Accelerator Shielding in the Research Committee on Radiation Behavior of the Atomic Energy Society of Japan were compiled by the Radiation Safety Control Center of the National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithms, the accuracy of computer codes and the nuclear data used in the codes. (author)

  17. Scan path entropy and Arrow plots: Capturing scanning behavior of multiple observers

    Directory of Open Access Journals (Sweden)

    Ignace T C Hooge

    2013-12-01

    Full Text Available Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to first AOI hit are often used as evaluation parameters. Here we present two additional measures: (1) scan path entropy, to quantify gaze guidance, and (2) the arrow plot, to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and the time required for 50% of the observers to first hit each AOI (T50). The new measures were tested in an eye tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure for gaze guidance, and the new visualization method reveals aspects of the average scan path and gives a better indication of the order in which global scanning takes place.
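
    As a rough sketch of the first measure, an entropy over AOI transitions can be computed from the string representations mentioned above; the exact definition of scan path entropy used in the paper may differ.

        import math
        from collections import Counter

        # One string per observer: each letter is a fixated AOI, in order.
        scan_paths = ["ABBCD", "ABCCD", "ACBBD"]  # hypothetical data

        transitions = Counter()
        for path in scan_paths:
            transitions.update(zip(path, path[1:]))

        total = sum(transitions.values())
        entropy = -sum((n / total) * math.log2(n / total)
                       for n in transitions.values())
        print(f"transition entropy: {entropy:.2f} bits")
        # Low entropy: observers follow similar routes (strong gaze guidance);
        # high entropy: scanning order varies widely across observers.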

  18. A benchmark study for different numerical parameters and their impact on the calculated strain levels for a model part door outer

    International Nuclear Information System (INIS)

    Berger, E.; Till, E.; Brenne, T.; Heath, A.; Hochholdinger, B.; Kassem-Manthey, K.; Kessler, L.; Koch, N.; Kortmann, G.; Kroeff, A.; Otto, T.; Verhoeven, H.; Steinbeck, G.; Vu, T.-C.; Wiegand, K.

    2005-01-01

    To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in 2000. The main objective of this group was the continuous study and discussion of numerical and material effects in simulation jobs, and the working out of possible solutions. As its first theme the group selected the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions. The part itself is a fictional outer body panel into which the original door handle of the VW Golf A4 has been incorporated, a typical position of possible material necking or rupture in the press shop. All prerequisites for a successful simulation were taken care of in advance: material data, boundary conditions, friction, the FLC and others were determined for the two materials under investigation - a mild steel and a dual-phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations with different programs as well as with different parameters showed parameters with negligible impact on the one hand and parameters with strong impact on the result on the other, thereby having a different influence on a possible material failure prediction.

  19. Solution of the fifth dynamic Atomic Energy Research benchmark problem using the coupled code DYN3D/ATHLET

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Several aspects of the very complex and complicated benchmark problem are analyzed in detail. Sensitivity studies with different hydraulic parameters are made. The influence on the course of the transient and on the solution is discussed. (Author)

  20. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? … founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible…

  1. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    Science.gov (United States)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
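
    A common way to combine diffusion creep, dislocation creep and a frictional yield criterion into one effective viscosity (a sketch of a typical Drucker-Prager-capped composite rheology, not necessarily the exact formulation used in the paper) is:

        \eta_{\mathrm{eff}} = \min\left[ \left( \frac{1}{\eta_{\mathrm{diff}}} + \frac{1}{\eta_{\mathrm{disl}}} \right)^{-1},\; \frac{C\cos\phi + p\sin\phi}{2\,\dot{\varepsilon}_{II}} \right]

    where C is the cohesion, \phi the internal friction angle, p the pressure and \dot{\varepsilon}_{II} the square root of the second invariant of the deviatoric strain rate: the harmonic average combines the two creep mechanisms, while the plastic branch caps the stress at the yield value.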

  2. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, or zero own-price elasticity.
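
    For orientation, the two well-known benchmark cases mentioned first can be written in their standard textbook forms (a sketch, not the paper's derivation):

        u(x_1, x_2) = x_1 + v(x_2) \quad \text{(quasilinear: zero income elasticity for } x_2\text{)}
        u(x_1, x_2) = x_1^{\alpha}\, x_2^{1-\alpha} \quad \text{(Cobb--Douglas: unit income elasticity for both goods)}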

  3. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  4. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrently with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  5. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
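
    The following toy single-qubit state-vector simulation (plain NumPy; it does not use the IBM Quantum Experience API) illustrates why identity circuits are sensitive to gate errors: a chain of imperfect X gates that should compose to the identity accumulates a small systematic over-rotation with circuit depth.

        import numpy as np

        def rx(theta):
            """Single-qubit rotation about the x axis by angle theta."""
            return np.array([
                [np.cos(theta / 2), -1j * np.sin(theta / 2)],
                [-1j * np.sin(theta / 2), np.cos(theta / 2)],
            ])

        epsilon = 0.01                 # systematic over-rotation per gate (rad)
        state0 = np.array([1.0, 0.0])  # |0>

        for depth in (2, 10, 50, 200):     # even depths: ideal circuit = identity
            gate = rx(np.pi + epsilon)     # imperfect X gate
            state = state0
            for _ in range(depth):
                state = gate @ state
            fidelity = abs(np.vdot(state0, state)) ** 2
            print(f"depth {depth:4d}: return probability {fidelity:.4f}")

    The return probability decays as cos²(depth·ε/2), so deeper identity circuits expose ever smaller gate errors, which is why such circuits are simple, scalable and sensitive benchmarks.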

  6. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate results of benchmark quality (4 to 5 places of accuracy).

  7. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    …on an ongoing basis in a benchmarking process. This chapter broadly examines where the benchmarking concept can, with some justification, be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 takes up the field of auditing. The final section of the chapter summarizes the considerations on benchmarking in connection with both areas.

  8. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties…

  9. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • The benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII-based data reduces the excess conservatism and brings the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining depletion uncertainty.

  10. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    …method-based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model…

  11. Scanning Quantum Cryogenic Atom Microscope

    Science.gov (United States)

    Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.

    2017-03-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10^-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.

  12. EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2

    Energy Technology Data Exchange (ETDEWEB)

    Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies

    2006-01-15

    The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive set of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by the removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in ¹²⁹I, ²³⁷Np and ²⁴³Am samples and of fission reaction rates in ²³⁵U, ²³⁷Np and ²⁴³Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside the experimental channels are also given according to the benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied in the calculations.

  13. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work is detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics, which include both quality and speed parameters.
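
    A minimal sketch of what such a combined metric could look like is given below; the metric names, values, the mapping of delays to scores and the weights are all hypothetical, and the paper's actual combination rule may differ.

        # Quality scores normalized to 0..1 (higher is better); delays in seconds.
        quality = {"sharpness": 0.82, "color_accuracy": 0.75, "noise": 0.68}
        speed = {"shot_to_shot_s": 1.4, "autofocus_s": 0.5, "startup_s": 2.2}

        def combined_score(quality, speed, w_quality=0.6):
            q = sum(quality.values()) / len(quality)
            # Map each delay onto 0..1: 0 s scores 1.0, 5 s or more scores 0.0.
            s = sum(max(0.0, 1.0 - t / 5.0) for t in speed.values()) / len(speed)
            return w_quality * q + (1.0 - w_quality) * s

        print(f"combined benchmark score: {combined_score(quality, speed):.3f}")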

  14. A CFD benchmarking exercise based on flow mixing in a T-junction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.L., E-mail: brian.smith@psi.ch [Thermal Hydraulics Laboratory, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Mahaffy, J.H. [Wheelsmith Farm, Spring Mill, PA (United States); Angele, K. [Vattenfall R and D, Älvkarleby (Sweden)

    2013-11-15

    The paper describes an international benchmarking exercise, sponsored by the OECD Nuclear Energy Agency (NEA), aimed at testing the ability of state-of-the-art computational fluid dynamics (CFD) codes to predict the important fluid flow parameters affecting high-cycle thermal fatigue induced by turbulent mixing in T-junctions. The results from numerical simulations are compared to measured data from an experiment performed at 1:2 scale by Vattenfall Research and Development, Älvkarleby, Sweden. The test data were released only at the end of the exercise making this a truly blind CFD-validation benchmark. Details of the organizational procedures, the experimental set-up and instrumentation, the different modeling approaches adopted, synthesis of results, and overall conclusions and perspectives are presented.

  15. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  16. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    Energy Technology Data Exchange (ETDEWEB)

    Klasic, B. [Hospital for pulmonary diseases, Zagreb (Croatia); Knezevic, Z.; Vekic, B. [Rudjer Boskovic Institute, Zagreb (Croatia); Brnic, Z.; Novacic, K. [Merkur Univ. Hospital, Zagreb (Croatia)

    2006-07-01

    Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, by the use of thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After statistical analysis of our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  17. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    International Nuclear Information System (INIS)

    Klasic, B.; Knezevic, Z.; Vekic, B.; Brnic, Z.; Novacic, K.

    2006-01-01

    Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, by the use of thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After statistical analysis of our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)
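
    The sketch below illustrates the kind of multiple linear regression described, fitting dose to technical and anthropometric predictors by ordinary least squares; the predictor set and all numbers are synthetic and are not the study's data.

        import numpy as np

        # Synthetic stand-ins for n = 85 patients: tube current-time product,
        # scan length and chest circumference as example predictors of dose.
        rng = np.random.default_rng(0)
        n = 85
        mas = rng.uniform(150, 350, n)         # mAs
        scan_len = rng.uniform(12, 18, n)      # cm
        chest_circ = rng.uniform(80, 120, n)   # cm
        dose = (0.002 * mas + 0.010 * scan_len - 0.001 * chest_circ
                + rng.normal(0, 0.05, n))      # mGy, synthetic relationship

        # Ordinary least squares: dose ~ b0 + b1*mAs + b2*length + b3*circumference
        X = np.column_stack([np.ones(n), mas, scan_len, chest_circ])
        coef, *_ = np.linalg.lstsq(X, dose, rcond=None)
        print("intercept and coefficients:", np.round(coef, 4))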

  18. Integral parameters for the Godiva benchmark calculated by using theoretical and adjusted fission spectra of 235U

    International Nuclear Information System (INIS)

    Caldeira, A.D.

    1987-05-01

    The theoretical and adjusted Watt spectrum representations for 235U are used as weighting functions to calculate k_eff and θf28/θf25 for the benchmark Godiva. The results obtained show that the values of k_eff and θf28/θf25 are not affected by the change in spectrum form. (author) [pt]
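
    For reference, the Watt representation referred to above has the standard two-parameter form

        \chi(E) = C\, e^{-E/a}\, \sinh\!\sqrt{bE}

    where a and b are the adjustable parameters (whose values differ between the theoretical and adjusted spectra) and C is a normalization constant.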

  19. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  20. Benchmarking of business excellence as a determinant of quality improvement

    Directory of Open Access Journals (Sweden)

    Srejović Milan

    2017-01-01

    Full Text Available In order for a process to operate successfully, it is necessary to constantly measure and improve its performance. One way to analyze the current state of a company's performance and to improve it is benchmarking. In a market-oriented environment, an enterprise must meet the expectations of different interest groups, or key stakeholders. However, in order to achieve business excellence, it is necessary to fulfill the requirements prescribed by the relevant standards. In this paper, the focus will be on the requirements of the ISO 9004:2009 standard. The aim of the paper is to highlight the significance of the benchmarking technique in measuring the business performance of companies. By implementing it, one can identify the strengths and weaknesses of the company in question. In this way, the process parameters that need to be improved are identified, so that the company can improve its competitive position.

  1. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  2. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  3. Optimisation of the CT parameters with evaluation of MDCT double-scan images in the planning of the dental implant treatment

    International Nuclear Information System (INIS)

    Wojciechowski, W.; Urbanik, A.; Kownacki, P.; Kownacki, S.; Sowa, A.

    2009-01-01

    Background: The aim of the present study was the optimisation of the examination parameters and the evaluation of the reliability of MDCT double-scan images obtained with computer navigation for dental implant treatment. Material/Methods: Using an MDCT scanner SOMATOM Sensation (Siemens), CT images of a phantom were acquired with varying slice collimation (10 × 0.75 mm, 10 × 1.5 mm), slice thickness (0.75, 1, 2, 3, 5 mm) and pitch (0.5, 1, 1.5). Additionally, an analysis of various filters from H20f to H60f was performed. The study used a phantom of a human cadaver head. Qualitative analysis was done using Nobel Guide (Nobel Biocare, Sweden), assessing possible artefacts in the images and measuring the bone structure on all filters in comparison with the real image. Results: The quality of the phantom images was assessed as optimal for slice thicknesses of 0.75 and 1 mm. The use of various values of the pitch did not make a statistically significant difference to image quality. The application of various filters did not alter the measured parameters of the bone structure; however, the use of lower filters (H30f and H40f) had a beneficial effect on the quality of the 3D reconstruction. The arrangement of the 'window' parameters in CT seemed to have a greater influence on the measurement and evaluation of the bone structure. Conclusions: Slice collimation and slice thickness are the most important parameters in the selection of the optimal scan protocol. It is recommended to use the above-mentioned parameter succession in postprocessing, with the application of various filters (H30f and H60f) at a stable arrangement of the 'window' in the CT examination. (authors)

  4. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    Science.gov (United States)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
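
    The Boltzmann averaging the abstract describes can be sketched numerically: weight a cumulative reaction probability N(E) by exp(-E/kBT) and integrate over energy. The sketch below is a generic illustration, not the authors' converged quantum calculation; the sigmoidal N(E), the barrier height and the unit partition function are placeholder assumptions.

```python
import numpy as np

# Placeholder cumulative reaction probability N(E); in the paper this comes
# from converged time-independent quantum scattering calculations.
def cum_reaction_probability(E, E0=0.4, width=0.05):   # energies in eV
    return 1.0 / (1.0 + np.exp(-(E - E0) / width))

KB = 8.617333e-5   # Boltzmann constant, eV/K
H  = 4.135668e-15  # Planck constant, eV*s

def thermal_rate(T, Q_react=1.0, E_max=2.0, n=20001):
    """k(T) = [h * Q_react(T)]^-1 * integral of N(E) exp(-E / kB T) dE.
    Q_react, the reactant partition function per unit volume, is set to 1
    here purely for illustration."""
    E = np.linspace(0.0, E_max, n)
    integrand = cum_reaction_probability(E) * np.exp(-E / (KB * T))
    return np.trapz(integrand, E) / (H * Q_react)

for T in (200.0, 300.0, 600.0):
    print(f"T = {T:5.0f} K   k(T) ~ {thermal_rate(T):.3e} (arbitrary units)")
```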

  5. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  6. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  7. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  8. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  9. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    … and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as an irredeemably social phenomenon. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how…

  10. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing.

  11. Bone scan and joint scan of hands and feet in rheumatoid arthritis

    International Nuclear Information System (INIS)

    Carpentier, N.; Verbeke, S.; Perdrisot, R.; Grilo, R.M.; Quenesson, E.; Bonnet, C.; Vergne, P.; Treves, R.; Bertin, P.; Boutros-Toni, F.

    2000-01-01

    The aim of this study was to determine the ability of joint and bone scans of the hands and feet to localize altered joints in patients with rheumatoid arthritis. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the joint scan were determined in comparison with clinical joint assessment. Fifteen patients (780 joints) were clinically examined (pain and synovitis); on the same day, a bone scan and a joint scan were performed after intravenous injection of 99mTc-oxidronate. Patients were scanned 5 minutes (tissue time, Tt) and 3¼ hours (bone time, T0) after administration. The uptake of the bisphosphonate was evaluated with a qualitative method using a grey scale, and the uptake of 99mTc-oxidronate was quantified using an extra-articular region of interest. The sensitivity, specificity, PPV and NPV of the scan at Tt were 46%, 96%, 85% and 78%; the same parameters were 75%, 66%, 53% and 84% for the scan performed at T0. The joint scan showed 22% false positives. These false positives could reflect earlier detection of joint alterations by the scan than by clinical examination, so the joint scan may forecast the evolution of the joints in patients with rheumatoid arthritis. (author)
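
    As a reminder of how the four reported figures of merit derive from a 2×2 contingency table (with clinical assessment as the reference standard), here is a minimal sketch; the counts are invented for illustration and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from true/false positives and
    negatives, taking clinical joint assessment as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Invented counts for a hypothetical 780-joint study -- illustration only.
for name, value in diagnostic_metrics(tp=120, fp=22, fn=140, tn=498).items():
    print(f"{name}: {100 * value:.0f}%")
```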

  12. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  13. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes.

  14. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
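
    As a sketch of the "no programming needed" idea, the snippet below runs a SPARQL count over RDF annotations with the Python rdflib package; the file names and the ex: predicates are hypothetical stand-ins for the project's actual OWL schema, which the record does not spell out.

```python
from rdflib import Graph

g = Graph()
g.parse("gold_annotations.rdf")     # manually curated corpus (assumed file name)
g.parse("system_annotations.rdf")   # text-mining system output (assumed file name)

# Count gold mutation annotations matched by a system annotation (true positives).
QUERY = """
PREFIX ex: <http://example.org/mutation#>
SELECT (COUNT(DISTINCT ?gold) AS ?tp) WHERE {
    ?gold a ex:GoldMutationAnnotation ;
          ex:matchedBy ?sys .
    ?sys  a ex:SystemMutationAnnotation .
}
"""
for row in g.query(QUERY):
    print("true positives:", row.tp)
```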

  15. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  16. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, partially solves that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  17. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  18. Dependable Benchmarking for Storage Systems in High-Energy Physics

    CERN Document Server

    Fleri Soler, Edward

    2017-01-01

    In high-energy physics, storage systems play a crucial role to store and secure very valuable data produced by complex experiments. The effectiveness and efficiency of data acquisition systems of such experiments depends directly on those of these storage systems. Coping with present day rates and reliability requirements of such experiments implies operating high-performance hardware under the best possible conditions, with a broad set of hardware and software parameters existing along the hierarchical levels, from networks down to drives. An extensive number of tests are required for the tuning of parameters to achieve optimised I/O operations. Current approaches to I/O optimisation generally consist of manual test execution and result taking. This approach lacks appropriate modularity, durability and reproducibility, attainable through dedicated testing facilities. The aim of this project is to conceive a user-friendly, dedicated storage benchmarking tool for the improved comparison of I/O parameters in re...
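
    As a flavour of the kind of parameter scan such a tool automates, here is a minimal, self-contained probe of one knob (write block size) on sequential throughput; it is a toy, not the project's benchmarking tool, and the file name and sizes are arbitrary assumptions.

```python
import os
import time

def write_throughput(path, block_size, total_mib=256):
    """Sequentially write total_mib MiB in block_size-byte chunks; return MiB/s."""
    buf = os.urandom(block_size)
    n_blocks = total_mib * 1024 * 1024 // block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # include the device flush inside the timing
    return total_mib / (time.perf_counter() - start)

for bs in (4096, 65536, 1 << 20):
    print(f"block size {bs:>8} B: {write_throughput('bench.tmp', bs):7.1f} MiB/s")
os.remove("bench.tmp")
```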

  19. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  20. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the first phase (depletion calculations) of the WWER-1000 burnup credit benchmark is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  1. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  2. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    Full Text Available The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time-consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired over orchard areas allow orchard structures and the geometric parameters of individual trees to be determined. In this research we propose an automatic method for determining the geometric parameters of individual olive trees from ALS data. The method is based on the α-shape algorithm applied to normalized point clouds; the algorithm returns polygons representing crown shapes. From the points located inside each polygon, we select the maximum and minimum heights, from which we estimate the tree height and the crown base height. We use the first two components of a Principal Component Analysis (PCA) as estimators for the crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points/m². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. Comparing our estimates with the field measurements, the RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
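
    A minimal sketch of the recipe described above (α-shape crown outline, extreme heights for tree and crown-base height, PCA extents for the crown diameters) could look as follows. The alphashape/shapely packages and the mapping alpha = 1/R are assumptions of this sketch, not details given by the authors.

```python
import numpy as np
import alphashape                      # third-party: pip install alphashape shapely
from shapely.geometry import Point

def tree_parameters(xyz, R=0.6):
    """Geometric parameters of one tree from its height-normalized ALS points.
    xyz: (n, 3) array; R: the radius parameter discussed in the record,
    translated here to the library's alpha as 1/R (an assumed convention)."""
    crown_poly = alphashape.alphashape([tuple(p) for p in xyz[:, :2]], 1.0 / R)
    inside = np.array([crown_poly.contains(Point(x, y)) for x, y, _ in xyz])
    pts = xyz[inside]
    tree_height = pts[:, 2].max()          # highest point inside the crown polygon
    crown_base_height = pts[:, 2].min()    # lowest point inside the crown polygon
    # PCA of the horizontal coordinates: extents along the first two principal
    # axes estimate the longer and shorter crown diameters.
    xy = pts[:, :2] - pts[:, :2].mean(axis=0)
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    extents = np.ptp(xy @ vt.T, axis=0)
    return tree_height, crown_base_height, extents[0], extents[1]
```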

  3. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  4. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  5. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  6. Benchmarking of the PHOENIX-P/ANC [Advanced Nodal Code] advanced nuclear design system

    International Nuclear Information System (INIS)

    Nguyen, T.Q.; Liu, Y.S.; Durston, C.; Casadei, A.L.

    1988-01-01

    At Westinghouse, an advanced neutronic methods program was designed to improve the quality of the predictions, enhance flexibility in designing advanced fuel and related products, and improve design lead time. Extensive benchmarking data is presented to demonstrate the accuracy of the Advanced Nodal Code (ANC) and the PHOENIX-P advanced lattice code. Qualification data to demonstrate the accuracy of ANC include comparison of key physics parameters against a fine-mesh diffusion theory code, TORTISE. Benchmarking data to demonstrate the validity of the PHOENIX-P methodologies include comparison of physics predictions against critical experiments, isotopics measurements and measured power distributions from spatial criticals. The accuracy of the PHOENIX-P/ANC Advanced Design System is demonstrated by comparing predictions of hot zero power physics parameters and hot full power core follow against measured data from operating reactors. The excellent performance of this system for a broad range of comparisons establishes the basis for implementation of these tools for core design, licensing and operational follow of PWR [pressurized water reactor] cores at Westinghouse

  7. Evaluation and benchmarking of an EC-QCL-based mid-infrared spectrometer for monitoring metabolic blood parameters in critical care units

    Science.gov (United States)

    Grafen, M.; Delbeck, S.; Busch, H.; Heise, H. M.; Ostendorf, A.

    2018-02-01

    Mid-infrared spectroscopy hyphenated with micro-dialysis is an excellent method for monitoring metabolic blood parameters, as it enables the concurrent, reagent-free and precise measurement of multiple clinically relevant substances such as glucose, lactate and urea in micro-dialysates of blood or interstitial fluid. For a marketable implementation, quantum cascade lasers (QCLs) seem to be a favourable technology due to their high degree of miniaturization and potentially low production costs. In this work, an external-cavity (EC) QCL-based spectrometer and two Fourier-transform infrared (FTIR) spectrometers were benchmarked with regard to the precision, accuracy and long-term stability needed for the monitoring of critically ill patients. For the tests, ternary aqueous solutions of glucose, lactate and mannitol (the latter for dialysis recovery determination) were measured in custom-made flow-through transmission cells of different pathlengths and analyzed with Partial Least Squares calibration models. It was revealed that the wavenumber tuning speed of the QCL had a severe impact on the EC-mirror trajectory, owing to a match between the digital-to-analog converter step frequency and the mechanical resonance frequency of the mirror actuation. With an appropriately selected tuning speed, the mirror oscillations acted as a hardware smoothing filter for the significant intensity variations caused by mode hopping. Besides the tuning speed, the effects of averaging over multiple spectra and of software smoothing parameters (Savitzky-Golay filters and FT smoothing) were investigated. The final settings led to a performance of the QCL system that was comparable with a research FTIR spectrometer and even surpassed the performance of a small FTIR mini-spectrometer.
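
    The two software steps named above (averaging repeated scans, then Savitzky-Golay smoothing) are easy to reproduce generically with SciPy. The synthetic "spectrum", noise level and filter settings below are illustrative assumptions, not the study's actual data or parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
# Toy absorbance band standing in for a QCL scan (wavenumbers 1000-1500 1/cm).
wn = np.linspace(1000.0, 1500.0, 1200)
true_spectrum = np.exp(-0.5 * ((wn - 1250.0) / 30.0) ** 2)
scans = true_spectrum + 0.05 * rng.standard_normal((64, wn.size))

averaged = scans.mean(axis=0)                                      # average 64 scans
smoothed = savgol_filter(averaged, window_length=21, polyorder=3)  # SG smoothing

for label, spec in [("single scan", scans[0]),
                    ("averaged", averaged),
                    ("averaged + SG", smoothed)]:
    print(f"{label:14s} residual rms = {np.std(spec - true_spectrum):.4f}")
```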

  8. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  9. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of RB reactor benchmark cores is presented in this paper. The first results of validating the well-known Monte Carlo code MCNP and its accompanying neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system, in the near future, in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments. (author)

  10. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project with benchmarking as its stated objective involving a specialty hospital or a specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  11. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  12. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database
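
    At its core, such a benchmarking database supports one simple operation: placing a building's energy use intensity (EUI) within a peer distribution. A minimal sketch, with an invented lognormal peer sample standing in for CEUS/CBECS records:

```python
import numpy as np

def eui_percentile(building_eui, peer_euis):
    """Percentage of peer buildings whose EUI is below this building's."""
    peers = np.sort(np.asarray(peer_euis, dtype=float))
    return 100.0 * np.searchsorted(peers, building_eui) / peers.size

# Invented peer distribution (kBtu/ft2-yr) standing in for survey data.
peers = np.random.default_rng(1).lognormal(mean=4.0, sigma=0.4, size=500)
rank = eui_percentile(70.0, peers)
print(f"EUI 70: uses less energy than {100 - rank:.0f}% of peers")
```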

  13. GPI Spectroscopy of the Mass, Age, and Metallicity Benchmark Brown Dwarf HD 4747 B

    Science.gov (United States)

    Crepp, Justin R.; Principe, David A.; Wolff, Schuyler; Giorla Godfrey, Paige A.; Rice, Emily L.; Cieza, Lucas; Pueyo, Laurent; Bechter, Eric B.; Gonzales, Erica J.

    2018-02-01

    The physical properties of brown dwarf companions found to orbit nearby, solar-type stars can be benchmarked against independent measures of their mass, age, chemical composition, and other parameters, offering insights into the evolution of substellar objects. The TRENDS high-contrast imaging survey has recently discovered a (mass/age/metallicity) benchmark brown dwarf orbiting the nearby (d = 18.69 ± 0.19 pc), G8V/K0V star HD 4747. We have acquired follow-up spectroscopic measurements of HD 4747 B using the Gemini Planet Imager to study its spectral type, effective temperature, surface gravity, and cloud properties. Observations obtained in the H-band and K1-band recover the companion and reveal that it is near the L/T transition (T1 ± 2). Fitting atmospheric models to the companion spectrum, we find strong evidence for the presence of clouds. However, spectral models cannot satisfactorily fit the complete data set: while the shape of the spectrum can be well-matched in individual filters, a joint fit across the full passband results in discrepancies that are a consequence of the inherent color of the brown dwarf. We also find a 2σ tension in the companion mass, age, and surface gravity when comparing to evolutionary models. These results highlight the importance of using benchmark objects to study “secondary effects” such as metallicity, non-equilibrium chemistry, cloud parameters, electron conduction, non-adiabatic cooling, and other subtleties affecting emergent spectra. As a new L/T transition benchmark, HD 4747 B warrants further investigation into the modeling of cloud physics using higher resolution spectroscopy across a broader range of wavelengths, polarimetric observations, and continued Doppler radial velocity and astrometric monitoring.

  14. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  15. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  16. Benchmarking Computational Fluid Dynamics for Application to PWR Fuel

    International Nuclear Information System (INIS)

    Smith, L.D. III; Conner, M.E.; Liu, B.; Dzodzo, B.; Paramonov, D.V.; Beasley, D.E.; Langford, H.M.; Holloway, M.V.

    2002-01-01

    The present study demonstrates a process used to develop confidence in Computational Fluid Dynamics (CFD) as a tool to investigate flow and temperature distributions in a PWR fuel bundle. The velocity and temperature fields produced by a mixing spacer grid of a PWR fuel assembly are quite complex. Before using CFD to evaluate these flow fields, a rigorous benchmarking effort should be performed to ensure that reasonable results are obtained. Westinghouse has developed a method to quantitatively benchmark CFD tools against data at conditions representative of the PWR. Several measurements in a 5 x 5 rod bundle were performed. Lateral flow-field testing employed visualization techniques and Particle Image Velocimetry (PIV). Heat transfer testing involved measurements of the single-phase heat transfer coefficient downstream of the spacer grid. These test results were used for comparison with CFD predictions. The parameters optimized in the CFD models on the basis of this comparison include the computational mesh, the turbulence model, and the boundary conditions. As an outcome of this effort, a methodology was developed for CFD modeling that provides confidence in the numerical results. (authors)

  17. Performance assessment of new neutron cross section libraries using MCNP code and some critical benchmarks

    International Nuclear Information System (INIS)

    Bakkari, B El; Bardouni, T El.; Erradi, L.; Chakir, E.; Meroun, O.; Azahra, M.; Boukhal, H.; Khoukhi, T El.; Htet, A.

    2007-01-01

    Full text: New releases of nuclear data files have been made available during the few recent years. The reference MCNP5 code (1) for Monte Carlo calculations is usually distributed with only one standard nuclear data library for neutron interactions, based on ENDF/B-VI. The main goal of this work is to process new neutron cross-section libraries in ACE continuous format for the MCNP code, based on the most recent data files made available to the scientific community: ENDF/B-VII.b2, ENDF/B-VI (release 8), JEFF-3.0, JEFF-3.1, JENDL-3.3 and JEF-2.2. In our data treatment, we used the modular NJOY system (release 99.9) (2) in conjunction with its most recent updates. The performance of the processed pointwise cross-section libraries was assessed by means of criticality predictions and analysis of other integral parameters for a set of reactor benchmarks. Almost all of the analyzed benchmarks were taken from the OECD International Handbook of Evaluated Criticality Safety Benchmark Experiments (3); some revised benchmarks were taken from references (4,5). These benchmarks use Pu-239 or U-235 as the main fissionable material in different forms and enrichments, and they cover various geometries. Monte Carlo calculations were performed in 3D with maximum detail of the benchmark descriptions, and the S(α,β) cross-section treatment was adopted in all thermal cases. The resulting one-standard-deviation confidence interval for the eigenvalue is typically ±13 to ±20 pcm

  18. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  19. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  20. Research on IoT-based water environment benchmark data acquisition management

    Science.gov (United States)

    Yan, Bai; Xue, Bai; Ling, Lin; Jin, Huang; Ren, Liu

    2017-11-01

    Over the past more than 30 years of reform and opening up, China's economy has developed at full speed, but this rapid growth is constrained by resource exhaustion and environmental pollution, and green, sustainable development has become a common goal for all of humanity. As part of environmental resources, water resources face problems such as pollution and shortage that hinder sustainable development. The top priority in water resources protection and research is to manage the basic data on water resources, which are the cornerstone and scientific foundation of water environment management. Based on studies of the aquatic organisms in the Yangtze River Basin, the Yellow River Basin, the Liaohe River Basin and the five lake areas, this paper puts forward an IoT-based water environment benchmark data management platform in which chemical probes convert the measured parameters to electrical signals and the benchmark test data of the water environment are then sent to node servers. The management platform will provide data and theoretical support for environmental chemistry, toxicology, ecology, etc., promote research in the environmental sciences, lay a solid foundation for comprehensive and systematic research on China's regional environmental characteristics, biotoxicity effects and environmental criteria, and provide objective data for compiling water environment standards.
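
    As a sketch of the acquisition path the record describes (probe reading, digitization, transmission to a node server), the snippet below posts one measurement over HTTP. The endpoint, probe identifier and JSON schema are hypothetical, since the platform's actual transport and data format are not given.

```python
import json
import time
import urllib.request

NODE_SERVER = "http://node-server.example/api/benchmark-data"   # hypothetical endpoint

def post_reading(probe_id, parameter, value, unit):
    """Send one digitized water-quality measurement to a node server."""
    payload = {"probe": probe_id, "parameter": parameter,
               "value": value, "unit": unit, "timestamp": time.time()}
    req = urllib.request.Request(
        NODE_SERVER,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a live server at NODE_SERVER):
# post_reading("YZ-017", "dissolved_oxygen", 7.9, "mg/L")
```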

  1. Quantitative evaluation of catheter radiopacity by fiber optic scanning densitometry

    International Nuclear Information System (INIS)

    Solomon, D.D.; Byron, M.P.; Lipton, M.J.

    1989-01-01

    A rapid, accurate method utilizing fiber optic scanning densitometry has been developed to quantify the radiopacity of vascular catheters. The technique provides for computerized calculation of catheter radiopacity relative to an appropriate control standard. A densitometer with a 180-degree collection angle for diffuse transmission density measurements was selected based on the diffusing nature of X-ray film (Q-factor 1.80). A benchmark catheter and 2-mil-thick brass shim stock were selected as control standards for the evaluation of mono- and multilumen tubing under standard X-ray conditions with an aluminum block attenuator. The authors present results from reproducibility studies which show that scan-to-scan repeatability is within ±1% and day-to-day variability is less than 5%. Application studies demonstrate a linear relationship between percent barium sulfate loading and the radiopacity of 16-gauge monolumen tubing. Results were also obtained from a clinical chest X-ray film showing good in-vivo/in-vitro correlation

  2. Proposal on the accelerator driven molten-salt reactor (ATW concept) benchmark calculations. (STAGE 1 - without an external neutron source)

    International Nuclear Information System (INIS)

    Svarny, J.; Mikolas, P.

    1999-01-01

    The first stage of the ATW neutronics benchmark (without an external source), based on simple modelling of the two-component concept, is presented. A simple model of the two-component ATW concept (graphite + molten salt system) was established. The main purpose of this benchmark is not only to provide the basic characteristics of the given ADS, but also to test codes in calculating the waste transmutation rate and to evaluate basic kinetics parameters and reactivity effects. (author)

  3. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  4. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination, and the quality of strategic planning and of the final strategies is a key factor in that competitiveness. Even though the tourism sector is not a typical field for benchmarking methods, such approaches can be applied successfully. The paper focuses on a key phase of the benchmarking process: the search for suitable reference partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. In this way it becomes possible to identify the strengths and weaknesses of the selected strategies and, at the same time, to discover suitable benchmarking partners.

  5. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  6. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  7. TU Electric reactor physics model verification: Power reactor benchmark

    International Nuclear Information System (INIS)

    Willingham, C.E.; Killgore, M.R.

    1988-01-01

    Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets and increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full power depletion results were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors

  8. Benchmarking of small-signal dynamics of single-phase PLLs

    DEFF Research Database (Denmark)

    Zhang, Chong; Wang, Xiongfei; Blaabjerg, Frede

    2015-01-01

    The phase-locked loop (PLL) is a critical component for the control and grid synchronization of grid-connected power converters. This paper presents a benchmarking study of the small-signal dynamics of three commonly used PLLs for single-phase converters: the enhanced PLL, the second-order generalized integrator based PLL, and the inverse PLL. First, a unified small-signal model of these PLLs is established for comparing their dynamics. Then, a systematic design guideline for tuning the parameters of the PLLs is formulated. To confirm the validity of the theoretical analysis, nonlinear time-domain simulations…
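
    The "unified small-signal model" the abstract refers to is, for PLLs of this kind, a second-order loop; a minimal sketch of such a model and one common tuning rule follows. The transfer-function form and the settling-time rule are standard textbook assumptions, not details taken from the paper.

```python
from scipy import signal

def pll_closed_loop(kp, ki):
    """Second-order small-signal model: theta_est/theta = (kp*s + ki)/(s^2 + kp*s + ki),
    i.e. a unity-gain phase detector with a PI controller and an integrator."""
    return signal.TransferFunction([kp, ki], [1.0, kp, ki])

def tune(settling_time=0.06, zeta=0.707):
    """Pick kp, ki from a 2% settling-time approximation ts ~ 4.6/(zeta*wn),
    with kp = 2*zeta*wn and ki = wn**2 (a common textbook rule)."""
    wn = 4.6 / (zeta * settling_time)
    return 2.0 * zeta * wn, wn ** 2

kp, ki = tune()
t, y = signal.step(pll_closed_loop(kp, ki))
print(f"kp = {kp:.1f}, ki = {ki:.1f}, step response settles to {y[-1]:.3f}")
```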

  9. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Energy Technology Data Exchange (ETDEWEB)

    Tang, T. F. [Dalian University of Technology, Dalian 116024 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Ma, C. H. [Fusion Simulation Center, School of Physics, Peking University, Beijing (China); Bass, E. M.; Candy, J. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States); Holland, C. [University of California San Diego, La Jolla, California 92093-0429 (United States)

    2016-03-15

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has recently been implemented in the BOUT++ framework; it contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulating plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform a beta scan based on the “Cyclone base case parameter set.” We find that the growth rate is almost the same for the two codes, and that the KBM is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the region of peak pressure gradient, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift-kinetic electron module in the GYRO code, including a small electron-electron collisionality to damp electron modes, the GYRO-generated mode structures and parity suggest that the modes are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. For the low-n modes, the linear growth rate peaks at the peak pressure-gradient position, as in the GLF results; for the high-n modes, however, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  10. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  11. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  12. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art for LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  13. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment given the basic characteristics of the firm and its markets that are expected to drive profitability (firm size, market power, etc.)). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and for managers to know the success factors and competitiveness determinants, determines which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  14. Evaporator unit as a benchmark for plug and play and fault tolerant control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Vinther, Kasper; Mojallali, Hamed

    2012-01-01

    This paper presents a challenging industrial benchmark for the implementation of control strategies under realistic working conditions. The developed control strategies should perform in a plug & play manner, i.e. adapt to varying working conditions, optimize their performance, and provide fault tolerance. A fault-tolerant strategy is needed to deal with a faulty sensor measurement of the evaporation pressure. The design and algorithmic challenges in the control of an evaporator include: unknown model parameters, large parameter variations, varying loads, and external discrete phenomena such as compressor switch on/off or abrupt changes in compressor speed.

  15. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring the performance of those mechanisms. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that today's systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  16. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and the accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions to be made. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  17. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the nuclear data library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of neutron transport calculations and may help in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depends on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with the experimental measurements using two versions of the available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which allows a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of the nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff discrepancies show good agreement with each other and with the experimental data within the 1σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose appropriate corrections to the experimental constants, which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.

  18. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  19. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  20. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
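
    For readers unfamiliar with the kernel being timed, the following is a minimal, unoptimized Python sketch of a conjugate gradient solver preconditioned by one symmetric Gauss-Seidel sweep, the combination HPCG is built around. It is not the reference implementation; the 1D Poisson test matrix is an assumption made only to keep the sketch self-contained (HPCG itself uses a 27-point 3D stencil):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve_triangular

      def symgs_apply(DL, DU, diag, r):
          # Apply M^-1 r for M = (D+L) D^-1 (D+U): forward then backward sweep
          y = spsolve_triangular(DL, r, lower=True)
          return spsolve_triangular(DU, diag * y, lower=False)

      def pcg(A, b, tol=1e-8, maxit=500):
          diag = A.diagonal()
          DL = sp.tril(A, format="csr")          # D + L
          DU = sp.triu(A, format="csr")          # D + U
          x = np.zeros_like(b)
          r = b - A @ x
          z = symgs_apply(DL, DU, diag, r)
          p, rz = z.copy(), r @ z
          for it in range(maxit):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  return x, it + 1
              z = symgs_apply(DL, DU, diag, r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x, maxit

      n = 200                                    # 1D Poisson test problem
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      x, iters = pcg(A, np.ones(n))
      print("converged in", iters, "iterations")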

  1. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmarking business cases performed inside healthcare facilities on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that comparison between different anaesthesia and intensive care departments willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate where detailed tariffs of activities are implemented.

  2. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    Science.gov (United States)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
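
    The ASF pipeline above is multi-step; the hedged Python sketch below isolates just its core ingredient, fitting a regularized thin-plate-spline surface to current ground points and re-classifying by residual. The fixed threshold stands in for the bending-energy-adaptive threshold of the actual method, and all parameter values and the synthetic data are illustrative assumptions:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def tps_ground_filter(xy, z, seed_mask, smoothing=1.0, threshold=0.3):
          # One densification step of a regularized TPS surface filter.
          # xy        : (n, 2) planimetric coordinates
          # z         : (n,) elevations
          # seed_mask : boolean mask of current ground points
          # smoothing : TPS regularization weight (resists noise/bending)
          # threshold : residual (m) below which points are accepted as ground
          surf = RBFInterpolator(xy[seed_mask], z[seed_mask],
                                 kernel="thin_plate_spline", smoothing=smoothing)
          residuals = z - surf(xy)
          return residuals < threshold        # new, denser ground mask

      # toy example: flat noisy ground with a few "building" points
      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 100, size=(500, 2))
      z = 0.05 * rng.standard_normal(500)
      z[:50] += 5.0                           # off-terrain points
      seed = z < np.percentile(z, 20)         # crude low-point seeding
      ground = tps_ground_filter(xy, z, seed)
      print("classified ground points:", ground.sum())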

  3. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  4. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  5. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  7. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  9. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  10. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  11. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO 2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
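
    As background on the predictor–corrector feature named in the highlights, a minimal sketch of the scheme on a hypothetical two-nuclide chain follows. This is not TINDER's or CINDER2008's implementation; the cross section, decay constant and flux model are invented for illustration:

      import numpy as np
      from scipy.linalg import expm

      def burnup_matrix(phi):
          # hypothetical chain: capture in N0 produces N1, which decays
          sigma_c, lam = 1.0e-24, 1.0e-9           # cm^2, 1/s (made-up values)
          return np.array([[-sigma_c * phi,  0.0],
                           [ sigma_c * phi, -lam]])

      def flux_from(N):
          # stand-in for a transport solve: flux scales with remaining N0
          return 3.0e14 * N[0] / 1.0e24

      def pc_step(N, dt):
          # predictor: deplete with beginning-of-step flux
          A0 = burnup_matrix(flux_from(N))
          N_pred = expm(A0 * dt) @ N
          # corrector: re-evaluate flux at end of step, average both solutions
          A1 = burnup_matrix(flux_from(N_pred))
          N_corr = expm(A1 * dt) @ N
          return 0.5 * (N_pred + N_corr)

      N = np.array([1.0e24, 0.0])                  # number densities (1/cm^3)
      for _ in range(10):
          N = pc_step(N, dt=30 * 86400.0)          # 30-day steps
      print(N)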

  12. Structural code benchmarking for the analysis of impact response of nuclear material shipping casks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1984-01-01

    The Transportation Technology Center at Sandia National Laboratories has initiated a program to benchmark thermal and structural codes that are available to the nuclear material transportation community. The program consists of the following five phases: (1) code inventory and review, (2) development of a cask-like set of problems, (3) multiple independent numerical analyses of the problems, (4) transfer of information, and (5) performance of experiments to obtain data for comparison with the numerical analyses. This paper summarizes the results obtained by the independent numerical analyses. The analyses indicate the variability that can be expected both due to differences in user-controlled parameters and from code-to-code differences. The results show that in purely elastic analyses, differences can be attributed to user-controlled parameters. Model problems involving elastic/plastic material behavior and large deformations, however, have greater variability, with significant differences reported for implicit and explicit integration schemes in finite element programs. This variability demonstrates the need to obtain experimental data to properly benchmark codes utilizing elastic/plastic material models and large-deformation capability

  13. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology.Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  14. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    Full Text Available This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion NODAL3 code for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and the maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for the fuel pellet and cladding has a negligible effect on the transient solutions. For productive PWR transient analysis work, based on the present sensitivity analysis results, we recommend that NODAL3 users use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for the fuel pellet and cladding, respectively.

  15. Colon distension and scan protocol for CT-colonography: an overview

    NARCIS (Netherlands)

    Boellaard, Thierry N.; de Haan, Margriet C.; Venema, Henk W.; Stoker, Jaap

    2013-01-01

    This article reviews two important aspects of CT-colonography, namely colonic distension and scan parameters. Adequate distension should be obtained to visualize the complete colonic lumen and optimal scan parameters should be used to prevent unnecessary radiation burden. For optimal distension,

  16. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark, and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  17. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    Science.gov (United States)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper aims to find optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were exercised through simulations of a chemical process and an electrical system in which the PID controller is tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study has been done across the algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
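
    The record does not give the plant models or algorithm settings, so the sketch below only illustrates the general pattern: a bare-bones PSO minimizing the ITAE of a step response for an assumed first-order plant. All numeric values and the plant itself are illustrative assumptions:

      import numpy as np

      def itae_cost(gains, dt=0.01, t_end=5.0):
          # ITAE of the unit-step response of a PID-controlled first-order
          # plant, dy/dt = -y + u (an assumed toy plant, Euler-discretized)
          kp, ki, kd = gains
          y = i_err = prev_err = 0.0
          cost = 0.0
          for k in range(int(t_end / dt)):
              err = 1.0 - y
              i_err += err * dt
              d_err = (err - prev_err) / dt
              u = kp * err + ki * i_err + kd * d_err
              y += dt * (-y + u)
              prev_err = err
              cost += (k * dt) * abs(err) * dt   # integral of t*|e(t)| dt
              if not np.isfinite(y) or abs(y) > 1e6:
                  return 1e9                     # unstable gains, penalize
          return cost

      def pso(f, lo, hi, n=30, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, size=(n, len(lo)))
          v = np.zeros_like(x)
          pbest, pcost = x.copy(), np.array([f(p) for p in x])
          g = pbest[pcost.argmin()]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              cost = np.array([f(p) for p in x])
              better = cost < pcost
              pbest[better], pcost[better] = x[better], cost[better]
              g = pbest[pcost.argmin()]
          return g, pcost.min()

      gains, best = pso(itae_cost, np.zeros(3), np.array([20.0, 20.0, 2.0]))
      print("kp, ki, kd =", gains, "ITAE =", best)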

  18. Neutronics benchmark of a MOX assembly with near-weapons-grade plutonium

    International Nuclear Information System (INIS)

    Difilippo, F.C.; Fisher, S.E.

    1998-01-01

    One of the proposed ways to dispose of surplus weapons-grade plutonium (Pu) is to irradiate the highly fissile material in light-water reactors in order to reduce the Pu enrichment to the level of spent fuel from commercial reactors. Considerable experience has been accumulated on the behavior of mixed-oxide (MOX) uranium and plutonium fuels for plutonium recycling in commercial reactors, but that experience relates to Pu enrichments typical of spent fuel, well below the values for weapons-grade plutonium. Important decisions on the kind of reactors to be used for the disposition of the plutonium are going to be based on calculations, so the validation of computational algorithms related to all aspects of the fuel cycle (power distributions, isotopics as a function of burnup, etc.) for weapons-grade isotopics is very important. Analysis of public domain data reveals that the cycle-2 irradiation in the Quad Cities boiling-water reactor (BWR) is the most recent US destructive examination. This effort involved the irradiation of five MOX assemblies using 80 and 90% fissile plutonium. These benchmark data were gathered by General Electric under the sponsorship of the Electric Power Research Institute. It is emphasized, however, that global parameters are not the focus of this benchmark, since the five bundles containing MOX fuels did not significantly affect the overall core performance. However, since the primary objective of this work is to compare against measured post-irradiation assembly data, the term benchmark is applied here. One important reason for performing the benchmark on the Quad Cities irradiation is that the fissile blends (up to 90%) are higher than reactor-grade and quite close to weapons-grade isotopics.

  19. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  20. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Science.gov (United States)

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

    In the automotive industry, as well as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press-hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process, offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is the huge footprint required to realize the necessary stability and work envelope. Alternatively, a robot-based system could be advantageous if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM ("beam in motion") system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot-based laser processing machine that meets all OEM specifications for processing press-hardened steel parts. A benchmark of the BIM versus a gantry system was performed for all parameters required to fulfil OEM specifications for press-hardened steel parts. The result is a highly productive, accurate and efficient system based on one or multiple robot modules working simultaneously. The paper presents the improvements to the robot machine concept BIM addressed in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and parameters of the BIM system versus a gantry system for 3D cutting applications using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at a small footprint is given.

  1. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    Science.gov (United States)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.

  2. Validation of full core geometry model of the NODAL3 code in the PWR transient Benchmark problems

    International Nuclear Information System (INIS)

    T-M Sembiring; S-Pinem; P-H Liem

    2015-01-01

    The coupled neutronics and thermal-hydraulics (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code had not yet been validated against the transient benchmark cases of a control rod assembly (CR) ejection at the core periphery using a full core geometry model, the C1 and C2 cases. Through this research work, the accuracy of the NODAL3 code for one CR ejection, or for the ejection of an unsymmetrical group of CRs, can be validated. The calculations with the NODAL3 code have been carried out using the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results of the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of power maximum when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results of the NODAL3 code show no systematic differences, which indicates that the neutronics and T/H modules adopted in the code are correct. Therefore, all calculation results obtained with the NODAL3 code are in very good agreement with the reference results. (author)

  3. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September 2009; it is not taken from a real chemical system, but consists of realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art, massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results for the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexation consists of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity dependent for bidentate reactions in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made properly to suit PFLOTRAN.
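
    The MoMaS chemistry is far harder than this, but the role of surface complexation in the easy test case can be illustrated with a minimal, hypothetical monodentate example: one mass-action law plus two mole balances, solved for the free concentration. For a bidentate reaction the free-site term enters squared, which is why the selectivity coefficient becomes porosity dependent when sites are referred to a bulk volume:

      from scipy.optimize import brentq

      def monodentate_speciation(K, total_sites, total_solute):
          # X + C <-> XC with equilibrium constant K (mass action), plus
          # mole balances: total_sites = [X] + [XC], total_solute = [C] + [XC]
          def residual(c_free):
              xc = K * c_free * total_sites / (1.0 + K * c_free)
              return c_free + xc - total_solute
          return brentq(residual, 0.0, total_solute)

      c = monodentate_speciation(K=1.0e4, total_sites=1.0e-3, total_solute=1.0e-3)
      print(f"free concentration: {c:.3e} mol/L")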

  4. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria of benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  5. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

    To validate the code system PASC-3 and the multigroup cross-section library ECNJEF1.1 on various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages of fissile materials such as spent fuel. The fissile nuclides in these benchmarks are U-235 and Pu-239. The modules of PASC-3 which have been used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with experimental data. For the calculational benchmarks the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  6. Revaluing benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually keeping researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  7. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  9. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  10. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes.

    Science.gov (United States)

    Ara, Mirian; Ferreras, Antonio; Pajarin, Ana B; Calvo, Pilar; Figus, Michele; Frezzotti, Paolo

    2015-01-01

    To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.
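
    As an illustration of how such repeatability statistics are commonly computed, the sketch below implements the standard two-way random-effects, absolute-agreement, single-measure ICC (often written ICC(2,1)) and one common COV definition. The data are fabricated; nothing here reproduces the study's measurements:

      import numpy as np

      def icc_2_1(X):
          # Two-way random-effects, absolute-agreement, single-measure ICC.
          # X : (n subjects, k repeated scans) matrix of one RNFL parameter.
          n, k = X.shape
          grand = X.mean()
          row_m = X.mean(axis=1)
          col_m = X.mean(axis=0)
          ss_rows = k * ((row_m - grand) ** 2).sum()
          ss_cols = n * ((col_m - grand) ** 2).sum()
          ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
          msr = ss_rows / (n - 1)
          msc = ss_cols / (k - 1)
          mse = ss_err / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      def cov_percent(X):
          # within-subject SD over the grand mean, expressed in percent
          return 100.0 * X.std(axis=1, ddof=1).mean() / X.mean()

      rng = np.random.default_rng(1)
      truth = rng.normal(55.0, 6.0, size=(80, 1))          # "true" values
      scans = truth + rng.normal(0.0, 1.5, size=(80, 3))   # 3 scans per eye
      print(f"ICC(2,1) = {icc_2_1(scans):.3f}, COV = {cov_percent(scans):.2f}%")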

  11. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    Science.gov (United States)

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among the 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA.

  12. Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring

    Science.gov (United States)

    Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John

    2014-01-01

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
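
    The benchmarking statistic itself, a lagged rank cross-correlation between soil moisture and subsequent NDVI, is straightforward to compute. A hedged sketch, assuming weekly series and synthetic data purely for illustration:

      import numpy as np
      from scipy.stats import spearmanr

      def lagged_rank_xcorr(soil_moisture, ndvi, max_lag_weeks=12):
          # Rank cross-correlation of NDVI(t) against soil_moisture(t - lag),
          # the direction relevant for anticipating vegetation response.
          out = {}
          for lag in range(max_lag_weeks + 1):
              sm = soil_moisture[:-lag] if lag else soil_moisture
              rho, _ = spearmanr(sm, ndvi[lag:])
              out[lag] = rho
          return out

      # synthetic weekly series: NDVI responds to soil moisture ~4 weeks earlier
      rng = np.random.default_rng(2)
      sm = rng.random(300)
      ndvi = np.roll(sm, 4) + 0.3 * rng.standard_normal(300)
      rhos = lagged_rank_xcorr(sm, ndvi)
      best = max(rhos, key=rhos.get)
      print(best, "weeks lag has max rho =", round(rhos[best], 3))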

  13. A Scanning Quantum Cryogenic Atom Microscope

    Science.gov (United States)

    Lev, Benjamin

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 µm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/√Hz per point during the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ₀/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature down to 4 K in unconventional superconductors and topologically nontrivial materials.
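
    The quoted figures are mutually consistent with square-root-of-N averaging: reading out roughly 100 resolution-limited points in parallel gives each point about 100 times the integration time of a sequential scan of the same points, improving per-point sensitivity by √100 = 10. A two-line check using the numbers quoted above:

      import math

      per_point_nT = 6.0     # nT/sqrt(Hz), single point at the quoted duty cycle
      n_points = 100         # points read out in parallel along the BEC
      print(per_point_nT / math.sqrt(n_points) * 1e3, "pT/sqrt(Hz) per point")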

  14. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture. This culture is also destined to break into the accounting area, not only strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  15. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected

  16. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  17. Calculation of Single Cell and Fuel Assembly IRIS Benchmarks Using WIMSD5B and GNOMER Codes

    International Nuclear Information System (INIS)

    Pevec, D.; Grgic, D.; Jecmenica, R.

    2002-01-01

    The IRIS reactor (an acronym for International Reactor Innovative and Secure) is a modular, integral, light water cooled, small to medium power (100-335 MWe/module) reactor, which addresses the requirements defined by the United States Department of Energy for Generation IV nuclear energy systems, i.e., proliferation resistance, enhanced safety, improved economics, and waste reduction. An international consortium led by Westinghouse/BNFL was created for the development of the IRIS reactor; it includes universities, institutes, commercial companies, and utilities. The Faculty of Electrical Engineering and Computing, University of Zagreb, joined the consortium in 2001, with the aim of taking part in IRIS neutronics design and safety analyses of IRIS transients. A set of neutronic benchmarks for the IRIS reactor was defined with the objective of comparing the results of all participants under exactly the same assumptions. In this paper the calculation of Benchmark 44 for the IRIS reactor is described. Benchmark 44 is defined as a core depletion benchmark problem for specified IRIS reactor operating conditions (e.g., temperatures, moderator density) without feedback. Enriched boron, inhomogeneously distributed in the axial direction, is used as an integral fuel burnable absorber (IFBA). The aim of this benchmark was to enable a more direct comparison of the results of different code systems. Calculations of Benchmark 44 were performed using the modified CORD-2 code package. The CORD-2 code package consists of the WIMSD and GNOMER codes. WIMSD is a well-known lattice spectrum calculation code. GNOMER solves the neutron diffusion equation in three-dimensional Cartesian geometry by the Green's function nodal method. The following parameters were obtained in the Benchmark 44 analysis: effective multiplication factor as a function of burnup, nuclear peaking factor as a function of burnup, axial offset as a function of burnup, core-average axial power profile, core radial power profile, axial power profile for selected

  18. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which was performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible, arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, the Pool Critical Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  19. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  20. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  1. Signal Processing Effects for Ultrasonic Guided Wave Scanning of Composites

    International Nuclear Information System (INIS)

    Roth, D.J.; Cosgriff, L.M.; Martin, R.E.; Burns, E.A.; Teemer, L.

    2005-01-01

    The goal of this ongoing work is to optimize experimental variables for a guided wave scanning method to obtain the most revealing and accurate images of defect conditions in composite materials. This study focuses on signal processing effects involved in forming guided wave scan images. Signal processing enters at two basic levels when deriving ultrasonic guided wave scan images. At the primary level, NASA GRC has developed algorithms to extract over 30 parameters from the multimode signal and its power spectral density. At the secondary level, there are many variables whose values must be chosen and that affect the actual computation of these parameters. In this study, a ceramic matrix composite sample having a delamination is characterized using the ultrasonic guided wave scan method. Energy balance and decay rate parameters of the guided wave at each scan location are calculated to form images. These images are compared with ultrasonic C-scan and thermography images. The effect on image quality of the time portion of the waveform that is processed is assessed by comparison with images formed using the total acquired waveform

  2. Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.

    Science.gov (United States)

    Belloir, C; Stanford, C; Soares, A

    2015-01-01

    Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical and mechanical consumptions that can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes, and Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 in Site 2. The mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3) and chemical input was significant in Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
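    The per-process shares quoted above follow directly from the reported energy intensities; a small worked example for Site 1, using only the numbers given in the abstract:

        # Site 1 energy intensities from the study, in kWh per m3 treated.
        total = 2.32
        aeration = 2.08
        mechanical = 0.212

        print(f"aeration share:   {aeration / total:6.1%}")    # ~89.7% of the total
        print(f"mechanical share: {mechanical / total:6.1%}")  # ~9.1%, reported as 9%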

  3. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
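    The figures of merit are plain arithmetic means of the calculated-to-measured activity ratios with their standard deviations; a minimal sketch of that bookkeeping (the six C/M values below are hypothetical placeholders, not the HBR-2 measurements):

        import numpy as np

        # Hypothetical C/M ratios for six surveillance-capsule dosimeters;
        # the actual HBR-2 values are tabulated in the report.
        cm = np.array([0.87, 0.92, 0.89, 0.94, 0.88, 0.91])

        print(f"average C/M = {cm.mean():.2f} +/- {cm.std(ddof=1):.2f}")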

  4. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used

  5. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  6. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high-temperature gas-cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  7. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...
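    One way to operationalize the core idea, sketched below under the assumption that each software version is benchmarked repeatedly and that successive versions are compared with Welch's t-test, is shown here; the paper's actual statistical machinery for repeated results is more elaborate.

        import math

        def welch_t(a, b):
            """Welch's t statistic for two independent samples."""
            ma, mb = sum(a) / len(a), sum(b) / len(b)
            va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
            vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
            return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

        # Response times (ms) from repeated benchmark runs of two versions.
        old = [101.2, 99.8, 100.5, 102.1, 100.9]
        new = [104.0, 105.3, 103.8, 104.9, 105.1]

        t = welch_t(old, new)
        # |t| > 2.3 is a rough 5% two-sided threshold for samples this small;
        # a positive t means the new version is slower.
        print(f"t = {t:.2f}", "-> possible regression" if t > 2.3 else "-> no change detected")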

  8. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  9. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  10. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burnup benchmarks, post-irradiation examination studies, comparison of basic evaluated data files and analysis of selected benchmarks for the Th-U fuel cycle

  11. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    Directory of Open Access Journals (Sweden)

    C. Wu

    2011-06-01

    Full Text Available In Very Large Scale Integrated Circuits (VLSI) design, existing Design-for-Test (DFT) based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security, and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudorandom test vector generation. All of the test vectors are shifted in through the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit, thereby watermarking the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. Experimental results on several ISCAS benchmarks show that, compared with existing methods, the proposed scheme has lower resource overhead, a lower probability of coincidence, and a higher coverage rate of watermark detection.
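    A Fibonacci-style LFSR of the kind used for pseudorandom test vector generation can be sketched in a few lines; the register width and tap polynomial below are illustrative choices, not the ones used in the paper.

        def lfsr_vectors(seed, taps, width, count):
            """Yield pseudorandom test vectors from a Fibonacci LFSR.

            seed  -- nonzero initial register state
            taps  -- bit positions XORed into the feedback (0 = LSB)
            width -- register width in bits
            count -- number of vectors to produce
            """
            state = seed
            for _ in range(count):
                yield state
                fb = 0
                for t in taps:
                    fb ^= (state >> t) & 1
                state = ((state << 1) | fb) & ((1 << width) - 1)

        # Illustrative 8-bit LFSR; taps correspond to x^8 + x^6 + x^5 + x^4 + 1.
        for v in lfsr_vectors(0b10110101, taps=(7, 5, 4, 3), width=8, count=4):
            print(f"{v:08b}")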

  12. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  13. DORT-TD/THERMIX solutions for the OECD/NEA/NSC PBMR400 MW coupled neutronics thermal hydraulics transient benchmark

    International Nuclear Information System (INIS)

    Tyobeka, Bismark; Pautz, Andreas; Ivanov, Kostadin

    2008-01-01

    For new reactor designs that are still under review, such as the PBMR, little experimental data exists against which newly developed computer codes can be benchmarked. In such a situation, nuclear engineers and designers must resort to validating a newly developed code through a code-to-code benchmarking exercise against the validated codes that are currently in use to analyze this reactor design, albeit very few of them. There are numerous HTR core physics benchmarks currently being pursued by different organizations, for different purposes. One such benchmark exercise is the PBMR-400 MW OECD/NEA/NSC coupled neutronics/thermal hydraulics transient benchmark. In this paper, a newly developed coupled neutronics/thermal hydraulics code system, DORT-TD/THERMIX, with both transport and diffusion theory options, is used to simulate the transient scenarios of the above-mentioned benchmark problem. Steady-state calculation results are compared with selected participants' results, as are transient models in which the diffusion and transport theory solutions of the same code system are directly compared. Several sensitivity studies are also shown in order to determine how much a change in certain parameters influences the overall behaviour of a given transient. It is shown that DORT-TD/THERMIX is a versatile tool which can be deployed for design and safety analyses of high temperature pebble-bed reactors. (authors)

  14. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
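    The gradient sparsity that the authors correlate with the critical number of projections can be computed directly from an image. A minimal numpy sketch follows; the proportionality constant c is a hypothetical placeholder to be fitted from data such as SparseBeads, not a value from the paper.

        import numpy as np

        def gradient_sparsity(img, tol=1e-6):
            """Number of pixels with a nonzero (forward-difference) gradient."""
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            return int(np.count_nonzero(np.hypot(gx, gy) > tol))

        # Toy piecewise-constant image: a disc on a flat background.
        yy, xx = np.mgrid[:256, :256]
        img = ((xx - 128) ** 2 + (yy - 128) ** 2 < 60 ** 2).astype(float)

        s = gradient_sparsity(img)
        c = 0.02  # hypothetical slope of the near-linear relation reported
        print(f"gradient sparsity: {s}, predicted sufficient projections ~ {c * s:.0f}")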

  15. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

    Full Text Available Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  16. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  17. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    International Nuclear Information System (INIS)

    Primm III, RT

    2002-01-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors

  18. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  19. RANSAC approach for automated registration of terrestrial laser scans using linear features

    Directory of Open Access Journals (Sweden)

    K. Al-Durgham

    2013-10-01

    Full Text Available The registration process of terrestrial laser scans (TLS) targets the problem of how to combine several laser scans in order to attain better information about features than could be obtained from a single scan. The main goal of the registration process is to estimate the parameters which determine the geometrical variation between the origins of datasets collected from different locations. Scale, shift, and rotation parameters are usually used to describe such variation. This paper presents a framework for the registration of overlapping terrestrial laser scans by establishing an automatic matching strategy that uses 3D linear features. More specifically, invariant separation characteristics between 3D linear features extracted from laser scans are used to establish hypothesized conjugate linear features between the laser scans. These candidate matches are then used to geo-reference the scans relative to a common reference frame. The registration workflow simulates the well-known RANdom Sample Consensus (RANSAC) method for determining the registration parameters, whereas the iterative closest projected point (ICPP) is utilized to determine the most probable solution of the transformation parameters from several solutions. The experimental results prove that the proposed methodology can be used for the automatic registration of terrestrial laser scans using linear features.
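    The hypothesize-and-verify loop at the heart of the approach can be illustrated with a simplified RANSAC that estimates a rigid transformation from putative 3D point correspondences. The paper itself matches 3D linear features and refines with ICPP; points, together with the illustrative threshold and iteration count below, are used here only to keep the sketch short.

        import numpy as np

        def rigid_from_pairs(P, Q):
            """Least-squares R, t with Q ~ R @ P + t (Kabsch algorithm)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cQ - R @ cP

        def ransac_register(P, Q, iters=500, thresh=0.05, rng=np.random.default_rng(0)):
            """RANSAC over putative correspondences P[i] <-> Q[i]."""
            best_inliers = np.zeros(len(P), dtype=bool)
            for _ in range(iters):
                idx = rng.choice(len(P), size=3, replace=False)  # minimal sample
                R, t = rigid_from_pairs(P[idx], Q[idx])
                resid = np.linalg.norm(Q - (P @ R.T + t), axis=1)
                inliers = resid < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            # Refit on the consensus set (ICPP-style refinement would iterate further).
            return rigid_from_pairs(P[best_inliers], Q[best_inliers])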

  20. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of computers running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. The evaluation shows that the main factors suppressing parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
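    The conclusion about start-up time can be made concrete with the usual linear communication-cost model, T(n) = t_startup + n / bandwidth; the latency and bandwidth figures in the sketch below are illustrative values, not measurements from the report.

        # Linear (Hockney-style) model of point-to-point communication time.
        def comm_time(nbytes, t_startup=20e-6, bandwidth=300e6):
            """Seconds to transfer nbytes; illustrative latency/bandwidth values."""
            return t_startup + nbytes / bandwidth

        for n in (8, 1_024, 1_048_576):
            t = comm_time(n)
            startup_share = 20e-6 / t
            # For small messages the fixed start-up cost dominates total time.
            print(f"{n:>9} B: {t * 1e6:9.1f} us  (start-up share {startup_share:6.1%})")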

  1. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

    Benchmarking involves identifying the healthcare programs with the most favorable outcomes as a means to identify and spread effective strategies for the delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by a desire to use them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  2. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  3. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

    Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2 and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were performed using the SVOffice suite. (author)

  4. A Voxel-Based Method for Automated Identification and Morphological Parameters Estimation of Individual Street Trees from Mobile Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Hongxing Liu

    2013-01-01

    Full Text Available As an important component of urban vegetation, street trees play a key role in maintaining environmental quality, the aesthetic beauty of the urban landscape, and social service for inhabitants. Acquiring accurate and up-to-date inventory information for street trees is required for urban horticultural planning and municipal urban forest management. This paper presents a new Voxel-based Marked Neighborhood Searching (VMNS) method for efficiently identifying street trees and deriving their morphological parameters from Mobile Laser Scanning (MLS) point cloud data. The VMNS method consists of six technical components: voxelization, calculating values of voxels, searching and marking neighborhoods, extracting potential trees, deriving morphological parameters, and eliminating pole-like objects other than trees. The method is validated and evaluated through two case studies. The evaluation results show that the completeness and correctness of our method for street tree detection are over 98%. The derived morphological parameters, including tree height, crown diameter, diameter at breast height (DBH), and crown base height (CBH), are in good agreement with the field measurements. Our method provides an effective tool for extracting various morphological parameters for individual street trees from MLS point cloud data.
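    The first component, voxelization, amounts to binning points into a regular 3D grid; a minimal numpy sketch under assumed toy data (the voxel size and the synthetic trunk/crown cloud are arbitrary illustrations, not the paper's parameters):

        import numpy as np

        def voxelize(points, voxel_size=0.2):
            """Map an (N, 3) point cloud to occupied-voxel indices and point counts."""
            ijk = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
            occupied, counts = np.unique(ijk, axis=0, return_counts=True)
            return occupied, counts

        # Toy cloud: a vertical "trunk" plus a blob of "crown" points.
        rng = np.random.default_rng(1)
        trunk = np.column_stack([rng.normal(0, 0.05, 500), rng.normal(0, 0.05, 500),
                                 rng.uniform(0, 3, 500)])
        crown = rng.normal([0.0, 0.0, 4.0], 1.0, size=(2000, 3))

        vox, counts = voxelize(np.vstack([trunk, crown]))
        print(f"{len(vox)} occupied voxels; densest holds {counts.max()} points")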

  5. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
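    As a flavor of the first problem class, spinodal decomposition is most simply modeled by the Cahn-Hilliard equation. The explicit finite-difference sketch below is only an illustration, with parameters chosen for numerical convenience; it is not the CHiMaD/NIST benchmark specification.

        import numpy as np

        # dc/dt = M * lap( c^3 - c - kappa * lap(c) ) on a periodic 2D grid.
        n, dx, dt, M, kappa = 128, 1.0, 0.01, 1.0, 1.0

        def lap(f):
            """5-point Laplacian with periodic boundaries."""
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

        rng = np.random.default_rng(0)
        c = 0.1 * (rng.random((n, n)) - 0.5)      # small noise about c = 0

        for step in range(2000):
            mu = c**3 - c - kappa * lap(c)        # chemical potential
            c += dt * M * lap(mu)                 # conservative explicit update

        # Mean composition is conserved; the field coarsens into two phases.
        print(f"<c> = {c.mean():+.3e}, range [{c.min():.2f}, {c.max():.2f}]")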

  6. Quantitative investigation of the effects of the scanning parameters in the digitization of EBT and EBT2 Gafchromic film dosimetry with flatbed scanners.

    Science.gov (United States)

    Hu, Yida; Ahmad, Salahuddin; Ali, Imad

    2012-01-01

    With the increasing popularity and complexity of intensity-modulated radiation therapy (IMRT) delivery modalities, including regular and arc therapies, there is a growing challenge in validating the accuracy of dose distributions. Gafchromic films have superior characteristics for dose verification over other conventional dosimeters. In order to optimize the use of Gafchromic films in clinical IMRT quality assurance procedures, the scanning parameters of EBT and EBT2 films with a flatbed scanner were investigated. The effects of several parameters, including scanning position, orientation, uniformity, film sensitivity and optical density (OD) growth after irradiation, were quantified. The profiles of the EBT and EBT2 films had noise levels of 0.6% and 0.7%, respectively. Considerable orientation dependence was observed: the scanner value difference between landscape and portrait modes was about 12% and 10% for EBT and EBT2 films, respectively. The highest response sensitivity was observed using digitized red color images of the EBT2 film scanned in landscape mode. The total system non-uniformity, composed of contributions from the film and the scanner, was less than 1.7%. OD variations showed that the EBT gray scale grew more slowly but reached a higher growth value of 15%, compared with 12% for the EBT2 gray scale, over a long time (480 hours) post-irradiation. The EBT film read with the red color channel showed the minimal growth, with OD increasing by up to 3% within 3 days after irradiation and taking one week to stabilize.
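    Scanner pixel values are conventionally converted to net optical density before calibration against dose; a one-line sketch of the standard relation (the 16-bit red-channel readings below are hypothetical):

        import math

        def net_od(pv_before, pv_after):
            """Net optical density from pixel values before/after exposure."""
            return math.log10(pv_before / pv_after)

        # Hypothetical red-channel readings of one film patch.
        print(f"netOD = {net_od(45000, 30000):.3f}")  # log10(1.5) ~ 0.176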

  7. Final results of the fifth three-dimensional dynamic Atomic Energy Research benchmark problem calculations

    International Nuclear Information System (INIS)

    Hadek, J.

    1999-01-01

    The paper gives a brief survey of the fifth three-dimensional dynamic Atomic Energy Research benchmark calculation results obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh Atomic Energy Research Symposium (Hoernitz near Zittau, 1997). Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle, under hot shutdown conditions with one control rod group stuck out. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The standard WWER-440/213 input deck of the ATHLET code was adopted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains a brief characterization of the NPP input deck and the reactor core model. The second part shows the time dependencies of important global and local parameters. In comparison with the results published at the eighth Atomic Energy Research Symposium (Bystrice nad Pernstejnem, 1998), the results published in this paper are based on improved ATHLET descriptions of control and safety systems. (Author)

  8. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem comprises a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.

  9. Effect of imaging parameters of spiral CT scanning on image quality for the dental implants. Visual evaluation using a semi-anthropomorphic mandible phantom

    International Nuclear Information System (INIS)

    Morita, Yasuhiko; Indou, Hiroko; Honda, Eiichi

    2002-01-01

    The purpose of this study was to evaluate the effect of the parameters of spiral CT scanning on the image quality required for the planning of dental implant operations. A semi-anthropomorphic mandible phantom, which has artificial mandibular canals and tooth roots, was used as a standard object for imaging. Spiral CT scans of the phantom, placed in water phantoms with diameters of 20 and 16 cm, were performed. Visibility of the artificial mandibular canal, made of a Teflon tube, and of the gaps between tooth apex and canal in the mandibular phantom was evaluated for various combinations of slice thickness, table speed, angle to the canal, and x-ray tube current. Tooth roots were made of PVC (polyvinyl chloride). The artificial mandibular canal was clearly observed on the images of 1 mm slice thickness. At the same table speed of 2 mm/rotation, the images of the thin slice (1 mm) were superior to those of the thick slice (2 mm). The gap between tooth apex and canal was erroneously diagnosed on the images with a table speed of 3 mm/rotation. Horizontal scanning in parallel to the canal resulted in poor image quality for observation of mandibular canals because of the partial volume effect. A relatively high x-ray tube current (125 mA) at thin-slice (1 mm) scanning was required for scanning the mandibular phantom in the 20 cm water vessel. Spiral scanning with a slice thickness of 1 mm and a table speed of 1 or 2 mm/rotation seemed to be suitable for dental implants. The results of this study suggested that diagnosis from two independent spiral scans with different angles to the object is more accurate and more efficient than single spiral scanning. (author)

  10. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be

  11. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  12. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relating to the PWR MOX pin-cell benchmark, representing typical fuel for plutonium recycling, with one case corresponding to a first cycle and the second to a fifth cycle. These computer-readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple recycling benchmarks

  13. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  14. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  15. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculations. (Author)

  16. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    Science.gov (United States)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with a Monte Carlo-type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via the inclusion of specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2 and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
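    In Monte Carlo treatments of collisions of the kind such PIC/MCC codes use, each particle's per-timestep collision decision follows from the exponential free-flight law, P = 1 - exp(-n*sigma(v)*v*dt). A one-function sketch with illustrative gas density, cross section, speed and timestep values (not parameters from any of the benchmarked codes):

        import math, random

        def collides(n_gas, sigma, v, dt):
            """Bernoulli collision test: P = 1 - exp(-n * sigma * v * dt)."""
            p = 1.0 - math.exp(-n_gas * sigma * v * dt)
            return random.random() < p

        # Illustrative numbers: gas density (m^-3), cross section (m^2),
        # electron speed (m/s), PIC timestep (s).
        trials = 100_000
        hits = sum(collides(3.2e21, 1e-19, 1e6, 1e-10) for _ in range(trials))
        print(f"collision fraction per step ~ {hits / trials:.4f}")  # ~0.031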

  17. Impact of the 235U Covariance Data in Benchmark Calculations

    International Nuclear Information System (INIS)

    Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems

  18. Impact of the 235U covariance data in benchmark calculations

    International Nuclear Information System (INIS)

    Leal, Luiz; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems. (authors)

  19. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and actual performances of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box, to test and compare several machines in terms of CPU performance and report with the wanted level of detail the different benchmarking scores (e.g. by processing step) and results. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  20. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
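    PMLB is distributed with a small Python accessor; a typical usage sketch, under the assumption that the published pmlb package and its fetch_data helper are available in the shown form ('mushroom' is one of the curated dataset names):

        # pip install pmlb scikit-learn
        from pmlb import fetch_data
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Pull one curated benchmark dataset as (features, labels).
        X, y = fetch_data('mushroom', return_X_y=True)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")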

  1. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  2. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  3. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the roles of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting, RWH, and grey water, GW) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.

  4. Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages

    International Nuclear Information System (INIS)

    Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.

    1997-03-01

    This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in validating the computational methods used to address LWR fuel storage and transportation concerns. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and for determining calculational bias and uncertainty are presented as part of this benchmark guide.

  5. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  6. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  7. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  8. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  9. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    Nikonov, S. P.; Velkov, K.; Pautz, A.

    2010-01-01

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary condition problem by the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling and comparisons performed for the control rod movement and the reactor power history. (Authors)

  10. Benchmark physics tests in the metallic-fueled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1989-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size liquid-metal reactor (LMR) are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics, and spectrum. Analysis was done with three-dimensional nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs

  11. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size, liquid metal reactor are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics and spectrum. Analysis was done with 3-D nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs

  12. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  13. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  14. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  15. Benchmarking the implementation of E-Commerce A Case Study Approach

    OpenAIRE

    von Ettingshausen, C. R. D. Freiherr

    2009-01-01

    The purpose of this thesis was to develop a guideline to support the implementation of E-Commerce with E-Commerce benchmarking. Because of its importance as an interface with the customer, web-site benchmarking has been a widely researched topic. However, limited research has been conducted on benchmarking E-Commerce across other areas of the value chain. Consequently this thesis aims to extend benchmarking into E-Commerce related subjects. The literature review examined ...

  16. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild

    2015-01-01

    Background. Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section phantom containing inserts were scanned with dual energy and single energy CT with a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling...

  17. Benchmark calculations on nuclear characteristics of JRR-4 HEU core by SRAC code system

    International Nuclear Information System (INIS)

    Arigane, Kenji

    1987-04-01

    The reduced enrichment program for the JRR-4 has been progressing based on JAERI's RERTR (Reduced Enrichment Research and Test Reactor) program. The SRAC (JAERI Thermal Reactor Standard Code System for Reactor Design and Analysis) is used for the neutronic design of the JRR-4 LEU core. This report describes the benchmark calculations on the neutronic characteristics of the JRR-4 HEU core performed in order to validate the calculation method. The benchmark calculations covered various neutronic characteristics such as excess reactivity, criticality, control rod worth, thermal neutron flux distribution, void coefficient, temperature coefficient, mass coefficient, kinetic parameters and the poisoning effect of Xe-135 build-up. As a result, it was confirmed that the calculated values are in satisfactory agreement with the measured values, and the calculational method using the SRAC was thereby validated. (author)

  18. Quantitative assessment of image artifacts from root filling materials on CBCT scans made using several exposure parameters

    International Nuclear Information System (INIS)

    Rabelo, Katharina Alves; Cavalcanti, Yuri Wanderley; De Oliveira Pinto, Martina Gerlane; De Melo, Daniela Pita; Melo, Saulo Leonardo Sousa; Campos, Paulo Sergio Flores; De Andrade Freitas Oliveira, Luciana Soares

    2017-01-01

    To quantify artifacts from different root filling materials in cone-beam computed tomography (CBCT) images acquired using different exposure parameters. Fifteen single-rooted teeth were scanned using 8 different exposure protocols, with 3 different filling materials and once without filling material as a control group. Artifact quantification was performed by a trained observer who made measurements in the central axial slice of all acquired images in a fixed region of interest using ImageJ. Hyperdense artifacts, hypodense artifacts, and the remaining tooth area were identified, and the percentages of hyperdense and hypodense artifacts, remaining tooth area, and tooth area affected by the artifacts were calculated. Artifacts were analyzed qualitatively by 2 observers using the following scores: absence (0), moderate presence (1), and high presence (2) for hypodense halos, hypodense lines, and hyperdense lines. Two-way ANOVA and the post-hoc Tukey test were used for quantitative and qualitative artifact analysis. The Dunnett test was also used for qualitative analysis. The significance level was set at P < .05. Different exposure parameters did not affect the objective or subjective observations of artifacts in CBCT images (P > .05); however, the filling materials used in endodontic restorations did affect both types of assessments
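
    The ImageJ-based quantification described above reduces to simple area fractions within the region of interest. Below is a minimal sketch in Python, assuming gray-value thresholds chosen by the observer; the threshold values and the ROI array are illustrative, not those of the study.

    ```python
    # Fractions of a CBCT region of interest classified as artifact.
    import numpy as np

    def artifact_percentages(roi, low_thr, high_thr):
        """roi: 2-D gray-value array from the central axial slice."""
        roi = np.asarray(roi)
        n = roi.size
        hypo = 100.0 * np.count_nonzero(roi < low_thr) / n    # hypodense
        hyper = 100.0 * np.count_nonzero(roi > high_thr) / n  # hyperdense
        return hypo, hyper
    ```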

  19. Evaluation of retinal nerve fiber layer thickness parameters in myopic population using scanning laser polarimetry (GDxVCC).

    Science.gov (United States)

    Dada, Tanuj; Aggarwal, A; Bali, S J; Sharma, A; Shah, B M; Angmo, D; Panda, A

    2013-01-01

    Myopia presents a significant challenge to the ophthalmologist, as myopic discs are often large and tilted, with deep cups and a thinner neuroretinal rim, all of which may mimic glaucomatous optic nerve head changes, causing an error in diagnosis. To evaluate the retinal nerve fiber layer (RNFL) thickness in low, moderate and high myopia using scanning laser polarimetry with variable corneal compensation (GDxVCC). One hundred eyes of 100 emmetropes, 30 eyes of low myopes (0 to -4 D spherical equivalent (SE)), 45 eyes with moderate myopia (-4 to -8 D SE), and 30 eyes with high myopia (-8 to -15 D SE) were subjected to retinal nerve fiber layer assessment using scanning laser polarimetry (GDxVCC) with the standard protocol. Subjects with IOP > 21 mm Hg or with optic nerve head or visual field changes suggestive of glaucoma were excluded from the study. The major outcome parameters were the temporal-superior-nasal-inferior-temporal (TSNIT) average, the superior and inferior averages and the nerve fibre indicator (NFI). The TSNIT average (P = 0.009), superior average (P = 0.001) and inferior average (P = 0.008) were significantly lower, and the NFI was higher (P < 0.001), in moderate myopes as compared to emmetropes. In high myopia the RNFL showed supranormal values; the TSNIT average and the superior and inferior averages were significantly higher (P < 0.001) as compared to emmetropes. The RNFL measurements on scanning laser polarimetry are affected by the myopic refractive error. Moderate myopes show a significant thinning of the RNFL. In high myopia, due to peripapillary chorioretinal atrophy and the contribution of scleral birefringence, the RNFL values are abnormally high. These findings need to be taken into account while assessing and monitoring glaucoma damage in moderate to high myopes on GDxVCC. © NEPjOPH.

  20. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    International Nuclear Information System (INIS)

    Takano, Hideki

    1995-01-01

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the k-eff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the k-eff, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distribution were in very good agreement with the experiments. (author)

  1. Introducing a Generic Concept for an Online IT-Benchmarking System

    OpenAIRE

    Ziaie, Pujan; Ziller, Markus; Wollersheim, Jan; Krcmar, Helmut

    2014-01-01

    While IT benchmarking has grown considerably in the last few years, conventional benchmarking tools have not been able to adequately respond to the rapid changes in technology and paradigm shifts in IT-related domains. This paper aims to review benchmarking methods and leverage design science methodology to present design elements for a novel software solution in the field of IT benchmarking. The solution, which introduces a concept for generic (service-independent) indicators, is based on and...

  2. Scanning microwave microscopy applied to semiconducting GaAs structures

    Science.gov (United States)

    Buchter, Arne; Hoffmann, Johannes; Delvallée, Alexandra; Brinciotti, Enrico; Hapiuk, Dimitri; Licitra, Christophe; Louarn, Kevin; Arnoult, Alexandre; Almuneau, Guilhem; Piquemal, François; Zeier, Markus; Kienberger, Ferry

    2018-02-01

    A calibration algorithm based on one-port vector network analyzer (VNA) calibration for scanning microwave microscopes (SMMs) is presented and used to extract quantitative carrier densities from a semiconducting n-doped GaAs multilayer sample. This robust and versatile algorithm is instrument and frequency independent, as we demonstrate by analyzing experimental data from two different, cantilever- and tuning fork-based, microscope setups operating in a wide frequency range up to 27.5 GHz. To benchmark the SMM results, comparison with secondary ion mass spectrometry is undertaken. Furthermore, we show SMM data on a GaAs p-n junction distinguishing p- and n-doped layers.

  3. DETERMINING GEOMETRIC PARAMETERS OF AGRICULTURAL TREES FROM LASER SCANNING DATA OBTAINED WITH UNMANNED AERIAL VEHICLE

    Directory of Open Access Journals (Sweden)

    E. Hadas

    2018-05-01

    The estimation of dendrometric parameters has become an important issue for agriculture planning and for the efficient management of orchards. Airborne Laser Scanning (ALS) data is widely used in forestry and many algorithms for automatic estimation of dendrometric parameters of individual forest trees were developed. Unfortunately, due to significant differences between forest and fruit trees, some contradictions exist against adopting the achievements of forestry science to agricultural studies indiscriminately. In this study we present the methodology to identify individual trees in apple orchard and estimate heights of individual trees, using high-density LiDAR data (3200 points/m2) obtained with Unmanned Aerial Vehicle (UAV) equipped with Velodyne HDL32-E sensor. The processing strategy combines the alpha-shape algorithm, principal component analysis (PCA) and detection of local minima. The alpha-shape algorithm is used to separate tree rows. In order to separate trees in a single row, we detect local minima on the canopy profile and slice polygons from alpha-shape results. We successfully separated 92 % of trees in the test area. 6 % of trees in orchard were not separated from each other and 2 % were sliced into two polygons. The RMSE of tree heights determined from the point clouds compared to field measurements was equal to 0.09 m, and the correlation coefficient was equal to 0.96. The results confirm the usefulness of LiDAR data from UAV platform in orchard inventory.

  4. Determining Geometric Parameters of Agricultural Trees from Laser Scanning Data Obtained with Unmanned Aerial Vehicle

    Science.gov (United States)

    Hadas, E.; Jozkow, G.; Walicka, A.; Borkowski, A.

    2018-05-01

    The estimation of dendrometric parameters has become an important issue for agriculture planning and for the efficient management of orchards. Airborne Laser Scanning (ALS) data is widely used in forestry and many algorithms for automatic estimation of dendrometric parameters of individual forest trees were developed. Unfortunately, due to significant differences between forest and fruit trees, some contradictions exist against adopting the achievements of forestry science to agricultural studies indiscriminately. In this study we present the methodology to identify individual trees in apple orchard and estimate heights of individual trees, using high-density LiDAR data (3200 points/m2) obtained with Unmanned Aerial Vehicle (UAV) equipped with Velodyne HDL32-E sensor. The processing strategy combines the alpha-shape algorithm, principal component analysis (PCA) and detection of local minima. The alpha-shape algorithm is used to separate tree rows. In order to separate trees in a single row, we detect local minima on the canopy profile and slice polygons from alpha-shape results. We successfully separated 92 % of trees in the test area. 6 % of trees in orchard were not separated from each other and 2 % were sliced into two polygons. The RMSE of tree heights determined from the point clouds compared to field measurements was equal to 0.09 m, and the correlation coefficient was equal to 0.96. The results confirm the usefulness of LiDAR data from UAV platform in orchard inventory.
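
    Two steps of the workflow described in these two records lend themselves to a compact illustration: splitting a tree row at local minima of the canopy height profile, and scoring estimated heights against field measurements. The sketch below is a simplified Python rendering under those assumptions; profile extraction, alpha-shapes and PCA are taken as done elsewhere.

    ```python
    # Split a 1-D canopy height profile at local minima and score heights.
    import numpy as np
    from scipy.signal import argrelmin

    def split_row(profile, order=3):
        """Indices of local minima, used as cut points between trees."""
        return argrelmin(np.asarray(profile), order=order)[0]

    def height_accuracy(estimated, measured):
        """RMSE and correlation of estimated vs. field-measured heights."""
        est, ref = np.asarray(estimated), np.asarray(measured)
        rmse = np.sqrt(np.mean((est - ref) ** 2))
        r = np.corrcoef(est, ref)[0, 1]
        return rmse, r
    ```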

  5. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the "regulation game," the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
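
    For readers unfamiliar with the method, the DEA efficiency score used in such benchmarking studies can be obtained from a small linear program. The following is an illustrative input-oriented CCR formulation in Python, not the paper's implementation; the data layout and variable names are assumptions.

    ```python
    # Input-oriented CCR DEA: efficiency of unit o among n peers.
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr(X, Y, o):
        """X: (m inputs, n units), Y: (s outputs, n units); returns theta."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                  # minimize theta
        A_in = np.hstack([-X[:, [o]], X])            # sum(lam*x) <= theta*x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum(lam*y) >= y_o
        b = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                      bounds=[(0, None)] * (n + 1))
        return res.x[0]                              # 1.0 = efficient
    ```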

  6. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of k-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  7. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  8. Processing and benchmarking of evaluated nuclear data file/b-viii.0β4 cross-section library by analysis of a series of critical experimental benchmark using the monte carlo code MCNP(X and NJOY2016

    Directory of Open Access Journals (Sweden)

    Kabach Ouadie

    2017-12-01

    To validate the new Evaluated Nuclear Data File (ENDF/B-VIII.0β4) library, 31 different critical cores were selected and used for a benchmark test of the important parameter k-eff. The four libraries used are processed with the nuclear data processing code NJOY2016. The results obtained with the ENDF/B-VIII.0β4 library were compared against those calculated with the ENDF/B-VI.8, ENDF/B-VII.0, and ENDF/B-VII.1 libraries using the Monte Carlo N-Particle (MCNP(X)) code. All the MCNP(X) calculations of k-eff values with these four libraries were compared with the experimentally measured results, which are available in the International Criticality Safety Benchmark Evaluation Project. The obtained results are discussed and analyzed in this paper.

  9. Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes

    Science.gov (United States)

    Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.

    2018-03-01

    Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is required. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex essence. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, have then been utilized to identify the parameters of two well-known semi-empirical models in the literature, by Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
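
    Of the three online algorithms named above, RLS is the simplest to show concretely. The sketch below identifies a polarization-curve model that is linear in its parameters, V = E0 - b*ln(i) - R*i; this model form and the forgetting factor are illustrative assumptions, since the semi-empirical models treated in the paper contain more terms.

    ```python
    # Recursive least squares for a linear-in-parameters fuel cell model.
    import numpy as np

    class RLS:
        def __init__(self, n, lam=0.99, p0=1e3):
            self.theta = np.zeros(n)    # parameter estimates [E0, b, R]
            self.P = np.eye(n) * p0     # inverse correlation matrix
            self.lam = lam              # forgetting factor tracks drift

        def update(self, phi, v):
            phi = np.asarray(phi, float)
            k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain
            self.theta = self.theta + k * (v - phi @ self.theta)
            self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
            return self.theta

    # per sample of current density i and cell voltage v:
    #   est.update([1.0, -np.log(i), -i], v)
    ```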

  10. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  11. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  12. Benchmarking CRISPR on-target sgRNA design.

    Science.gov (United States)

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but only limited comparisons between them have been made. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA predicting efficacy was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
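
    The evaluation behind such a benchmark is typically a rank correlation between each tool's predicted sgRNA efficacy and the measured activity. A minimal sketch, assuming the predictions have already been aligned to the measured guides; tool names and data loading are placeholders, not the study's pipeline.

    ```python
    # Rank-correlate tool predictions with measured sgRNA activity.
    from scipy.stats import spearmanr

    def benchmark_tools(predictions, measured):
        """predictions: dict tool -> scores aligned with `measured`."""
        return {tool: spearmanr(scores, measured).correlation
                for tool, scores in predictions.items()}
    ```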

  13. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has been recently implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on k-eff; (2) There is a consistent decrease in k-eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k-eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k-eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k-eff further from the benchmark value; (4) k-eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k-eff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tends to decrease k-eff while the 238U data tends to increase k-eff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k-eff. For the mixed graphite and normal uranium-reflected assembly, a large increase in k-eff due to changes in the 238U evaluation moved the calculated k-eff much closer to the benchmark value. (8) There is little change in k-eff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in k-eff

  14. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil; Stovas, Alexey; Alkhalifah, Tariq Ali

    2016-01-01

    in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous

  15. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors that justify their claim. The book describes how to implement benchmarking and exactly what to benchmark, and explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, showing how to make that work more efficient in terms of quality and productivity.

  16. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite their large administrations, there is much potential in these companies regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a particular product). Knowledge exchange is interesting for companies that want to be global players. Benchmarking supports information exchange and improves the competitive ability of different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  17. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, the general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on the GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about the a priori definition of the assessment criteria.
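
    As an illustration of how such a benchmark comparison can be scripted, the sketch below grades a site from time series of the three determinands named above, following the GQA convention of a low percentile for dissolved oxygen and high percentiles for BOD and ammonium. The class limits are placeholders, not the statutory GQA values.

    ```python
    # Illustrative GQA-style grading from three determinand time series.
    import numpy as np

    def gqa_class(do_sat, bod, ammonium, limits):
        """limits: grade -> (min DO %sat, max BOD, max ammonium),
        ordered from best grade to worst."""
        stats = (np.percentile(do_sat, 10),    # low DO is the concern
                 np.percentile(bod, 90),       # high BOD is the concern
                 np.percentile(ammonium, 90))
        for grade, (do_min, bod_max, amm_max) in limits.items():
            if stats[0] >= do_min and stats[1] <= bod_max and stats[2] <= amm_max:
                return grade
        return "F"   # fall-through: worst class
    ```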

  18. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a benchmark experiment.

  19. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author’s group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is “equally” due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a benchmark experiment.

  20. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Nowell, Lisa H., E-mail: lhnowell@usgs.gov [U.S. Geological Survey, California Water Science Center, Placer Hall, 6000 J Street, Sacramento, CA 95819 (United States); Norman, Julia E., E-mail: jnorman@usgs.gov [U.S. Geological Survey, Oregon Water Science Center, 2130 SW 5th Avenue, Portland, OR 97201 (United States); Ingersoll, Christopher G., E-mail: cingersoll@usgs.gov [U.S. Geological Survey, Columbia Environmental Research Center, 4200 New Haven Road, Columbia, MO 65021 (United States); Moran, Patrick W., E-mail: pwmoran@usgs.gov [U.S. Geological Survey, Washington Water Science Center, 934 Broadway, Suite 300, Tacoma, WA 98402 (United States)

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  1. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    International Nuclear Information System (INIS)

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics
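
    The benchmark-quotient screen described in the two records above is a simple computation: each detected pesticide's concentration is divided by its benchmark, and the quotients are summed as a mixture indicator. A minimal sketch, with illustrative names and units rather than values from the study:

    ```python
    # Sum of concentration/benchmark quotients for a sediment sample.
    def sum_benchmark_quotients(detections, benchmarks):
        """Both dicts map pesticide name -> ug/kg; quotients are unitless."""
        return sum(conc / benchmarks[p]
                   for p, conc in detections.items() if p in benchmarks)

    # e.g. sum_benchmark_quotients({"bifenthrin": 2.1, "fipronil": 0.4},
    #                              teb_values)  # > 1 flags possible effects
    ```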

  2. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...

  3. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  4. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  5. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Purpose: The clarification of the constructs of the supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. We analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive, the vast range
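
    One of the purification statistics named above, Cronbach's alpha, is easy to state concretely. A minimal sketch in Python, assuming an items matrix with one row per plant and one column per scale item:

    ```python
    # Cronbach's alpha for the internal consistency of one scale.
    import numpy as np

    def cronbach_alpha(items):
        X = np.asarray(items, float)
        k = X.shape[1]                          # number of items
        item_var = X.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = X.sum(axis=1).var(ddof=1)   # variance of total score
        return (k / (k - 1)) * (1 - item_var / total_var)
    ```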

  6. Collimated proton pencil-beam scanning for superficial targets: impact of the order of range shifter and aperture

    Science.gov (United States)

    Bäumer, C.; Janson, M.; Timmermann, B.; Wulff, J.

    2018-04-01

    To assess whether apertures should be mounted upstream or downstream of a range shifting block when these field-shaping devices are combined with the pencil-beam scanning delivery technique (PBS). The lateral dose fall-off served as a benchmark parameter. Both options realizing PBS-with-apertures were compared to the uniform scanning mode. We also evaluated the difference regarding the out-of-field dose caused by interactions of protons in beam-shaping devices. The potential benefit of the downstream configuration over the upstream configuration was estimated analytically. Guided by this theoretical evaluation, a mechanical adapter was developed which transforms the upstream configuration provided by the proton machine vendor into a downstream configuration. Transversal dose profiles were calculated with the Monte Carlo based dose engine of the commercial treatment planning system RayStation 6. Two-dimensional dose planes were measured with an ionization chamber array and a scintillation detector at different depths and compared to the calculation. Additionally, a clinical example for the irradiation of the orbit was compared for both PBS options and a uniform scanning treatment plan. Assuming the same air gap, the lateral dose fall-off at the field edge at a few centimeters' depth is 20% smaller for the aperture-downstream configuration than for the upstream one. For both options of PBS-with-apertures the dose fall-off is larger than in the uniform scanning delivery mode if the minimum accelerator energy is 100 MeV. The RayStation treatment planning system calculated the width of the lateral dose fall-off with an accuracy of typically 0.1 mm–0.3 mm. Although experiments and calculations indicate a ranking of the three delivery options regarding lateral dose fall-off, there seems to be a limited impact on a multi-field treatment plan.
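
    The benchmark parameter in this record, the lateral dose fall-off, is commonly reported as the 80%-20% penumbra width of a transversal profile. A hedged sketch of that computation, assuming a profile normalized to the central-axis dose that decreases monotonically beyond the field edge:

    ```python
    # 80%-20% lateral penumbra width from a measured dose profile.
    import numpy as np

    def penumbra_80_20(x, dose):
        """x in mm; dose normalized so the central axis reads 1.0."""
        x, d = np.asarray(x, float), np.asarray(dose, float)
        peak = np.argmax(d)                 # search right of the peak
        xs, ds = x[peak:], d[peak:]
        # np.interp needs increasing xp, so reverse the falling edge
        x80 = np.interp(0.8, ds[::-1], xs[::-1])
        x20 = np.interp(0.2, ds[::-1], xs[::-1])
        return x20 - x80
    ```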

  7. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    In real-world optimization problems, even though the solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how sensitive optimization algorithms are to variations of control parameters and to the random initialization of the solution set for fixed control parameters. The comparison is performed on three well-known evolutionary algorithms: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness. In particular, an algorithm that is highly robust can have a low solution quality, and an algorithm that has a high quality of solution can be quite sensitive to parameter variations.
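
    The robustness question studied above can be probed with a small harness: run an optimizer repeatedly on a benchmark function with different seeds and settings, then report the spread of final costs. In the sketch below, a simple (1+1) evolution strategy stands in for PSO, DE and HS, which are assumed to be implemented elsewhere.

    ```python
    # Robustness harness: spread of final costs across random seeds.
    import numpy as np

    def rastrigin(x):
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def one_plus_one_es(f, dim, sigma, iters, rng):
        x = rng.uniform(-5.12, 5.12, dim)
        fx = f(x)
        for _ in range(iters):
            y = x + rng.normal(0.0, sigma, dim)   # Gaussian mutation
            fy = f(y)
            if fy < fx:
                x, fx = y, fy                     # keep improvements
        return fx

    def robustness(f=rastrigin, dim=10, sigma=0.3, iters=5000, runs=30):
        finals = [one_plus_one_es(f, dim, sigma, iters,
                                  np.random.default_rng(seed))
                  for seed in range(runs)]
        return np.mean(finals), np.std(finals)    # low std = robust
    ```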

  8. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  9. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Kim, Y.I.; Stanculescu, A.; Finck, P.; Hill, R.N.; Grimm, K.N.

    2003-01-01

    Benchmark analyses for the hybrid BN-600 reactor, which contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA-sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF (unprotected loss of flow) transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non-fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within a standard deviation of about 5.0%. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not have a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX-fuelled hybrid core. (author)

  10. Benchmarking Naval Shipbuilding with 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    Science.gov (United States)

    2015-09-20

    ... are also available. These technologies are past the disillusionment stage and are in the enlightenment phase where benefits are being derived, as ... technologies such as three-dimensional laser scanning (3DLS), product lifecycle management (PLM), and additive manufacturing (AM) to reduce costs while ... naval shipbuilding savings. The research was conducted in two phases. In the first phase, secondary research was conducted on the three technologies ...

  11. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  12. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2000-01-01

    This work is part of the IAEA WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. In addition, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are that the library based on JEF data and the DSN method give the best results, which on average are acceptable

  13. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  14. A heat transport benchmark problem for predicting the impact of measurements on experimental facility design

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2016-01-01

    Highlights: • Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) methodology is used. • Impact of measurements for reducing predicted uncertainties is highlighted. • Presented thermal-hydraulics benchmark illustrates generally applicable concepts. - Abstract: This work presents the application of the “Predictive Modeling of Coupled Multi-Physics Systems” (PM_CMPS) methodology conceived by Cacuci (2014) to a “test-section benchmark” problem in order to quantify the impact of measurements for reducing the uncertainties in the conceptual design of a proposed experimental facility aimed at investigating the thermal-hydraulics characteristics expected in the conceptual design of the G4M reactor (GEN4ENERGY, 2012). This “test-section benchmark” simulates the conditions experienced by the hottest rod within the conceptual design of the facility's test section, modeling the steady-state conduction in a rod heated internally by a cosine-like heat source, as typically encountered in nuclear reactors, and cooled by forced convection to a surrounding coolant flowing along the rod. The PM_CMPS methodology constructs a prior distribution using all of the available computational and experimental information, relying on the maximum entropy principle to maximize the impact of all available information and minimize the impact of ignorance. The PM_CMPS methodology then constructs the posterior distribution using Bayes' theorem, and subsequently evaluates it via saddle-point methods to obtain explicit formulas for the predicted optimal temperature distributions and predicted optimal values for the thermal-hydraulics model parameters that characterize the test-section benchmark. In addition, the PM_CMPS methodology also yields reduced uncertainties for both the model parameters and responses. As a general rule, it is important to measure a quantity consistently with, and more accurately than, the information extant prior to the measurement.
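
    The Bayesian step at the core of this construction can be written generically (our notation, not the paper's): for model parameters α and measured responses r,

        p(\alpha \mid r) \;\propto\; p(r \mid \alpha)\, p_{\mathrm{ME}}(\alpha)

    where p_ME is the maximum-entropy prior built from all prior computational and experimental information; the saddle-point evaluation mentioned in the abstract then expands log p(α|r) around its maximum, which is what yields closed-form best-estimate parameters together with reduced posterior uncertainties.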

  15. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  16. Josephson scanning tunneling microscopy -- a local and direct probe of the superconducting order parameter

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Hikari; Dynes, Robert; Barber Jr., Richard P.; Ono, S.; Ando, Y.

    2009-09-01

    Direct measurements of the superconducting superfluid on the surface of vacuum-cleaved Bi2Sr2CaCu2O8+δ (BSCCO) samples are reported. These measurements are accomplished via Josephson tunneling into the sample using a novel scanning tunneling microscope (STM) equipped with a superconducting tip. The STM's spatial resolution, at lateral distances smaller than the superconducting coherence length, allows it to reveal local inhomogeneities in the pair wavefunction of the BSCCO. Instrument performance is demonstrated first with Josephson measurements of Pb films, followed by the layered superconductor NbSe2. The relevant measurement parameter, the Josephson I_C R_N product, is discussed within the context of both BCS superconductors and the high-transition-temperature superconductors. The local relationship between the I_C R_N product and the quasiparticle density of states (DOS) gap is presented within the context of phase diagrams for BSCCO. Excessive current densities can be produced with these measurements and have been found to alter the local DOS in the BSCCO. Systematic studies of this effect were performed to determine the practical measurement limits for these experiments. Alternative methods for preparation of the BSCCO surface are also discussed.

  17. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they make decisions on instinct alone, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using the benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders, and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information to find ideas and set guidelines for development. Benchmarking studies are often used in commerce, real estate, industry, and high-tech software businesses.

  18. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  19. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries; Definicion y Analisis de Benchmarks de Reactores de Agua Pesada para Pruebas de Nuevas Bibliotecas de Datos Wims-D

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, Francisco [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    This work is part of the IAEA WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. In addition, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are that the library based on JEF data and the DSN method give the best results, which on average are acceptable.

  20. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR (tritium breeding ratio) to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets.

  1. Benchmarking EUV mask inspection beyond 0.25 NA

    International Nuclear Information System (INIS)

    Goldberg, Kenneth A.; Mochi, Iacopo; Anderson, Erik H.; Rekawa, Seno B.; Kemp, Charles D.; Huh, S.; Han, H.-S.; Naulleau, P.; Gunion, R.F.

    2008-01-01

    The SEMATECH Berkeley Actinic Inspection Tool (AIT) is an EUV-wavelength mask inspection microscope designed for direct aerial image measurements and pre-commercial EUV mask research. Operating on a synchrotron bending-magnet beamline, the AIT uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a CCD camera. We present the results of recent system upgrades that have improved the imaging resolution, illumination uniformity, and partial coherence. Benchmarking tests show image contrast above 75% for 100-nm mask features, and significant improvements across the full range of measured sizes. The zoneplate lens has been replaced by an array of user-selectable zoneplates with higher magnification and NA values up to 0.0875, emulating the spatial resolution of a 0.35-NA 4x EUV stepper. Illumination uniformity is above 90% for mask areas 2 µm wide and smaller. An angle-scanning mirror reduces the high coherence of the synchrotron beamline light source, giving measured σ values of approximately 0.125 at 0.0875 NA.

  2. A fast algorithm for 3D azimuthally anisotropic velocity scan

    KAUST Repository

    Hu, Jingwei; Fomel, Sergey; Ying, Lexing

    2014-01-01

    The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N^3 log N), as opposed to O(N^5) for the straightforward velocity scan, with N representing the number of points in a particular dimension of either the data space or the parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.
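
    A minimal sketch of the straightforward (non-butterfly) velocity scan that the abstract uses as its complexity baseline, written here for the 2D isotropic case; the hyperbolic moveout formula is standard, while the synthetic gather and the scan grids are illustrative:

        import numpy as np

        def velocity_scan(data, t0s, offsets, velocities, dt):
            """Semblance S(t0, v) for hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2)."""
            nt = data.shape[0]                        # data: (time samples, traces)
            semblance = np.zeros((len(t0s), len(velocities)))
            for i, t0 in enumerate(t0s):
                for j, v in enumerate(velocities):
                    t = np.sqrt(t0**2 + (offsets / v) ** 2)   # moveout curve
                    idx = np.round(t / dt).astype(int)
                    valid = idx < nt
                    amps = data[idx[valid], np.flatnonzero(valid)]
                    if amps.size:
                        # Semblance: coherence of the amplitudes along the curve.
                        semblance[i, j] = amps.sum() ** 2 / (
                            amps.size * (amps**2).sum() + 1e-12)
            return semblance

        rng = np.random.default_rng(0)
        data = rng.normal(size=(500, 60))             # placeholder CMP gather
        S = velocity_scan(data, np.linspace(0.2, 1.8, 40),
                          np.linspace(0.0, 3000.0, 60),
                          np.linspace(1500.0, 4500.0, 50), dt=0.004)

    Every (t0, v) pair requires a sum along a moveout curve over all offsets, which is what drives the unfavorable complexity once two further scan parameters are added for azimuthal anisotropy.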

  3. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    Science.gov (United States)

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
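
    A minimal sketch of the policy at issue (our illustration, not the article's code): verifying that a final reported solution actually lies inside the benchmark's box constraints, and enforcing them by projection. The [-100, 100] box matches several CEC'05 functions; the values are made up:

        import numpy as np

        lower, upper = -100.0, 100.0   # e.g. the [-100, 100]^D box of many CEC'05 functions

        def is_feasible(x):
            # The check the article argues must be applied to final reported solutions.
            return bool(np.all((x >= lower) & (x <= upper)))

        def clamp(x):
            # One common bound-handling rule: project the point back onto the box.
            return np.clip(x, lower, upper)

        x_reported = np.array([3.2, -105.7, 99.9])   # made-up final best candidate
        print(is_feasible(x_reported))               # False: second coordinate is out of bounds
        print(clamp(x_reported))                     # [   3.2 -100.    99.9]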

  4. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages ("Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985; "Sample Problem Manual for Benchmarking of Cask Analysis Codes," R.E. Glass, Sandia National Laboratories, 1988; "Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages," R.E. Glass et al., Sandia National Laboratories, 1988). The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in the 1985 report. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  5. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate fast sodium-cooled reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG are applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the results from Monte Carlo calculations and the measured values. The former results are used to justify the use of DYN3D-MG for coupled deterministic analysis of sodium-cooled fast reactors.

  6. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  7. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected...

  8. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  9. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high-temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled, and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  10. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
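
    A minimal sketch of Random Walk with Restart, the network-diffusion scheme behind two of the benchmarked tools; the toy adjacency matrix and the restart probability are illustrative and unrelated to the actual FunCoup network:

        import numpy as np

        def rwr(adjacency, seeds, restart=0.3, tol=1e-10, max_iter=1000):
            """Stationary visiting probabilities of a walk restarting at seed genes."""
            W = adjacency / adjacency.sum(axis=0, keepdims=True)  # column-stochastic
            p0 = np.zeros(adjacency.shape[0])
            p0[seeds] = 1.0 / len(seeds)              # restart distribution
            p = p0.copy()
            for _ in range(max_iter):
                p_next = (1 - restart) * W @ p + restart * p0
                if np.abs(p_next - p).sum() < tol:
                    break
                p = p_next
            return p  # higher score = stronger association with the seed set

        # Toy 4-gene network: genes 0-1-2 form a path, gene 3 hangs off gene 0.
        A = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 0],
                      [1, 0, 0, 0]], dtype=float)
        print(rwr(A, seeds=[2]).round(3))

    Genes are then ranked by their stationary probability; cross-validation as described in the abstract would hold out known disease genes and check where they land in this ranking.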

  11. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  12. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.

  13. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained with a commercial code against the results of experiments or of calculational tests whose uncertainty is defined by certified precision codes of the MCU type or similar. The present level of international cooperation has enlarged the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  14. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  15. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    ... the chapter accounts for the data collection methods used for the empirical work and the appertaining choices that were made, based on the approach to analyzing institutionalization processes. The analysis unfolds over seven chapters, starting with an exposition of the political foundation ... and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic ... emerged as actors expressed diverse political interests in the institutionalization of benchmarking. The political struggles accounted for in chapter five constituted a powerful political pressure and called for transformations of the institutionalization in order for benchmarking to attain institutional...

  16. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of pensions awarded in each municipality is compared with the number of awards that would be expected if the municipality had had the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  17. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  18. Quantitative assessment of image artifacts from root filling materials on CBCT scans made using several exposure parameters

    Energy Technology Data Exchange (ETDEWEB)

    Rabelo, Katharina Alves; Cavalcanti, Yuri Wanderley; De Oliveira Pinto, Martina Gerlane; De Melo, Daniela Pita [Dept. of Oral Diagnosis, State University of Paraiba, Campina Grande (Brazil); Melo, Saulo Leonardo Sousa [Dept. of Oral Pathology, Radiology and Medicine, University of Iowa, Iowa City (United States); Campos, Paulo Sergio Flores; De Andrade Freitas Oliveira, Luciana Soares [Federal University of Bahia, Salvador (Brazil)

    2017-09-15

    To quantify artifacts from different root filling materials in cone-beam computed tomography (CBCT) images acquired using different exposure parameters. Fifteen single-rooted teeth were scanned using 8 different exposure protocols with 3 different filling materials and once without filling material as a control group. Artifact quantification was performed by a trained observer who made measurements in the central axial slice of all acquired images in a fixed region of interest using ImageJ. Hyperdense artifacts, hypodense artifacts, and the remaining tooth area were identified, and the percentages of hyperdense and hypodense artifacts, remaining tooth area, and tooth area affected by the artifacts were calculated. Artifacts were analyzed qualitatively by 2 observers using the following scores: absence (0), moderate presence (1), and high presence (2) for hypodense halos, hypodense lines, and hyperdense lines. Two-way ANOVA and the post-hoc Tukey test were used for quantitative and qualitative artifact analysis. The Dunnet test was also used for qualitative analysis. The significance level was set at P<.05. There were no significant interactions among the exposure parameters in the quantitative or qualitative analysis. Significant differences were observed among the studied filling materials in all quantitative analyses. In the qualitative analyses, all materials differed from the control group in terms of hypodense and hyperdense lines (P<.05). Fiberglass posts did not differ statistically from the control group in terms of hypodense halos (P>.05). Different exposure parameters did not affect the objective or subjective observations of artifacts in CBCT images; however, the filling materials used in endodontic restorations did affect both types of assessments.
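
    A minimal sketch of the voxel-counting behind such percentages, assuming simple grey-value thresholds separate hypodense and hyperdense artifact pixels within a region of interest; the thresholds and the synthetic image stand in for the study's calibrated ImageJ measurements:

        import numpy as np

        def artifact_percentages(roi, hypo_thr, hyper_thr):
            # Classify ROI pixels by grey value and report artifact shares.
            total = roi.size
            hypo = np.count_nonzero(roi < hypo_thr)    # hypodense artifact pixels
            hyper = np.count_nonzero(roi > hyper_thr)  # hyperdense artifact pixels
            return 100.0 * hypo / total, 100.0 * hyper / total

        rng = np.random.default_rng(0)
        roi = rng.normal(1000.0, 200.0, size=(64, 64))  # synthetic grey values
        print(artifact_percentages(roi, hypo_thr=600.0, hyper_thr=1400.0))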

  19. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendices A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  1. IRPhEP-handbook, International Handbook of Evaluated Reactor Physics Benchmark Experiments

    International Nuclear Information System (INIS)

    Sartori, Enrico; Blair Briggs, J.

    2008-01-01

    ... experimental series that were performed at 17 different reactor facilities. The Handbook is organized in a manner that allows easy inclusion of additional evaluations as they become available. Additional evaluations are in progress and will be added to the handbook periodically. Content: FUND - Fundamental; GCR - Gas Cooled (Thermal) Reactor; HWR - Heavy Water Moderated Reactor; LMFR - Liquid Metal Fast Reactor; LWR - Light Water Moderated Reactor; PWR - Pressurized Water Reactor; VVER - VVER Reactor; evaluations published as drafts. Related information: International Criticality Safety Benchmark Evaluation Project (ICSBEP); IRPHE/B and W-SS-LATTICE, Spectral Shift Reactor Lattice Experiments; IRPHE-JAPAN, Reactor Physics Experiments carried out in Japan; IRPHE/JOYO MK-II, JOYO MK-II core management and characteristics database; IRPhE/RRR-SEG, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-SNEAK, KFK SNEAK Fast Reactor Experiments, Primary Documentation; IRPhE/STEK, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-ZEBRA, AEEW Fast Reactor Experiments, Primary Documentation; IRPHE-DRAGON-DPR, OECD High Temperature Reactor Dragon Project, Primary Documents; IRPHE-ARCH-01, Archive of HTR Primary Documents; IRPhE/AVR, AVR High Temperature Reactor Experience, Archival Documentation; IRPHE-KNK-II-ARCHIVE, KNK-II fast reactor documents, power history and measured parameters; IRPhE/BERENICE, effective delayed neutron fraction measurements; IRPhE-TAPIRO-ARCHIVE, fast neutron source reactor primary documents, reactor physics experiments. The International Handbook of Evaluated Reactor Physics Benchmark Experiments was prepared by a working party comprised of experienced reactor physics personnel from Belgium, Brazil, Canada, P.R. of China, Germany, Hungary, Japan, Republic of Korea, Russian Federation, Switzerland, United Kingdom, and the United States of America. The IRPhEP Handbook is available to authorised requesters from the ...

  2. Report on the on-going EUREDATA Benchmark Exercise on data analysis

    International Nuclear Information System (INIS)

    Besi, A.; Colombo, A.G.

    1989-01-01

    In April 1987 the JRC was charged by the Assembly of the EuReDatA members with the organization and the coordination of a Benchmark Exercise (BE) on data analysis. The main aim of the BE is a comparison of the methods used by the various organizations to estimate reliability parameters and functions from field data. The reference data set was to be constituted by raw data taken from the Component Event Data Bank (CEDB). The CEDB is a centralized bank, which collects data describing the operational behaviour of components of nuclear power plants operating in various European Countries. (orig./HSCH)

  3. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. Their hospitals were assigned to reference groups of similar size and ownership, drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for new IT projects.

  4. Benchmark calculations for VENUS-2 MOX -fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, a benchmark for dosimetry calculations of the VENUS-2 MOX-fuelled reactor was pursued. The goal of this benchmark is to test the current state-of-the-art computational methods for calculating the neutron flux to reactor components against the measured data of the VENUS-2 MOX-fuelled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U-235 fission spectrum averaged cross-section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core mid-plane at specific positions outside the core of the VENUS-2 MOX-fuelled reactor. This is a follow-up exercise to the previously completed UO2-fuelled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones
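
    Written out in our notation, the equivalent fission flux defined in the abstract, for a dosimeter with measured reaction rate RR, is

        \phi_{\mathrm{eq}} = \frac{RR}{\langle\sigma\rangle_{\chi}}, \qquad
        \langle\sigma\rangle_{\chi} = \frac{\int \sigma(E)\, \chi_{235}(E)\, dE}{\int \chi_{235}(E)\, dE}

    where σ(E) is the dosimeter cross-section and χ235(E) the U-235 fission spectrum; dividing by the spectrum-averaged cross-section makes fluxes from different dosimeters directly comparable.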

  5. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    Science.gov (United States)

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
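
    The benchmarking arithmetic itself is simple; a minimal sketch of the two reported indicators, with made-up numbers rather than NMHBP data:

        # service -> (total community-care cost, treatment hours, episodes)
        services = {
            "CAMHS-A": (1_200_000.0, 5_500.0, 400),
            "CAMHS-B": (900_000.0, 3_300.0, 250),
        }
        for name, (cost, hours, episodes) in services.items():
            print(f"{name}: {cost / hours:,.0f} per treatment hour, "
                  f"{cost / episodes:,.0f} per episode")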

  6. Calculation of the 5th AER dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, E.K.; Kontio, H.

    1998-01-01

    The model used for the calculation of the 5th AER dynamic benchmark with the APROS code is presented. In the calculation of the 5th AER dynamic benchmark, the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic VVER-440 plant model created by IVO PE. (author)

  7. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  8. Presidential Address 1997--Benchmarks for the Next Millennium.

    Science.gov (United States)

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  9. [Benchmarking and other functions of ROM: back to basics].

    Science.gov (United States)

    Barendregt, M

    2015-01-01

    Since 2011, outcome data in Dutch mental health care have been collected on a national scale. This has led to confusion about the position of benchmarking in the system known as routine outcome monitoring (ROM). To provide insight into the various objectives and uses of aggregated outcome data, a qualitative review was performed and the findings were analysed. Benchmarking is a strategy for finding best practices and for improving efficacy, and it belongs to the domain of quality management. Benchmarking involves comparing outcome data by means of instrumentation and is relatively tolerant with regard to the validity of the data. Although benchmarking is a function of ROM, it must be differentiated from other functions of ROM. Clinical management, public accountability, research, payment for performance, and information for patients are all functions of ROM which require different ways of data feedback and which make different demands on the validity of the underlying data. Benchmarking is often wrongly regarded as simply a synonym for 'comparing institutions'. It is, however, a method which includes many more factors: it can be used to improve quality, it takes a more flexible approach to the validity of outcome data, and it is less concerned than other ROM functions with funding and the amount of information given to patients. Benchmarking can make good use of currently available outcome data.

  10. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark.

    Science.gov (United States)

    Hansen, David C; Seco, Joao; Sørensen, Thomas Sangild; Petersen, Jørgen Breede Baltzer; Wildberger, Joachim E; Verhaegen, Frank; Landry, Guillaume

    2015-01-01

    Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual-energy x-ray computed tomography (CT) (clinically available) and proton CT (in development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT, using dual-energy CT scans of phantoms to establish reference accuracy levels. A CT calibration phantom and an abdomen cross-section phantom containing inserts were scanned with dual-energy and single-energy CT on a state-of-the-art dual-energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans and compared with the ground-truth stopping power from the phantoms. Proton CT gave slightly better stopping power estimates than the dual-energy CT method, with root-mean-square errors of 0.2% and 0.5% (for the two phantoms, respectively) compared to 0.5% and 0.9%. Single-energy CT root-mean-square errors were 2.7% and 1.6%. Maximal errors for proton, dual-energy and single-energy CT were 0.51%, 1.7% and 7.4%, respectively. Better stopping power estimates could significantly reduce the range errors in proton therapy, but this requires a large improvement over current methods, which may be achievable with proton CT.
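
    A minimal sketch of the accuracy metrics used to rank the three modalities: the root-mean-square and maximal relative errors of an estimated stopping-power map against the phantom ground truth. The insert values below are illustrative, not the study's data:

        import numpy as np

        def stopping_power_errors(estimated, truth):
            rel = (estimated - truth) / truth * 100.0   # percent error per insert
            return np.sqrt(np.mean(rel**2)), np.max(np.abs(rel))

        truth = np.array([1.05, 0.98, 1.42, 0.26])      # phantom insert SP ratios
        proton_ct = np.array([1.052, 0.978, 1.423, 0.259])
        rmse, worst = stopping_power_errors(proton_ct, truth)
        print(f"RMS error {rmse:.2f}%, max error {worst:.2f}%")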

  11. Contrast enhanced CT-scans are not comparable to non-enhanced scans in emphysema quantification

    International Nuclear Information System (INIS)

    Heussel, C.P.; Kappes, J.; Hantusch, R.; Hartlieb, S.; Weinheimer, O.; Kauczor, H.-U.; Eberhardt, R.

    2010-01-01

    Systemic, interventional, and surgical treatments have opened new avenues in the treatment of emphysema. For longitudinal therapy monitoring and as end-points for clinical trials, quantification of the disease is necessary. Parameters that are sensitive, easy to measure, stable, and reproducible have to be characterized. One factor that might affect emphysema quantification is IV contrast enhancement, which might also be clinically indicated. To determine whether the contrast-enhanced scan is also suited for emphysema quantification or an additional scan is necessary, a retrospective analysis was done of 12 adult patients undergoing both a clinically indicated non-enhanced and an enhanced thin-section MSCT within a week (median 0 days, range 0-4 days). The in-house YACTA software was used for automatic quantification of lung and emphysema volume, emphysema index, mean lung density, and the 5th, 10th, and 15th percentiles. After IV contrast administration, the median CT-derived lung volume decreased mildly, by 1.1%, while the median emphysema volume decreased by a relevant 11%. This results in a decrease of the median emphysema index by 9%. The median lung density increased after contrast application by 18 HU (15th percentile: by 9 HU). CT quantification delivers emphysema values that are clearly affected by IV contrast application. The detected changes after contrast application reflect the higher density of the lung parenchyma, so the amount of quantified emphysema is reduced and the lung density increased after contrast enhancement. In longitudinal analyses, non-enhanced scans should serve as the reference, while enhanced scans cannot be used.
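
    A minimal sketch of the density-based metrics being tracked (emphysema index as the share of voxels below a fixed threshold, mean lung density, and the 15th percentile); the -950 HU threshold is a common choice assumed here, and the voxel values are synthetic:

        import numpy as np

        def emphysema_metrics(lung_hu, threshold=-950.0):
            index = 100.0 * np.count_nonzero(lung_hu < threshold) / lung_hu.size
            return index, lung_hu.mean(), np.percentile(lung_hu, 15)

        rng = np.random.default_rng(0)
        lung_hu = rng.normal(-860.0, 80.0, size=100_000)   # synthetic voxel densities
        ei, mld, p15 = emphysema_metrics(lung_hu)
        print(f"emphysema index {ei:.1f}%, MLD {mld:.0f} HU, 15th percentile {p15:.0f} HU")

    Contrast enhancement shifts the whole density histogram upward, which is why the index falls and the percentile rises on enhanced scans, exactly the behaviour the study reports.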

  12. Technologies of polytechnic education in global benchmark higher education institutions

    Science.gov (United States)

    Kurushina, V. A.; Kurushina, E. V.; Zemenkova, M. Y.

    2018-05-01

    Russian polytechnic education is going through a sequence of transformations that started with the introduction of bachelor's and master's degrees in higher education in place of the previous "specialist" qualification. The next stage of reform in Russian polytechnic education should bring growth in the quality of the teaching and learning experience, which can be achieved by accumulating the best education practices of world-class universities using the benchmarking method. This paper gives an overview of some major distinctive features of a foreign benchmark higher education institution and a Russian university of polytechnic profile. The parameters that allowed the authors to select the foreign institution for comparison include the scope of the educational profile, industrial specialization, connections with the leading regional corporations, size of the city, and number of students. When considering the possibilities of adopting relevant higher education practices of world level, the authors emphasize the importance of the formation of a new mentality of an engineer, the role of computer technologies in engineering education, the provision of licensed software for the educational process at a level exceeding that of a regional Russian university, and successful staff technologies (e.g., inviting "guest" lecturers or having 2-3 lecturers per course).

  13. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    International Nuclear Information System (INIS)

    Marshall, M.A.; Bess, J.D.

    2011-01-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3x4 and 4x4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross-section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter ...

  14. Qualification of coupled 3D neutron kinetic/thermal hydraulic code systems by the calculation of a VVER-440 benchmark. Re-connection of an isolated loop

    Energy Technology Data Exchange (ETDEWEB)

Kotsarev, Alexander; Lizorkin, Mikhail [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation); Bencik, Marek; Hadek, Jan [UJV Rez, a.s., Rez (Czech Republic); Kozmenkov, Yaroslav; Kliem, Soeren [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany)

    2016-09-15

The 7th AER dynamic benchmark is a continuation of the efforts to systematically validate codes for estimating the transient behavior of VVER-type nuclear power plants. The main part of the benchmark is the simulation of the re-connection of an isolated circulation loop with low temperature in a VVER-440 plant. This benchmark was calculated by the National Research Centre 'Kurchatov Institute' (with the code ATHLET/BIPR-VVER), UJV Rez (with the code RELAP5-3D©) and HZDR (with the code DYN3D/ATHLET). The paper gives an overview of the behavior of the main thermal hydraulic and neutron kinetic parameters in the provided solutions.
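Coupled code systems of this kind typically alternate between a neutron kinetics solver and a thermal hydraulics solver within each time step, exchanging power and feedback parameters. The sketch below illustrates the idea with a deliberately reduced model: point kinetics with one delayed-neutron group coupled to a single-node coolant energy balance. All coefficients are invented for illustration and have no relation to the benchmark's VVER-440 data or to the 3D codes named above.

```python
# Illustrative constants only (invented, not VVER-440 benchmark data).
beta, lam, Lgen = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay (1/s), generation time (s)
alpha_T = -2e-5                        # coolant temperature reactivity coefficient (1/K)
mc, h = 5e6, 1e6                       # coolant heat capacity (J/K), heat-removal coefficient (W/K)
T_in, P0 = 550.0, 1e8                  # coolant inlet temperature (K), nominal power (W)
T_ref = T_in + P0 / h                  # steady-state coolant temperature (K)

def coupled_step(P, C, T, rho_ext, dt):
    """One explicitly coupled step: kinetics solve, then thermal-hydraulics update."""
    rho = rho_ext + alpha_T * (T - T_ref)         # thermal feedback on reactivity
    dP = ((rho - beta) / Lgen) * P + lam * C      # point kinetics, one delayed group
    dC = (beta / Lgen) * P - lam * C
    P, C = P + dt * dP, C + dt * dC
    T += dt * (P - h * (T - T_in)) / mc           # lumped coolant energy balance
    return P, C, T

# Transient: a small step reactivity insertion, loosely analogous to the
# perturbation caused by re-connecting a cold loop.
P, C, T = P0, beta * P0 / (lam * Lgen), T_ref
for _ in range(200000):                           # dt kept small for explicit stability
    P, C, T = coupled_step(P, C, T, rho_ext=5e-4, dt=1e-4)
print(f"power after 20 s: {P / P0:.3f} * nominal, coolant T = {T:.1f} K")
```

Production systems such as DYN3D/ATHLET replace both halves with full 3D nodal kinetics and system thermal hydraulics, but the exchange of power and feedback at each step follows the same pattern.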

  15. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine-meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the models' behavior using material properties for stainless steel and aluminum.
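For orientation, the nine-meter drop height fixes the impact conditions for both the end and side impact cases. Neglecting air resistance, energy conservation gives the impact velocity

    v = sqrt(2gh) = sqrt(2 × 9.81 m/s² × 9 m) ≈ 13.3 m/s

so both cylinder materials arrive at the same velocity, and the range of plastic deformation in the benchmarks comes from the material properties rather than the impact conditions.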

  16. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and by assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In nuclear calculations, the diffusion and transport codes use group constant libraries which are generated by processing the nuclear data files. In this paper, the calculation methods for reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)
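Group constants are produced by collapsing pointwise cross sections with a weighting flux. The sketch below shows the basic flux-weighted collapse for a single group; the pointwise data are hypothetical (a 1/v-like cross section and a 1/E slowing-down spectrum), and the sketch illustrates only the processing principle, not any specific processing code.

```python
import numpy as np

def collapse_group(E, sigma, phi, E_lo, E_hi):
    """Flux-weighted one-group cross section over [E_lo, E_hi]:
       sigma_g = int sigma(E) phi(E) dE / int phi(E) dE."""
    mask = (E >= E_lo) & (E <= E_hi)
    Eg, sg, pg = E[mask], sigma[mask], phi[mask]
    return np.trapz(sg * pg, Eg) / np.trapz(pg, Eg)

# Hypothetical pointwise data on a log-spaced energy grid (eV).
E = np.logspace(-5, 7, 2000)
sigma = 10.0 / np.sqrt(E / 0.0253)    # barns; ~1/v capture behavior
phi = 1.0 / E                          # standard 1/E weighting spectrum
print(f"thermal-group sigma = {collapse_group(E, sigma, phi, 1e-5, 0.625):.2f} b")
```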

  17. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation are described and illustrated with examples: (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and as a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
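For readers unfamiliar with statistical mediation analysis, the sketch below shows the standard product-of-coefficients estimate of a mediated effect (treatment X → mediator M → outcome Y) on simulated data. The data-generating values are invented, and the sketch illustrates the estimator itself, not the eight studies analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)   # X: e.g., imagery instruction (0/1)
m = 0.6 * x + rng.normal(size=n)            # M: e.g., reported mental imagery
y = 0.5 * m + 0.1 * x + rng.normal(size=n)  # Y: e.g., words recalled

def ols(predictors, resp):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(resp)), *predictors])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    return beta

a = ols([x], m)[1]           # path a: X -> M
b = ols([m, x], y)[1]        # path b: M -> Y, adjusting for X
c_prime = ols([m, x], y)[2]  # direct effect of X on Y
print(f"mediated (indirect) effect a*b = {a * b:.3f}, direct effect = {c_prime:.3f}")
```

Under benchmark effect validation, an estimator like this is judged by whether the indirect effect it recovers is consistent with the established imagery-recall effect.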

  18. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different

  19. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  20. Comparison between FDG Uptake and Clinicopathologic and Immunohistochemical Parameters in Pre-operative PET/CT Scan of Primary Gastric Carcinoma

    International Nuclear Information System (INIS)

    Han, Eun Ji; Choi, Woo Hee; Chung, Yong An; Kim, Ki Jun; Maeng, Lee So; Sohn, Kyung Myung; Jung, Hyun Suk; Sohn, Hyung Sun; Chung, Soo Kyo

    2009-01-01

The purpose of this study was to determine which clinicopathologic or immunohistochemical parameters may affect FDG uptake of the primary tumor in PET/CT scans of gastric carcinoma patients. Eighty-nine patients with stomach cancer who underwent pre-operative FDG PET/CT scans were included. In cases with perceptible FDG uptake in the primary tumor, the maximum standardized uptake value (SUVmax) was calculated. Clinicopathologic results such as depth of invasion (T stage), tumor size, lymph node metastasis, tumor differentiation and Lauren's classification, and immunohistochemical markers such as Ki-67 index and expression of p53, EGFR, Cathepsin D, c-erb-B2 and COX-2 were reviewed. Nineteen of the 89 gastric carcinomas showed imperceptible FDG uptake on PET/CT images. In cases with perceptible FDG uptake in the primary tumor, SUVmax was significantly higher in T2, T3 and T4 tumors than in T1 tumors (5.8±3.1 vs. 3.7±2.1, p=0.002). SUVmax of large tumors (≥3 cm) was also significantly higher than that of small ones (<3 cm) (5.7±3.2 vs. 3.7±2.0, p=0.002). Intestinal-type gastric carcinomas according to Lauren's classification showed higher FDG uptake than non-intestinal types (5.4±2.8 vs. 3.7±1.3, p=0.003). SUVmax also differed significantly between the p53-positive and p53-negative groups (6.0±2.8 vs. 4.4±3.0, p=0.035). No significant difference was found with respect to lymph node metastasis, tumor differentiation, Ki-67 index, or expression of EGFR, Cathepsin D, c-erb-B2 and COX-2. T stage of gastric carcinoma influenced the detectability of gastric cancer on FDG PET/CT scans. When a gastric carcinoma was perceptible on a PET/CT scan, its T stage, primary tumor size, Lauren's classification and p53 expression were related to the degree of FDG uptake in the primary tumor.
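As background, SUVmax is the body-weight-normalized peak activity concentration in the tumor volume, and group comparisons like those reported above are commonly made with a two-sample t-test. The sketch below illustrates both steps on made-up numbers; it does not reproduce the study's data, and the abstract does not specify the exact statistical procedure used, so Welch's t-test here is an assumption.

```python
import numpy as np
from scipy import stats

def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Standardized uptake value: tissue activity concentration divided by
    injected dose per unit body weight (units cancel to a dimensionless SUV)."""
    return activity_kbq_per_ml / (injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0))

print(f"example SUV = {suv(8.5, 370.0, 65.0):.1f}")  # made-up activity, dose, weight

# Hypothetical SUVmax samples for two T-stage groups (not the study's data).
rng = np.random.default_rng(1)
suv_t1   = rng.normal(3.7, 2.1, 30).clip(min=0.5)
suv_t234 = rng.normal(5.8, 3.1, 40).clip(min=0.5)
t, p = stats.ttest_ind(suv_t234, suv_t1, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```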