WorldWideScience

Sample records for benchmark parameter scan

  1. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H→AA→4τ in vector boson fusion with the ATLAS detector

    International Nuclear Information System (INIS)

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H1→A1A1→4τ→4μ+8ν decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment which is of crucial relevance for vector boson fusion searches is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)

  2. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H→AA→4τ in vector boson fusion with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rottlaender, Iris

    2008-08-15

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H1→A1A1→4τ→4μ+8ν decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment which is of crucial relevance for vector boson fusion searches is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)

  3. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameter scanning in TI media.
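    Because the expansion coefficients are independent of η and ϕ, the scan reduces to cheap re-evaluation of a fixed expansion plus a misfit against picked traveltimes. The sketch below illustrates that idea; the coefficient values, the cos 2ϕ azimuth dependence, and all variable names are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Sketch: coefficients are precomputed once; scanning (eta, phi) is then
    # just polynomial evaluation plus a misfit, with no extra modeling runs.
    t0, t_eta, t_phi = 1.00, 0.12, 0.04   # precomputed coefficients (hypothetical)
    t_picked = 1.03                       # "observed" traveltime (hypothetical)

    etas = np.linspace(0.0, 0.3, 31)
    phis = np.linspace(0.0, np.pi, 37)
    E, P = np.meshgrid(etas, phis, indexing="ij")

    t_trial = t0 + t_eta * E + t_phi * np.cos(2.0 * P)  # schematic first-order form
    misfit = (t_trial - t_picked) ** 2

    i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
    print(f"best-fitting eta = {etas[i]:.3f}, phi = {phis[j]:.3f} rad")
    ```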

  4. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-03-21

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.
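    Schematically, the construction described above amounts to a low-order Taylor expansion in η whose partial sums are accelerated by the Shanks transform. The notation below is generic, with the coefficients t_i standing for the precomputed perturbation traveltimes:

    ```latex
    % Generic eta-expansion and its Shanks acceleration
    t(\eta) \approx t_0 + t_1\,\eta + t_2\,\eta^2,
    \qquad A_n = \sum_{i=0}^{n} t_i\,\eta^i,
    \qquad S(A_n) = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} + A_{n-1} - 2\,A_n}.
    ```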

  5. T3PS: Tool for Parallel Processing in Parameter Scans

    CERN Document Server

    Maurer, Vinzenz

    2015-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy-to-read and easy-to-write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include grid scan, random scan, Markov chain Monte Carlo, and numerical optimization. Several example parameter scans are shown and compared with results in the literature.
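    As a rough illustration of the workflow such a tool automates, the sketch below runs a small grid scan in parallel with Python's standard library and writes plain-text output. It does not reproduce T3PS's actual input format or internals, and the observable is a made-up stand-in.

    ```python
    import itertools
    from multiprocessing import Pool

    def observable(point):
        # Stand-in for the user's calculation at one parameter point.
        m, tanb = point
        return m, tanb, m ** 2 / (1.0 + tanb)  # hypothetical quantity

    if __name__ == "__main__":
        masses = [100.0 + 25.0 * i for i in range(9)]   # grid in parameter 1
        tanbs = [1.0 + 0.5 * i for i in range(19)]      # grid in parameter 2
        grid = list(itertools.product(masses, tanbs))

        with Pool() as pool:                 # one worker per available core
            rows = pool.map(observable, grid)

        with open("scan.dat", "w") as f:     # plain text, plot-tool friendly
            f.write("# m  tanb  obs\n")
            for row in rows:
                f.write("%g %g %g\n" % row)
    ```

    A random scan or MCMC would differ only in how the points are drawn; the process-pool structure stays the same.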

  6. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns. ...

  7. Optimal z-axis scanning parameters for gynecologic cytology specimens

    OpenAIRE

    Amber D Donnelly; Mukherjee, Maheswari S.; Lyden, Elizabeth R.; Bridge, Julia A.; Subodh M Lele; Najia Wright; Mary F McGaughey; Culberson, Alicia M.; Adam J. Horn; Whitney R Wedel; Stanley J Radio

    2013-01-01

    Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of the focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while m...

  8. Scanning Parameter Space for NIF capsules in HYDRA

    Energy Technology Data Exchange (ETDEWEB)

    Fetterman, A; Herrmann, M C; Haan, S

    2004-11-10

    The authors have implemented an automated pulse shaper for NIF capsules in HYDRA and have developed the infrastructure to perform scans across any n dimensions of capsule parameter space. Using this infrastructure, they performed several scans examining parameters for uniformly doped beryllium capsules. To coordinate more closely with the anticipated experimental shock-timing strategy, they have started to develop an automated pulse shaper which uses planar geometry and liquid DD.

  9. Clusters as benchmarks for measuring fundamental stellar parameters

    CERN Document Server

    Bell, Cameron P M

    2016-01-01

    In this contribution I will discuss fundamental stellar parameters as determined from young star clusters; specifically those with ages less than or approximately equal to that of the Pleiades. I will focus primarily on the use of stellar evolutionary models to determine the ages and masses of stars, as well as discuss the limitations of such models using a combination of both young clusters and eclipsing binary systems. In addition, I will also highlight a few interesting recent results from large on-going spectroscopic surveys (specifically Gaia-ESO and APOGEE/IN-SYNC) which are continuing to challenge our understanding of the formation and early evolutionary stages of young clusters.

  10. Benchmark 3-Flavor Pattern and Small Universal Flavor-Electroweak Parameter

    OpenAIRE

    Lipmanov, E. M.

    2008-01-01

    The electroweak theory contains too many empirical parameters. Most of them are related to the flavor part of particle physics. In this paper we discuss a relevant simple idea: the complicated system of actual dimensionless, small versus large, quantities in elementary particle flavor phenomenology deviates only slightly from an explicitly defined benchmark flavor pattern with no tuning parameters. One small empirical universal dimensionless parameter measures this deviation. Its possible physical...

  11. Multipinhole SPECT helical scan parameters and imaging volume

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao [Department of Nuclear Medicine, State University of New York at Buffalo, Buffalo, New York 14214 (United States); Wei, Qingyang; Dai, Tiantian; Ma, Tianyu [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Lecomte, Roger [Department of Nuclear Medicine and Radiobiology, Sherbrooke Molecular Imaging Center, Université de Sherbrooke, Sherbrooke, Quebec J1H 5N4 (Canada)

    2015-11-15

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
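    A back-of-envelope sketch of the step-size reasoning, under assumed numbers: Nyquist sampling suggests reference steps of about half the spatial resolution, after which the reported optimum (half the axial reference, roughly twice the angular reference) can be applied. The resolution and rotation radius below are assumptions, not values from the paper.

    ```python
    import math

    resolution_mm = 1.5   # estimated SPECT spatial resolution (assumed)
    radius_mm = 30.0      # rotation radius of the field of view (assumed)

    # Nyquist reference values: sample at about half the resolution.
    axial_ref_mm = resolution_mm / 2.0
    angular_ref_deg = math.degrees((resolution_mm / 2.0) / radius_mm)

    # Reported optimum relative to the Nyquist-derived references.
    axial_opt_mm = axial_ref_mm / 2.0        # half the axial reference
    angular_opt_deg = 2.0 * angular_ref_deg  # about twice the angular reference

    print(f"axial step: {axial_opt_mm:.2f} mm, angular step: {angular_opt_deg:.2f} deg")
    ```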

  12. Multipinhole SPECT helical scan parameters and imaging volume

    International Nuclear Information System (INIS)

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.

  13. Domain decomposition PN solutions to the 3D transport benchmark over a range in parameter space

    International Nuclear Information System (INIS)

    The objectives of this contribution are twofold. First, the Domain Decomposition (DD) method used in the PARAFISH parallel transport solver is re-interpreted as a Generalized Schwarz Splitting as defined by Tang [SIAM J Sci Stat Comput, vol.13 (2), pp. 573-595, 1992]. Second, PARAFISH provides spherical harmonic (i.e., PN) solutions to the NEA benchmark suite for 3D transport methods and codes over a range in parameter space. To the best of the author's knowledge, these are the first spherical harmonic solutions provided for this demanding benchmark suite. They have been obtained using 512 CPU cores of the JuRoPa machine installed at the Juelich Computing Center (Germany). (author)

  14. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    Science.gov (United States)

    Mohanty, Sankhya; Hattel, Jesper H.

    2015-03-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo method based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model to generate optimized cellular scanning strategies and processing parameters, with the objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared.

  15. Simultaneous Thermodynamic and Kinetic Parameters Determination Using Differential Scanning Calorimetry

    Directory of Open Access Journals (Sweden)

    Nader Frikha

    2011-01-01

    Problem statement: The determination of reaction kinetics is of major importance, for industrial reactor optimization as well as for environmental reasons and energy limitations. Although calorimetry is often used for the determination of thermodynamic parameters alone, the question that arises is: how can Differential Scanning Calorimetry be applied to the determination of kinetic parameters? The objective of this study is to propose an original methodology for the simultaneous determination of thermodynamic and kinetic parameters, using a laboratory-scale Differential Scanning Calorimeter (DSC). The method is applied to the dichromate-catalysed hydrogen peroxide decomposition. Approach: The methodology is based on experiments carried out with a Differential Scanning Calorimeter. The interest of the proposed approach is that it requires very small quantities of reactants (about a few grams). The difficulty lies in the fact that, with such microcalorimeters, the reactant temperature cannot be measured directly, and a particular calibration procedure thus had to be developed to determine the media temperature in an indirect way. The proposed methodology for the determination of kinetic parameters is based on resolution of the coupled heat and mass balances. Results: A complete kinetic law is proposed. The Arrhenius parameters are determined as frequency factor k0 = 1.39×10⁹ s⁻¹ and activation energy E = 54.9 kJ mol⁻¹. The measured enthalpy of reaction is ΔrH = −94 kJ mol⁻¹. Conclusion: The comparison of the results obtained by this original methodology with those obtained using conventional laboratory-scale reactor calorimetry shows that this new approach is very relevant.
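    With the reported Arrhenius parameters, the rate constant at any temperature follows directly from k(T) = k0·exp(−E/RT). A short worked example, with the evaluation temperatures chosen arbitrarily:

    ```python
    import math

    # Evaluate k(T) = k0 * exp(-E / (R * T)) with the reported parameters.
    k0 = 1.39e9   # frequency factor, 1/s (from the abstract)
    E = 54.9e3    # activation energy, J/mol (from the abstract)
    R = 8.314     # gas constant, J/(mol K)

    for T in (298.15, 323.15, 348.15):   # 25, 50, 75 degrees C (arbitrary)
        k = k0 * math.exp(-E / (R * T))
        print(f"T = {T - 273.15:5.1f} C  ->  k = {k:.3e} 1/s")
    ```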

  16. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models: I. Parameter accuracy and benchmark stars

    CERN Document Server

    Passegger, Vera Maria; Reiners, Ansgar

    2016-01-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are in the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ²-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resul...

  17. Effects of cross sections library parameters on the OECD/NEA Oskarshamn-2 benchmark solution

    International Nuclear Information System (INIS)

    Highlights: • A 3D NK–TH model was developed using RELAP5-3D© for studying BWR instability events. • A cross section library was generated using the available CASMO format data. • A tool was developed and validated to evaluate reactor stability parameters. • The effect of some neutronic parameters on reactor stability was investigated. • The Oskarshamn-2 1999 event stability parameters were properly reproduced. - Abstract: The OECD/NEA proposes a new international benchmark based on data collected during an instability transient that occurred at the Oskarshamn-2 NPP. This benchmark is aimed at testing coupled 3D Neutron Kinetic–Thermal Hydraulic (3D NK–TH) codes in challenging situations. The ENEA “Casaccia” Research Center is participating in this benchmark, developing a computational model using the RELAP5-3D© code. The 3D NK model has already been developed from the cross sections dataset calculated by OKG, the Oskarshamn-2 licensee, through the CASMO lattice code. In order to use this neutron cross sections database in RELAP5-3D©, an n-dimensional polynomial data fitting and the calculation of base cross section values are required. An ad-hoc tool, named PROMETHEUS, has been developed for automatically generating RELAP5-3D©-compatible cross section libraries. This tool allows easy visualization of the complex structure of the neutronic datasets; moreover, it is exploited for deriving the different cross section libraries needed to evaluate the effects of neutronic parameters on the reactor instability prediction. Thus, the effects of the fuel temperature and control rod histories, of the discontinuity factors (averaged/not averaged) and of the neutron poisons have been assessed. A ranking table has been produced, demonstrating the relevance of the not-averaged discontinuity factors and of the on-transient neutron poison calculations for the correct prediction of the Oskarshamn-2 event
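    PROMETHEUS itself is not publicly documented here, but the core fitting step any such tool must perform can be sketched generically: represent a cross section as a low-order polynomial in the state variables and fit the coefficients by least squares. The functional form and data below are synthetic assumptions, not the tool's actual parameterization.

    ```python
    import numpy as np

    # Synthetic samples of a cross section over two state variables.
    rng = np.random.default_rng(0)
    Tf = rng.uniform(500.0, 1500.0, 40)    # fuel temperature samples, K
    rho = rng.uniform(0.2, 0.8, 40)        # moderator density samples, g/cm^3
    sigma = 1.2 - 1e-4 * np.sqrt(Tf) + 0.3 * rho   # made-up cross section values

    # Design matrix for the model sigma ~ c0 + c1*sqrt(Tf) + c2*rho,
    # fitted by linear least squares.
    A = np.column_stack([np.ones_like(Tf), np.sqrt(Tf), rho])
    coeffs, *_ = np.linalg.lstsq(A, sigma, rcond=None)
    print("fitted coefficients:", coeffs)
    ```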

  18. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  19. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  20. Optimal z-axis scanning parameters for gynecologic cytology specimens

    Directory of Open Access Journals (Sweden)

    Amber D Donnelly

    2013-01-01

    Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while maintaining a manageable file size. Subjects and Methods: The iScanCoreo Au scanner (Ventana, AZ, USA) was used to digitize 192 SurePath™ glass slides at three focal plane levels at 1 μm intervals. The digitized virtual images (VI) were annotated using BioImagene's Image Viewer. Five participants interpreted the VI and recorded the focal plane level at which they felt confident, and later interpreted the corresponding glass slide specimens using light microscopy (LM). The participants completed a survey about their experiences. Inter-rater agreement and concordance between the VI and the glass slide specimens were evaluated. Results: This study determined an overall high intra-rater diagnostic concordance between glass slides and VI (89-97%); however, the inter-rater agreement for all cases was higher for LM (94%) compared with VM (82%). Survey results indicate participants found low grade dysplasia and koilocytes easy to diagnose using three focal plane levels, the image enhancement tool was useful and focusing through the cells helped with interpretation; however, the participants found VI with hyperchromatic crowded groups challenging to interpret. Participants reported they prefer using LM over VM. This study supports using three focal plane levels and 1 μm intervals to expand the use of VM in GYN cytology. Conclusion: Future improvements in technology and appropriate training should make this format a more preferable and practical option in clinical cytology.

  1. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria has also been proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. Adopted criteria for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications as is, some modified the quantitative thresholds.

  2. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo...

  3. Adaptive Matching of the Scanning Aperture of the Environment Parameter

    Science.gov (United States)

    Choni, Yu. I.; Yunusov, N. N.

    2016-04-01

    We analyze a matching system for the scanning aperture antenna radiating through a layer with unpredictably changing parameters. Improved matching has been achieved by adaptive motion of a dielectric plate in the gap between the aperture and the radome. The system is described within the framework of an infinite layered structure. The validity of the model has been confirmed by numerical simulation using CST Microwave Studio software and by an experiment. It is shown that the reflection coefficient at the input of some types of a matching device, which is due to the deviation of the load impedance from the nominal value, is determined by a compact and versatile formula. The potential efficiency of the proposed matching system is shown by a specific example, and its dependence on the choice of the starting position of the dielectric plate is demonstrated.

  4. Effects of cross sections libraries parameters on the OECD/NEA Oskarshamn-2 benchmark solution

    International Nuclear Information System (INIS)

    The OECD/NEA proposes a new international benchmark based on the data collected from an instability transient that occurred at the Oskarshamn-2 NPP, with the aim of testing coupled 3D Neutron Kinetic/Thermal Hydraulic codes in challenging situations. The ENEA 'Casaccia' Research Center is participating in this benchmark, developing a computational model using the RELAP5-3D code. The 3D NK model was developed starting from the cross sections datasets calculated by OKG, the Oskarshamn-2 licensee, using the CASMO lattice code. Integration of the neutron cross sections database in RELAP5-3D required fitting the data with n-dimensional polynomials and calculating the various polynomial coefficients and the base cross section values. An ad-hoc tool named PROMETHEUS has been developed to automatically generate the RELAP5-3D-compatible cross section libraries. This software made it easy to visualize the complex structure of the neutronic data sets and to derive different cross section libraries for evaluating the effects of some neutronic parameters on the prediction of the reactor instability. Thus, the effects of the fuel temperature and control rod history, of the discontinuity factors (averaged/not averaged), and of the neutron poisons have been assessed. A ranking table has been produced, demonstrating the relevance of the not-averaged discontinuity factors and of the on-transient neutron poison calculations for the correct prediction of the Oskarshamn-2 event. (author)

  5. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models. I. Parameter accuracy and benchmark stars

    Science.gov (United States)

    Passegger, V. M.; Wende-von Berg, S.; Reiners, A.

    2016-03-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are in the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ²-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σ(Teff) = 35 K, σ(log g) = 0.14, and σ([Fe/H]) = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation is systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitely identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at ESO VLT.

  6. T3PS v1.0: Tool for Parallel Processing in Parameter Scans

    Science.gov (United States)

    Maurer, Vinzenz

    2016-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy-to-read and easy-to-write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include grid scan, random scan, Markov chain Monte Carlo, and numerical optimization. Several example parameter scans are shown and compared with results in the literature.

  7. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of this work is the determination of the effect of cross section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes and experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  8. Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field

    Science.gov (United States)

    Kaartinen, Harri; Hyyppä, Juha; Kukko, Antero; Jaakkola, Anttoni; Hyyppä, Hannu

    2012-01-01

    The performance of various mobile laser scanning systems was tested on an established urban test field. The test was connected to the European Spatial Data Research (EuroSDR) project “Mobile Mapping—Road Environment Mapping Using Mobile Laser Scanning”. Several commercial and research systems collected laser point cloud data on the same test field. The system comparisons focused on planimetric and elevation errors using a filtered digital elevation model, poles, and building corners as the reference objects. The results revealed the high quality of the point clouds generated by all of the tested systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. The planimetric errors increased as a function of range, but moderately so if the system was properly calibrated. The main focus on mobile laser scanning development in the near future should be on the improvement of the trajectory solution, especially under non-ideal conditions, using both improvements in hardware and software. Test fields are relatively easy to implement in built environments and they are feasible for verifying and comparing the performance of different systems and also for improving system calibration to achieve optimum quality.

  9. Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field

    Directory of Open Access Journals (Sweden)

    Hannu Hyyppä

    2012-09-01

    The performance of various mobile laser scanning systems was tested on an established urban test field. The test was connected to the European Spatial Data Research (EuroSDR) project “Mobile Mapping—Road Environment Mapping Using Mobile Laser Scanning”. Several commercial and research systems collected laser point cloud data on the same test field. The system comparisons focused on planimetric and elevation errors using a filtered digital elevation model, poles, and building corners as the reference objects. The results revealed the high quality of the point clouds generated by all of the tested systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. The planimetric errors increased as a function of range, but moderately so if the system was properly calibrated. The main focus on mobile laser scanning development in the near future should be on the improvement of the trajectory solution, especially under non-ideal conditions, using both improvements in hardware and software. Test fields are relatively easy to implement in built environments and they are feasible for verifying and comparing the performance of different systems and also for improving system calibration to achieve optimum quality.

  10. SCAN-based hybrid and double-hybrid density functionals from parameter-free models

    CERN Document Server

    Hui, Kerwin

    2015-01-01

    By incorporating the nonempirical SCAN semilocal density functional [Sun, Ruzsinszky, and Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free of any empirical parameter. The SCAN-based hybrid and double-hybrid functionals consistently outperform their parent SCAN semilocal functional for a wide range of applications. The SCAN-based semilocal, hybrid, and double-hybrid functionals generally perform better than the corresponding PBE-based functionals. In addition, the SCAN0-2 and SCAN-QIDH double-hybrid functionals significantly reduce the qualitative failures of the SCAN semilocal functional, such as the self-interaction error and noncovalent interaction error, extending the applicability of the SCAN-based functionals to a very diverse range of systems.

  11. Bench-KJ: benchmark on analytical calculation of fracture mechanics parameters KI and J for cracked piping components

    International Nuclear Information System (INIS)

    For many design and ageing considerations, fracture mechanics is needed to evaluate cracked components. The major parameters used are K and J. For these, the different codes (RSE-M appendix 5, RCC-MRx appendix A16, R6 rule, ASME B&PV Code Section XI, API, VERLIFE, Russian Code...) propose compendia of stress intensity factors and, for some of them, compendia of limit loads for usual situations, in terms of component geometry, type of defect and loading conditions. The Bench-KJ benchmark, proposed within the framework of the OECD/IAGE Group, aims to compare these different estimation schemes with reference analyses performed by the Finite Element Method for representative cases (pipes and elbows, mechanical or/and thermal loadings, different types and sizes of cracks). The objective is to have a global comparison of the procedures but also of all independent elements such as the stress intensity factor or the reference stress. The benchmark will cover simple cases with basic mechanical loads like pressure and bending up to complex load combinations and complex geometries (cylinders and elbows) including cladding or welds; these cases are classified into 6 tasks. Twenty-nine partners are involved in this benchmark. This paper gives a short overview of the different tasks of the benchmark and presents the analysis of the results for the first four tasks, devoted to the elastic stress intensity factor calculation (task 1) and J calculation in cracked pipes (tasks 2 and 3). (authors)

  12. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    Science.gov (United States)

    Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.

    2016-04-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision-made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy, and present our findings. Finally, we compare our setup to a state-of-the-art metrology-grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.

  13. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Wilm, Jakob; Pedersen, David Bue;

    2016-01-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative m...

  14. Estimation of forest parameters using airborne laser scanning data

    Directory of Open Access Journals (Sweden)

    J. Cohen

    2015-12-01

    Methods for the estimation of forest characteristics from airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC) describing the forest properties can be used in forest, construction and industry applications, as well as in research and decision making. The National Land Survey has been collecting ALS data from Finland since 2008 to generate a nationwide high resolution digital elevation model. Although this data has been collected in leaf-off conditions, it still has the potential to be utilized in forest mapping. A method where this data is used for the estimation of CC and TH in the boreal forest region is presented in this paper. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS based CC and TH maps were generally in good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation in CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated, enabling fast generation of forest maps for different areas.

  15. Estimation of forest parameters using airborne laser scanning data

    Science.gov (United States)

    Cohen, J.

    2015-12-01

    Methods for the estimation of forest characteristics by airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC) describing the forest properties can be used in forest, construction and industry applications, as well as in research and decision making. The National Land Survey has been collecting ALS data from Finland since 2008 to generate a nationwide high resolution digital elevation model. Although this data has been collected in leaf-off conditions, it still has the potential to be utilized in forest mapping. A method where this data is used for the estimation of CC and TH in the boreal forest region is presented in this paper. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS based CC and TH maps were generally in good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation in CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated, enabling fast generation of forest maps for different areas.

  16. Parameter scan for the CLIC Damping Rings under the influence of intrabeam scattering

    CERN Document Server

    Antoniou, F; Papaphilippou, Y; Vivoli, A

    2010-01-01

    Due to the high bunch density, the output emittances of the CLIC Damping Rings (DR) are strongly dominated by the effect of Intrabeam Scattering (IBS). In an attempt to optimize the ring design, the benchmarking of the multiparticle tracking code SIRE against the classical IBS formalisms and approximations is first considered. The scaling of the steady state emittances and IBS growth rates is also studied with respect to several ring parameters, including energy, bunch charge and wiggler characteristics.

  17. Combining Total Monte Carlo and Benchmarks for nuclear data uncertainty propagation on an LFR's safety parameters

    CERN Document Server

    Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

    2013-01-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for accepting random files.
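    The accept/reject idea can be sketched as follows, with entirely synthetic numbers: each random library's keff is scored by a chi-square against a criticality benchmark, and only low-chi-square files are kept, which narrows the keff distribution. The benchmark value, uncertainty, threshold and sample spread below are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    keff_bench, sigma_bench = 1.0000, 0.0020     # benchmark value and uncertainty (assumed)

    keff_files = rng.normal(1.0005, 0.0075, 600) # keff from random libraries (synthetic)
    chi2 = ((keff_files - keff_bench) / sigma_bench) ** 2

    accepted = keff_files[chi2 < 4.0]            # example acceptance threshold
    print(f"spread before: {keff_files.std() * 1e5:.0f} pcm, "
          f"after: {accepted.std() * 1e5:.0f} pcm ({accepted.size} files kept)")
    ```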

  18. Benchmarking a new closed-form thermal analysis technique against a traditional lumped parameter, finite-difference method

    Energy Technology Data Exchange (ETDEWEB)

    Huff, K. D.; Bauer, T. H. (Nuclear Engineering Division)

    2012-08-20

    A benchmarking effort was conducted to determine the accuracy of a new analytic generic geology thermal repository model developed at LLNL relative to a more traditional, numerical, lumped-parameter technique. The fast-running analytical thermal transport model assumes uniform thermal properties throughout a homogenous storage medium. Arrays of time-dependent heat sources are included geometrically as arrays of line segments and points. The solver uses a source-based linear superposition of closed form analytical functions from each contributing point or line to arrive at an estimate of the thermal evolution of a generic geologic repository. Temperature rise throughout the storage medium is computed as a linear superposition of temperature rises. It is modeled using the MathCAD mathematical engine and is parameterized to allow myriad gridded repository geometries and geologic characteristics [4]. It was anticipated that the temperature field calculated with the LLNL analytical model would provide an accurate 'bird's-eye' view in regions that are many tunnel radii away from actual storage units; i.e., at distances where tunnels and individual storage units could realistically be approximated as physical lines or points. However, geometrically explicit storage units, waste packages, tunnel walls and close-in rock are not included in the MathCAD model. The present benchmarking effort therefore focuses on the ability of the analytical model to accurately represent the close-in temperature field. Specifically, close-in temperatures computed with the LLNL MathCAD model were benchmarked against temperatures computed using the geometrically explicit, lumped-parameter repository thermal modeling technique developed over several years at ANL using the SINDAG thermal modeling code [5]. Application of this numerical modeling technique to underground storage of heat generating nuclear waste streams within the proposed YMR Site has been widely...
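    The flavor of such a source-based superposition can be sketched with the classical constant-rate point-source solution in an infinite homogeneous medium; the LLNL model's actual source functions and parameters are not reproduced here, and all values below are assumptions.

    ```python
    import math

    k = 2.5         # thermal conductivity, W/(m K) (assumed rock value)
    alpha = 1.1e-6  # thermal diffusivity, m^2/s (assumed)

    def dT_point(q, r, t):
        """Temperature rise at distance r (m) and time t (s) from a point
        source of constant strength q (W) in an infinite homogeneous medium."""
        return q / (4.0 * math.pi * k * r) * math.erfc(r / (2.0 * math.sqrt(alpha * t)))

    # Linear superposition over several sources (positions and powers assumed).
    sources = [((0.0, 0.0), 500.0), ((10.0, 0.0), 500.0)]   # ((x, y) in m, W)
    x, y, t = 5.0, 2.0, 10.0 * 365.25 * 86400.0             # field point, 10 years

    rise = sum(dT_point(q, math.hypot(x - sx, y - sy), t) for (sx, sy), q in sources)
    print(f"temperature rise: {rise:.2f} K")
    ```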

  19. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  20. Scanning of flat textile-based radiation dosimeters: Influence of parameters on the quality of results

    International Nuclear Information System (INIS)

    Flat woven polyamide textiles were chosen for modification with nitro blue tetrazolium chloride (NBT) or 2,3,5-triphenyltetrazolium chloride (TTC). Such samples change colour from white to blue (NBT) or red (TTC) if exposed to ionizing radiation or UV light. When inhomogeneously irradiated, a clear pattern of the absorbed dose distribution is visible to the naked eye. Quantitative 2D analysis with the aid of a flat-bed document scanner was proposed. Most importantly, the application of a scanner is an easy method for the assessment of irradiated samples. Therefore, scanning parameters such as resolution, sharpness, scanning reproducibility and sample preparation were assessed in this work, and optimal parameters were chosen. The causes of uncertainty in the measurements are discussed. - Highlights: • 2D textile dosimetry analysis with the aid of a flat-bed scanner is shown. • Scanning parameters and reproducibility were assessed in this work. • Optimal scanning parameters were chosen. • Causes of uncertainty in the measurements are discussed.

  1. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild;

    2015-01-01

    ...development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section phantom containing inserts were scanned with dual energy and single energy CT with a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors...

  2. Benchmarking Density Functionals on Structural Parameters of Small-/Medium-Sized Organic Molecules.

    Science.gov (United States)

    Brémond, Éric; Savarese, Marika; Su, Neil Qiang; Pérez-Jiménez, Ángel José; Xu, Xin; Sancho-García, Juan Carlos; Adamo, Carlo

    2016-02-01

    In this Letter we report the error analysis of 59 exchange-correlation functionals in evaluating the structural parameters of small- and medium-sized organic molecules. From this analysis, recently developed double hybrids, such as xDH-PBE0, emerge as the most reliable methods, while global hybrids confirm their robustness in reproducing molecular structures. Notably, the M06-L density functional is the only semilocal method reaching an accuracy comparable to that of hybrids. A comparison with errors obtained on energetic databases (including thermochemistry, reaction barriers, and interaction energies) indicates that most of the functionals have a coherent behavior, showing low (or high) deviations on both energy and structure data sets. Only a few of them are more prone toward one of these two properties.

  3. Study on the parameters of the scanning system for the 300 keV electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Leo, K. W.; Chulan, R. M., E-mail: leo@nm.gov.my; Hashim, S. A.; Baijan, A. H.; Sabri, R. M.; Mohtar, M.; Glam, H.; Lojius, L.; Zahidee, M.; Azman, A.; Zaid, M. [Malaysian Nuclear Agency, Bangi, 43000 Kajang. Selangor (Malaysia)

    2016-01-22

This paper describes the method used to identify the magnetic coil parameters of the scanning system. This locally designed low-energy electron accelerator, with a present energy of 140 keV, will be upgraded to 300 keV. In this accelerator, a scanning system is required to deflect the energetic electron beam across a titanium foil in the vertical and horizontal directions. The excitation current of the magnetic coil is determined by the energy of the electron beam; therefore, the coil parameters must be identified to ensure matching of the beam energy and excitation current. As a result, the effective lengths for the X-axis and Y-axis were found to be 0.1198 m and 0.1134 m, and the required excitation coil currents, which depend on the electron beam energy, have been identified.
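
As an illustration of why these parameters matter, the deflection produced by a scanning magnet scales with the field-length product over the beam's magnetic rigidity, so the required excitation current grows with beam energy. A sketch under the assumption of a linear coil calibration B = k_cal · I (the calibration constant and deflection angle are hypothetical; the effective length is the X-axis value reported above):

```python
import numpy as np

MC2 = 0.511  # electron rest energy [MeV]

def rigidity(T_mev):
    """Magnetic rigidity B*rho [T m] of an electron with kinetic energy T,
    using the relativistic momentum p*c = sqrt(T^2 + 2*T*m*c^2)."""
    p = np.sqrt(T_mev ** 2 + 2.0 * T_mev * MC2)  # momentum [MeV/c]
    return p * 1e6 / 2.998e8                     # B*rho = p/e

def coil_current(T_mev, theta, l_eff=0.1198, k_cal=2.0e-3):
    """Excitation current [A] for a deflection angle theta [rad],
    assuming a hypothetical linear calibration B = k_cal * I [T/A]."""
    B = theta * rigidity(T_mev) / l_eff          # required flux density [T]
    return B / k_cal

print(coil_current(0.300, np.radians(15)))  # 300 keV beam, 15 degree sweep
```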

  4. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    International Nuclear Information System (INIS)

We present the TORT solutions to the suite-of-benchmarks exercise for 3D transport codes. An overview of the benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations, while MCNP produces zero tallies or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with a low scattering ratio and/or highly skewed computational cells, i.e. aspect ratios far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many (∼25) mean free paths away from the source. Hence automated, robust and reliable variance reduction techniques are essential for obtaining high-quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.

  5. Minimisation of parameter estimation errors in dynamic PET: choice of scanning schedules

    International Nuclear Information System (INIS)

    The authors studied the problem of finding an optimal scan schedule in positron emission tomography (PET) dynamic studies which minimises errors in estimating the transfer constants between a set of compartments. For example, the influence of scan intervals in PET on the accuracy of estimation of the rate constants and vascular component in the deoxyglucose method was examined using an empirical noise model. The simulated noisy curves used in the analysis were compared with patient data to validate the noise model. A series of scan schedules were compared for accuracy of fit by evaluating the determinant of the variance-covariance matrix of the fitted parameters as an index of parameter accuracy. For realistic noise levels there is a monotonic improvement in the index of parameter accuracy with increasing sampling frequency, particularly over the initial minutes after the tracer injection. Since faster schedules are more susceptible to errors introduced by time mismatches between plasma and tissue curves and impose greater computational and memory overhead, an initial scan duration of 30 s provides a practical trade-off for dynamic PET 18F-fluoro-deoxyglucose studies. (author)
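
A toy version of the schedule comparison: build the Fisher information from numerical sensitivities of a frame-averaged model and use the determinant of its inverse as the parameter-accuracy index. The one-compartment model with constant plasma input and unit-variance frame noise is a simplifying assumption for illustration, not the deoxyglucose model used in the study:

```python
import numpy as np

def frame_model(t_edges, K1, k2):
    """Frame-averaged tissue activity for a toy one-compartment model with
    constant plasma input: C(t) = (K1/k2) * (1 - exp(-k2*t))."""
    t0, t1 = t_edges[:-1], t_edges[1:]
    F = lambda t: (K1 / k2) * (t + (np.exp(-k2 * t) - 1.0) / k2)  # integral of C
    return (F(t1) - F(t0)) / (t1 - t0)

def d_criterion(t_edges, p=(0.1, 0.15), eps=1e-5):
    """det of the parameter covariance (inverse Fisher information);
    lower = better parameter accuracy, for unit-variance frame noise."""
    p = np.asarray(p, float)
    J = np.empty((len(t_edges) - 1, len(p)))
    for j in range(len(p)):           # central-difference sensitivities
        dp = np.zeros_like(p); dp[j] = eps
        J[:, j] = (frame_model(t_edges, *(p + dp)) -
                   frame_model(t_edges, *(p - dp))) / (2.0 * eps)
    return np.linalg.det(np.linalg.inv(J.T @ J))

fast = np.arange(0.0, 601.0, 30.0)    # 30 s frames for 10 min
slow = np.arange(0.0, 601.0, 120.0)   # 2 min frames
print(d_criterion(fast), d_criterion(slow))  # faster schedule scores lower
```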

  6. Reliability of capturing foot parameters using digital scanning and the neutral suspension casting technique

    Directory of Open Access Journals (Sweden)

    Rome Keith

    2011-03-01

Full Text Available Abstract Background A clinical study was conducted to determine the intra- and inter-rater reliability of digital scanning and the neutral suspension casting technique in measuring six foot parameters. The neutral suspension casting technique is a commonly utilised method for obtaining a negative impression of the foot prior to orthotic fabrication. Digital scanning offers an alternative to the traditional plaster of Paris techniques. Methods Twenty one healthy participants volunteered to take part in the study. Six casts and six digital scans were obtained from each participant by two raters of differing clinical experience. The foot parameters chosen for investigation were cast length (mm), forefoot width (mm), rearfoot width (mm), medial arch height (mm), lateral arch height (mm) and forefoot to rearfoot alignment (degrees). Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were calculated to determine the intra- and inter-rater reliability. Measurement error was assessed through calculation of the standard error of the measurement (SEM) and the smallest real difference (SRD). Results ICC values for all foot parameters using digital scanning ranged between 0.81 and 0.99 for both intra- and inter-rater reliability. For the neutral suspension casting technique, inter-rater reliability values ranged from 0.57 to 0.99, with intra-rater reliability values ranging from 0.36 to 0.99 for rater 1 and 0.49 to 0.99 for rater 2. Conclusions The findings of this study indicate that digital scanning is a reliable technique, irrespective of clinical experience, with reduced measurement variability in all foot parameters investigated when compared to neutral suspension casting.
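
The two error measures quoted above follow directly from the between-subject standard deviation and the ICC: SEM = SD·√(1−ICC) and SRD = 1.96·√2·SEM. A minimal sketch (the example inputs are hypothetical):

```python
import numpy as np

def sem_srd(measurements, icc):
    """Standard error of measurement and smallest real difference for a
    foot parameter, from the between-subject SD and the ICC."""
    sd = np.std(measurements, ddof=1)
    sem = sd * np.sqrt(1.0 - icc)
    srd = 1.96 * np.sqrt(2.0) * sem
    return sem, srd

# e.g. repeated cast lengths (mm) for 21 participants and ICC = 0.95
# sem, srd = sem_srd(cast_lengths_mm, 0.95)
```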

  7. The Effect of Wind on Tree STEM Parameter Estimation Using Terrestrial Laser Scanning

    Science.gov (United States)

    Vaaja, M. T.; Virtanen, J.-P.; Kurkela, M.; Lehtola, V.; Hyyppä, J.; Hyyppä, H.

    2016-06-01

The 3D measurement technique of terrestrial laser scanning (TLS) in forest inventories has shown great potential for improving the accuracy and efficiency of both individual-tree and plot-level data collection. However, the effect of wind has been poorly estimated in the error analysis of TLS tree measurements, although it causes varying deformations of the trees. In this paper, we evaluated the effect of wind on tree stem parameter estimation at different heights using TLS. The data consist of one measured Scots pine captured from three different scanning directions with two different scanning resolutions, 6.3 mm and 3.1 mm at 10 m. The measurements were conducted under two different wind speeds, approximately 3 m/s and 9 m/s, as recorded by a nearby weather station of the Finnish Meteorological Institute. Our results show that wind may cause both underestimation and overestimation of tree diameter when using TLS. The duration of the scanning is found to have an impact on the measured shape of the tree stem under 9 m/s wind conditions. The results also indicate that a 9 m/s wind does not have a significant effect on the stem parameters of the lower part of a tree (stem movement.

  8. Optimization of scanning parameters for multi-slice CT colonography: Experiments with synthetic and animal phantoms

    International Nuclear Information System (INIS)

    AIM: To determine the optimal collimation, pitch, tube current and reconstruction interval for multi-slice computed tomography (CT) colonography with regard to attaining satisfactory image quality while minimizing patient radiation dose. MATERIALS AND METHODS: Multi-slice CT was performed on plastic, excised pig colon and whole pig phantoms to determine optimal settings. Performance was judged by detection of simulated polyps and statistical measures of the image parameters. Fat and muscle conspicuity was measured from images of dual tube-current prone/supine patient data to derive a measure of tube current effects on tissue contrast. RESULTS: A collimation of 4x2.5 mm was sufficient for detection of polyps 4 mm and larger, provided that a reconstruction interval of 1.25 mm was used. A pitch of 1.5 allowed faster scanning and reduced radiation dose without resulting in a loss of important information, i.e. detection of small polyps, when compared with a pitch of 0.75. Tube current and proportional radiation dose could be lowered substantially without deleterious effects on the detection of the air-mucosal interface, however, increased image noise substantially reduced conspicuity of different tissues. CONCLUSION: An optimal image acquisition set-up of 4x2.5 mm collimation, reconstruction interval of 1.25 mm, pitch of 1.5 and dual prone/supine scan of 40/100 mA tube current is proposed for our institution for scanning symptomatic patients. Indications are that where CT colonography is used for colonic polyp screening in non-symptomatic patients, a 40 mA tube current could prove satisfactory for both scans

  9. Optimization of scanning parameters for multi-slice CT colonography: Experiments with synthetic and animal phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Embleton, K.V. E-mail: k.embleton@man.ac.uk; Nicholson, D.A.; Hufton, A.P.; Jackson, A

    2003-12-01

    AIM: To determine the optimal collimation, pitch, tube current and reconstruction interval for multi-slice computed tomography (CT) colonography with regard to attaining satisfactory image quality while minimizing patient radiation dose. MATERIALS AND METHODS: Multi-slice CT was performed on plastic, excised pig colon and whole pig phantoms to determine optimal settings. Performance was judged by detection of simulated polyps and statistical measures of the image parameters. Fat and muscle conspicuity was measured from images of dual tube-current prone/supine patient data to derive a measure of tube current effects on tissue contrast. RESULTS: A collimation of 4x2.5 mm was sufficient for detection of polyps 4 mm and larger, provided that a reconstruction interval of 1.25 mm was used. A pitch of 1.5 allowed faster scanning and reduced radiation dose without resulting in a loss of important information, i.e. detection of small polyps, when compared with a pitch of 0.75. Tube current and proportional radiation dose could be lowered substantially without deleterious effects on the detection of the air-mucosal interface, however, increased image noise substantially reduced conspicuity of different tissues. CONCLUSION: An optimal image acquisition set-up of 4x2.5 mm collimation, reconstruction interval of 1.25 mm, pitch of 1.5 and dual prone/supine scan of 40/100 mA tube current is proposed for our institution for scanning symptomatic patients. Indications are that where CT colonography is used for colonic polyp screening in non-symptomatic patients, a 40 mA tube current could prove satisfactory for both scans.

  10. Effect of computed tomography scanning parameters on gold nanoparticle and iodine contrast

    Science.gov (United States)

    Galper, Merav W.; Saung, May T.; Fuster, Valentin; Roessl, Ewald; Thran, Axel; Proksa, Roland; Fayad, Zahi A.; Cormode, David P.

    2013-01-01

    Purpose Gold nanoparticles (gold-NP) have lately been proposed as alternative contrast agents to iodine-based contrast agents (iodine-CA) for computed tomography angiography. The aims of this study were to confirm an appropriate environment in which to evaluate such novel contrast agents, to investigate the comparative contrast of iodine-CA versus gold-NP and to determine optimal scanning parameters for gold-NP. Materials and methods Three different clinical scanners were used to acquire CT images. A range of concentrations (10 mM to 1.5 M) of gold-NP and iodine-CA were scanned with varying X-ray tube voltages and currents, reconstruction kernels, protocols and scanner models. The different environments investigated were air, water and water with a bone simulant (Ca3(PO4)2). Regression coefficients were derived from the attenuation values plotted against concentration and compared for statistical significance using t-values. Results As expected, contrast was linearly related to concentration up to 500-1000 mM, depending on the conditions used, whereupon a plateau of 3000 HU was reached. Attenuation was significantly different depending on the environment used (air, water or water and bone simulant). Contrast is dependent on the X-ray tube voltage used, with the contrast produced from iodine-CA sharply declining with increasing voltage, while the contrast of gold-NP varied less with tube voltage, but was maximal at 120 kV in water with bone simulant. Current, reconstruction kernels, protocols and scanner model had less effect on contrast. Conclusion Water with a bone simulant is a preferable environment for evaluating novel cardiac CT contrast agents. Relative iodine-CA vs. gold-NP contrast is dependent on the scanning conditions used. Optimal scanning conditions for gold-NP will likely use an X-ray tube voltage of 120 kV. PMID:22766909
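
The slope comparison described above can be sketched as an ordinary least-squares fit within the linear range, with a t statistic for the difference between two slopes; the variable names are hypothetical:

```python
import numpy as np

def slope_with_se(conc_mM, hu):
    """Least-squares slope (HU per mM) and its standard error, within the
    linear range of attenuation vs concentration."""
    x, y = np.asarray(conc_mM, float), np.asarray(hu, float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    sxx = (x - x.mean()) @ (x - x.mean())
    se = np.sqrt((resid @ resid) / (n - 2) / sxx)
    return slope, se

def t_value(b1, se1, b2, se2):
    """t statistic for the difference between two regression slopes,
    e.g. iodine-CA vs gold-NP at the same tube voltage."""
    return (b1 - b2) / np.sqrt(se1 ** 2 + se2 ** 2)
```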

  11. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), Matrix, field of view (FOV), and coil selection in terms of CNR (contrast-to-noise ratio) and SNR (signal-to-noise ratio) by comparing axial images and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time=12.08 min, TR=5000 msec, TE=300 msec, 3 number of excitations (NEX), FOV=12 cm, matrix=256 x 256, slice thickness=0.5 mm/0.0 sp, echo train=64, bandwidth=±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as scala tympani and scala vestibuli), making it possible to determine inner ear anomalies and the patency of cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images in the labyrinth, high CNR, SNR, and spatial resolution are significant factors at the present time. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)

  12. Integral parameters for the Godiva benchmark calculated by using theoretical and adjusted fission spectra of 235U

    International Nuclear Information System (INIS)

The theoretical and adjusted Watt spectrum representations for 235U are used as weighting functions to calculate Keff and σf28/σf25 for the Godiva benchmark. The results obtained show that the values of Keff and σf28/σf25 are not affected by the change in spectrum form. (author)
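
For context, the Watt representation referred to above has the closed form below; the parameter values shown are those commonly quoted for thermal-neutron-induced fission of 235U, and the adjusted spectrum amounts to refitted a and b:

```latex
% Watt fission spectrum used as the weighting function
\chi(E) \;=\; C\, e^{-E/a}\,\sinh\!\sqrt{bE},
\qquad a \approx 0.988\ \mathrm{MeV},\quad b \approx 2.249\ \mathrm{MeV}^{-1},
% with any spectrum-averaged quantity following as
\qquad \langle\sigma\rangle \;=\;
\frac{\int_0^\infty \sigma(E)\,\chi(E)\,\mathrm{d}E}
     {\int_0^\infty \chi(E)\,\mathrm{d}E}.
```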

  13. Analysis of Left Ventricular Functional Parameters in Normal Korean Subjects by ECG Gated Blood Pool Scan

    International Nuclear Information System (INIS)

The demand for refined noninvasive and quantitative assessment of left ventricular (LV) function is increasing. To assess normal values of left ventricular functional parameters during both systole and diastole by a scintigraphic method using a computerized triple-head gamma camera, and to evaluate correlations between these parameters, ECG gated blood pool scans with 99mTc-human serum albumin were performed in 94 normal Korean subjects. Ejection fraction (EF), systolic parameters [peak emptying rate (PER), average emptying rate (AER), time to peak emptying rate (TPER)] and diastolic parameters [peak filling rate (PFR), average filling rate (AFR), time to peak filling rate (TPFR)] were obtained by analysis of the LV time-activity curve; the correlation of these parameters with age and sex, and the correlations between the parameters, were evaluated. 1) The mean ejection fraction in the study subjects was 59.6 ± 5.25% and showed no significant correlation with age (r=0.08) or sex, but was most strongly correlated with PFR (r=0.46, p<0.001), PER (r=0.41, p<0.001), AFR (r=0.34, p<0.001) and AER (r=0.28, p<0.01). 2) Mean values of the systolic parameters were as follows: PER=3.22 ± 0.50 end-diastolic volume/sec, AER=2.22 ± 0.45 end-diastolic volume/sec, TPER=103.5 ± 29.30 msec. They showed no significant correlation with age or sex. 3) Mean values of the diastolic parameters were as follows: PFR=2.71 ± 0.51 end-diastolic volume/sec, AFR=1.83 ± 0.44 end-diastolic volume/sec, TPFR=132.1 ± 33.45 msec. They showed strong correlations with age (r=-0.70, -0.64, 0.37; p<0.001). Left ventricular functional parameters in normal Korean subjects were obtained reliably by this computerized scintigraphic method and may be applied to the evaluation of cardiac function in diseased patients.
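
The systolic and diastolic rate parameters are, in essence, extrema of the derivative of the LV time-activity curve normalised to end-diastolic counts. A minimal sketch of that extraction (ignoring the cyclic reordering and filtering a clinical implementation would need):

```python
import numpy as np

def peak_rates(t_ms, counts):
    """Peak emptying and filling rates, in end-diastolic volumes per
    second, from a gated blood pool LV time-activity curve."""
    t = np.asarray(t_ms, float) / 1000.0
    c = np.asarray(counts, float) / np.max(counts)  # normalise to EDV = 1
    dcdt = np.gradient(c, t)
    per = -dcdt.min()                  # steepest emptying (systole)
    pfr = dcdt.max()                   # steepest filling (diastole)
    tper = t[np.argmin(dcdt)] * 1000   # time to PER [ms]
    tpfr = t[np.argmax(dcdt)] * 1000
    return per, pfr, tper, tpfr
```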

  14. Investigation of scanning parameters for thyroid fine needle aspiration cytology specimens: A pilot study

    Directory of Open Access Journals (Sweden)

    Maheswari S Mukherjee

    2015-01-01

Full Text Available Background: Interest in developing more feasible and affordable applications of virtual microscopy in the field of cytology continues to grow. Aims: The aim of this study was to investigate the scanning parameters for thyroid fine needle aspiration (FNA) cytology specimens. Subjects and Methods: A total of twelve glass slides from thyroid FNA cytology specimens were digitized at ×40 with a 1 micron (μ) interval using seven focal plane (FP) levels (Group 1), five FP levels (Group 2) and three FP levels (Group 3) on an iScan Coreo Au scanner (Ventana, AZ, USA), producing 36 virtual images (VI). With an average wash-out period of 2 days, three participants diagnosed the preannotated cells of Groups 1, 2 and 3 using BioImagene's Image Viewer (version 3.1; Ventana, Inc., Tucson, AZ, USA), and the corresponding 12 glass slides (Group 4) using conventional light microscopy. Results: All three raters correctly identified and showed complete agreement on the glass and VI for 86% of the cases at FP level 3 and 83% of the cases at both FP levels 5 and 7. The intra-observer concordance between the glass slides and VI for all three raters was highest (97%) for level 3 and glass, and the same (94%) for level 5 and glass and for level 7 and glass. The inter-rater reliability was found to be highest for the glass slides and three FP levels (77%), followed by five FP levels (69.5%) and seven FP levels (69.1%). Conclusions: This pilot study found that among the three different FP levels, the VI digitized using three FP levels had slightly higher concordance, intra-observer concordance and inter-rater reliability. Scanning additional levels above three FP levels did not improve concordance. We believe that there is no added benefit to acquiring five FP levels or more, especially when considering file size and storage costs. Hence, this study reports that three FP levels and a 1 μ interval could be the potential scanning parameters for thyroid FNA cytology specimens.

  15. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    International Nuclear Information System (INIS)

The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, for the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated from the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 serve as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of these lattices were calculated using the lattice transport code WIMSD-5B with the generated 69-group cross-section library, and compared to the measured values as well as to the results of the Monte Carlo code MCNP. In most cases, the calculated integral parameters show good agreement with the experiment and the MCNP results. In addition, the group constants in WIMS format for the isotopes U-235 and U-238 were compared between the two data files using the WIMS library utility code WILLIE and found to be nearly identical, with insignificant differences. This analysis therefore validates the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking of the integral parameters of the TRX and BAPL lattices, and can also serve as a basis for further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  16. The quality of reconstructed 3D images in multidetector-row helical CT: experimental study involving scan parameters

    International Nuclear Information System (INIS)

To determine which multidetector-row helical CT scanning technique provides the best-quality reconstructed 3D images, and to assess differences in image quality according to the levels of the scanning parameters used. Four objects with different surfaces and contours were scanned using multidetector-row helical CT at three detector-row collimations (1.25, 2.50, 5.00 mm), two pitches (3.0, 6.0) and three degrees of overlap between the reconstructed slices (0%, 25%, 50%). Reconstructed 3D images of the resulting 72 data sets were produced using volumetric rendering. The 72 images were graded on a scale from 1 (worst) to 5 (best) for each of four rating criteria, giving a mean score for each criterion and an overall mean score. Statistical analysis was used to assess differences in image quality according to scanning parameter levels. The mean score for each rating criterion, and the overall mean score, varied significantly according to the scanning parameter levels used. With regard to detector-row collimation and pitch, all parameter levels gave rise to significant differences, while for the degree of overlap of reconstructed slices there were significant differences between overlaps of 0% and 50% at all parameter levels, and between overlaps of 25% and 50% in overall accuracy and overall mean score. Among the 18 scanning sequences, the highest score (4.94) was achieved with 1.25 mm detector-row collimation, 3.0 pitch and 50% overlap between reconstructed slices. Comparison of the quality of reconstructed 3D images obtained using multidetector-row helical CT and various scanning techniques indicated that the 1.25 mm, 3.0, 50% scanning sequence was best. Quality improved as detector-row collimation decreased, as pitch was reduced from 6.0 to 3.0, and as overlap between reconstructed slices increased.

  17. Comparison of nuclear parameters for a LMFBR heterogenous Benchmark core. Influence of different basic data sets and processing codes

    International Nuclear Information System (INIS)

A LMFBR heterogeneous core model was proposed a few years ago by CEA as a benchmark core for comparative calculations. The geometrical RZ model consists of three radial fissile zones of the same enrichment, divided at the midplane by an axial slice of internal breeder material. The fissile zones are separated by three internal breeder zones: one central zone and two breeder rings. The core has been studied with 2D diffusion codes in 10 to 25 energy groups. Comparisons have been made between the CEA (CARNAVAL III), INTERATOM (KFKINR) and STUDSVIK (ENDF IV) solutions. The spread in keff is 1.7 percent, with the lowest value for STUDSVIK (ENDF IV) and the highest value for INTERATOM (KFKINR). The spread in breeding ratio is 0.03, with the highest value for STUDSVIK and the lowest for INTERATOM. This spread in keff and BR is of the same magnitude as for the homogeneous benchmark core. The variations in the sodium void effect between the CARNAVAL III, KFKINR and ENDF IV solutions are rather similar for the heterogeneous and homogeneous benchmark cores. Comparison of one-group core fission and capture cross sections indicates a dominating influence of the processing codes. The influence on keff seems to be smaller due to cancelling effects. (author)

  18. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    Science.gov (United States)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

This paper presents the parametric effects, microstructure, micro-hardness and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. The parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA) and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and the heat affected depth is measured quantitatively by the Vickers hardness test. The results indicate that laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression models are found to be in good agreement with experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat affected depth, respectively.
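
A second-order RSM model of the kind reported above can be sketched as an ordinary least-squares fit of a full quadratic in two of the factors, e.g. laser power P and approach angle A (the data arrays are hypothetical):

```python
import numpy as np

def fit_quadratic_rsm(P, A, y):
    """Least-squares second-order response surface
    y ~ b0 + b1*P + b2*A + b3*P^2 + b4*A^2 + b5*P*A,
    e.g. surface temperature vs laser power P and approach angle A."""
    X = np.column_stack([np.ones_like(P), P, A, P ** 2, A ** 2, P * A])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ beta
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot  # coefficients and R^2
```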

  19. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  20. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...

  1. Derivation of tree stem structural parameters from static terrestrial laser scanning data

    Science.gov (United States)

    Tian, Wei; Lin, Yi; Liu, Yajing; Niu, Zheng

    2014-11-01

Accurate tree-level characteristic information is increasingly demanded for forest management and environmental protection. The cutting-edge remote sensing technique of terrestrial laser scanning (TLS) shows potential for filling this gap. This study focuses on exploring methods for deriving various tree stem structural parameters, such as stem position, diameter at breast height (DBH), the degree of stem shrinkage, and the elevation and azimuth angles of stem inclination. The test data were collected with a Leica HDS6100 TLS system in Seurasaari, Southern Finland, in September 2010. In the field, the reference positions and DBHs of 100 trees were measured manually. The isolation of individual trees is based on interactive segmentation of point clouds. The estimation of stem position and DBH is based on a scheme of layering followed by least-squares circle fitting in each layer. The slope of a robust line fit between the height of each layer and its diameter is used to characterize stem shrinkage. The elevation angle of stem inclination is described by the angle between the ground plane and the fitted stem axis, and the angle between the north direction and the fitted stem axis gives the azimuth angle of stem inclination. The DBH estimation performed with an R-squared (R2) of 0.93 and a root mean square error (RMSE) of 0.038 m. The average angle corresponding to stem shrinkage is -1.86°. The elevation angles of stem inclination ranged from 31° to 88.3°. The results have largely validated TLS for deriving multiple stem structural parameters, which helps to better capture the characteristics of individual trees.
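
A minimal sketch of the layer-wise circle fitting used for stem position and DBH, using the algebraic (Kasa) least-squares formulation; extracting the layer of points is assumed done:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to the TLS points of one
    stem layer; returns centre (a, b) and diameter. Solves
    x^2 + y^2 = c1*x + c2*y + c3 with c1 = 2a, c2 = 2b, c3 = r^2 - a^2 - b^2."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (c1, c2, c3), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b = c1 / 2.0, c2 / 2.0
    r = np.sqrt(c3 + a ** 2 + b ** 2)
    return a, b, 2.0 * r

# DBH: fit the layer of points at 1.3 m above ground
# a, b, dbh = fit_circle(layer_x, layer_y)
```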

  2. The effect of scan parameters on cone beam CT trabecular bone microstructural measurements of the human mandible

    OpenAIRE

    Ibrahim, N; Parsa, A.; Hassan, B.; van der Stelt, P; Aartman, I.H.A.; Wismeijer, D.

    2014-01-01

The objective of this study was to investigate the effect of different cone beam CT scan parameters on trabecular bone microstructure measurements. A human mandibular cadaver was scanned using a cone beam CT system (3D Accuitomo 170; J. Morita, Kyoto, Japan). 20 cone beam CT images were obtained using 5 different fields of view (4 x 4 cm, 6 x 6 cm, 8 x 8 cm, 10 x 10 cm and 10 x 5 cm), 2 types of rotation steps (180 degrees and 360 degrees) and 2 scanning resolutions (standard and high). Image analysis software...

  3. TH-C-18A-11: Investigating the Minimum Scan Parameters Required to Generate Free-Breathing Fast-Helical CT Scans Without Motion-Artifacts

    International Nuclear Information System (INIS)

Purpose: A recently proposed 4D-CT protocol uses deformable registration of free-breathing fast-helical CT scans to generate a breathing motion model. In order to allow accurate registration, free-breathing images are required to be free of doubling-artifacts, which arise when tissue motion is greater than scan speed. This work identifies the minimum scanner parameters required to successfully generate free-breathing fast-helical scans without doubling-artifacts. Methods: 10 patients were imaged under free breathing conditions 25 times in alternating directions with a 64-slice CT scanner using a low dose fast helical protocol. A high temporal resolution (0.1s) 4D-CT was generated using a patient specific motion model and patient breathing waveforms, and used as the input for a scanner simulation. Forward projections were calculated using helical cone-beam geometry (800 projections per rotation) and a GPU accelerated reconstruction algorithm was implemented. Various CT scanner detector widths and rotation times were simulated, and verified using a motion phantom. Doubling-artifacts were quantified in patient images using structural similarity maps to determine the similarity between axial slices. Results: Increasing amounts of doubling-artifacts were observed with increasing rotation times > 0.2s for 16×1mm slice scan geometry. No significant increase in doubling-artifacts was observed for 64×1mm slice scan geometry up to 1.0s rotation time, although blurring artifacts were observed >0.6s. Using a 16×1mm slice scan geometry, a rotation time of less than 0.3s (53mm/s scan speed) would be required to produce images of similar quality to a 64×1mm slice scan geometry. Conclusion: The 16-slice CT scanners present in most Radiation Oncology departments are not capable of generating free-breathing sorting-artifact-free images in the majority of patients. The next generation of CT scanners should be capable of at least 53mm/s scan speed.

  4. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experiment as well as MCNP values in this study. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation of the SRAC2006 code system based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. This is achieved through analysis of the integral parameters of the TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as the nuclear data libraries. The integral parameters of these lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at a room temperature of 20 °C, based on the above libraries. The calculated integral parameters are compared to the measured values as well as to MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. In most cases, the integral parameters demonstrate good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices, in the fast and thermal energy ranges respectively, were compared between the above libraries and found to be nearly identical, with insignificant differences. This analysis therefore reflects the validation study of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0, and can also serve as a basis for further neutronics calculations of the TRIGA Mark-II research reactor.

  5. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the company's performance is in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  6. Optimized treatment parameters to account for interfractional variability in scanned ion beam therapy of lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Brevet, Romain

    2015-02-04

Scanned ion beam therapy of lung tumors is severely limited in its clinical applicability by intrafractional organ motion, interference effects between beam and tumor motion (interplay), as well as interfractional anatomic changes. To compensate for dose deterioration by intrafractional motion, motion mitigation techniques such as gating have been developed. The latter confines the irradiation to a predetermined breathing state, usually the stable end-exhale phase. However, optimization of the treatment parameters is needed to further improve target dose coverage and normal tissue sparing. The aim of the study presented in this dissertation was to determine treatment planning parameters that permit recovery of good target coverage and homogeneity during a full course of lung tumor treatments. For 9 lung tumor patients from MD Anderson Cancer Center (MDACC), a total of 70 weekly time-resolved computed tomography (4DCT) datasets were available, which depict the evolution of the patient anatomy over the several fractions of the treatment. Using the GSI in-house treatment planning system (TPS) TRiP4D, 4D simulations were performed on each weekly 4DCT for each patient using gating and optimization of a single treatment plan based on a planning CT acquired prior to treatment. It was found that using a large beam spot size, a short gating window (GW), additional margins and multiple fields permitted the best results to be obtained, yielding an average target coverage (V95) of 96.5%. Two motion mitigation techniques, one approximating the rescanning process (multiple irradiations of the target with a fraction of the planned dose) and one combining the latter and gating, were then compared to gating. Neither showed an improvement in target dose coverage or in normal tissue sparing. Finally, the total dose delivered to each patient in a simulation of a fractioned treatment was calculated and clinical requirements in terms of target coverage and normal tissue sparing were

  7. Optimized treatment parameters to account for interfractional variability in scanned ion beam therapy of lung cancer

    International Nuclear Information System (INIS)

Scanned ion beam therapy of lung tumors is severely limited in its clinical applicability by intrafractional organ motion, interference effects between beam and tumor motion (interplay), as well as interfractional anatomic changes. To compensate for dose deterioration by intrafractional motion, motion mitigation techniques such as gating have been developed. The latter confines the irradiation to a predetermined breathing state, usually the stable end-exhale phase. However, optimization of the treatment parameters is needed to further improve target dose coverage and normal tissue sparing. The aim of the study presented in this dissertation was to determine treatment planning parameters that permit recovery of good target coverage and homogeneity during a full course of lung tumor treatments. For 9 lung tumor patients from MD Anderson Cancer Center (MDACC), a total of 70 weekly time-resolved computed tomography (4DCT) datasets were available, which depict the evolution of the patient anatomy over the several fractions of the treatment. Using the GSI in-house treatment planning system (TPS) TRiP4D, 4D simulations were performed on each weekly 4DCT for each patient using gating and optimization of a single treatment plan based on a planning CT acquired prior to treatment. It was found that using a large beam spot size, a short gating window (GW), additional margins and multiple fields permitted the best results to be obtained, yielding an average target coverage (V95) of 96.5%. Two motion mitigation techniques, one approximating the rescanning process (multiple irradiations of the target with a fraction of the planned dose) and one combining the latter and gating, were then compared to gating. Neither showed an improvement in target dose coverage or in normal tissue sparing. Finally, the total dose delivered to each patient in a simulation of a fractioned treatment was calculated and clinical requirements in terms of target coverage and normal tissue sparing were

  8. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with local methods than the magnetic insulated boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and the performance benchmark; in the present study, we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability.

  9. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    Science.gov (United States)

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. PMID:27088248
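
The abstract does not give the exact inhibition form or constants, but ADM1-style LCFA inhibition is commonly written as a non-competitive factor multiplying the Monod uptake rate; an illustrative sketch with placeholder constants, not the values calibrated in the paper:

```python
def lcfa_inhibited_uptake(s, x, s_lcfa, k_m=6.0, K_S=0.3, K_I=1.0):
    """Monod uptake rate with non-competitive LCFA inhibition:
    rho = k_m * X * S/(K_S + S) * 1/(1 + S_lcfa/K_I).
    k_m [1/d], K_S and K_I [kg COD/m3] are illustrative constants."""
    return k_m * x * s / (K_S + s) / (1.0 + s_lcfa / K_I)
```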

  10. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  11. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  12. Precious benchmarking

    International Nuclear Information System (INIS)

Recently, a new word has been added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department of Jaslovske Bohunice NPP to explain this term to us. (author)

  13. Body Tumor CT Perfusion Protocols: Optimization of Acquisition Scan Parameters in a Rat Tumor Model

    OpenAIRE

    Tognolini, Alessia; Schor-Bardach, Rachel; Pianykh, Oleg S.; Wilcox, Carol J.; Raptopoulos, Vassilios; Goldberg, S. Nahum

    2009-01-01

    Purpose: To evaluate the effects of total scanning time (TST), interscan delay (ISD), inclusion of image at peak vascular enhancement (IPVE), and selection of the input function vessel on the accuracy of tumor blood flow (BF) calculation with computed tomography (CT) in an animal model.

  14. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Science.gov (United States)

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

In the automotive industry, as well as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press-hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is the huge footprint needed to realize the required stability and work envelope. Alternatively, a robot-based system might be advantageous if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM "beam in motion" system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot-based laser processing machine which meets all OEM specs for processing press-hardened steel parts. A benchmark of the BIM versus a gantry system was done regarding all parameters required to fulfil OEM specifications for press-hardened steel parts. As a result, a highly productive, accurate and efficient system can be described, based on one or multiple robot modules working simultaneously together. The paper presents the improvements to the robot machine concept BIM addressed in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and parameters of the BIM system versus a gantry system for 3D cutting applications using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at a small footprint is given.

  15. A benchmark study for different numerical parameters and their impact on the calculated strain levels for a model part door outer

    International Nuclear Information System (INIS)

To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in the year 2000. The main objective of this group was the continuous study and discussion of numerical and material effects in simulation jobs, and working out possible solutions. As a first theme, the group selected the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions. The part itself is a fictional outer body panel into which the original door handle of the VW Golf A4 has been constructed, a typical position of possible material necking or rupture in the press shop. All conditions needed for a successful simulation were taken care of in advance: material data, boundary conditions, friction, FLC and others were determined for the two materials under investigation - a mild steel and a dual phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations with different programs as well as with different parameters showed that some parameters have a negligible impact on the result while others have a strong one - and thereby a differing impact on possible material failure prediction.

  16. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    Energy Technology Data Exchange (ETDEWEB)

Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdom and St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
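
A simplified sketch of the kind of robustness tool described above: sample random rigid setup shifts, translate the dose grid and re-evaluate target coverage. The rigid-shift approximation and all inputs are illustrative assumptions, not the authors' tool:

```python
import numpy as np
from scipy import ndimage

def v95_under_setup_errors(dose, target_mask, d_presc, voxel_mm,
                           sigma_mm=2.0, n=100, rng=None):
    """Distribution of target V95 [%] under random rigid setup errors:
    sample isotropic Gaussian shifts, translate the 3D dose grid and
    re-evaluate coverage against 95% of the prescribed dose."""
    rng = rng if rng is not None else np.random.default_rng(0)
    v95 = []
    for _ in range(n):
        shift_vox = rng.normal(0.0, sigma_mm, 3) / np.asarray(voxel_mm)
        d = ndimage.shift(dose, shift_vox, order=1, mode="nearest")
        v95.append((d[target_mask] >= 0.95 * d_presc).mean() * 100.0)
    return np.asarray(v95)
```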

  17. Radioiodine scan index: A simplified, quantitative treatment response parameter for metastatic thyroid carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jong Ryool; Ahn, Byeong Cheol; Jeong, Shin Young; Lee, Sang Woo; Lee, Jae Tae [Dept. of Nuclear Medicine, Kyungpook National University School of Medicine and Hospital, Daegu (Korea, Republic of)

    2015-09-15

We aimed to develop and validate a simplified, novel quantification method for radioiodine whole-body scans (WBSs) as a predictor of the treatment response in differentiated thyroid carcinoma (DTC) patients with distant metastasis. We retrospectively reviewed serial WBSs after radioiodine treatment from 2008 to 2011 in patients with metastatic DTC. For standardization of TSH stimulation, only the subset of patients whose TSH level was fully stimulated (TSH > 80 mU/l) was enrolled. The radioiodine scan index (RSI) was calculated as the ratio of tumor-to-brain uptake. We examined correlations between the RSI and the TSH-stimulated serum thyroglobulin (TSHsTg) level and between the RSI and the Tg reduction rate of consecutive radioiodine treatments. A total of 30 rounds of radioiodine treatment in 15 patients were eligible. Tumor histology was papillary in 11 and follicular in 4 cases. The TSHsTg level was a mean of 980 ng/ml (range, 0.5–11,244). The Tg reduction rate after treatment was a mean of −7% (range, −90% to 210%). The mean RSI was 3.02 (range, 0.40–10.97). RSI was positively correlated with the TSHsTg level (R2 = 0.3084, p = 0.001) and negatively correlated with the Tg reduction rate (R2 = 0.2993, p = 0.037). The regression equation to predict treatment response was as follows: Tg reduction rate = −14.581 × RSI + 51.183. The radioiodine scan index derived from conventional WBS feasibly reflects the serum Tg level in patients with metastatic DTC, and it may be useful for predicting the biologic treatment response after radioiodine treatment.
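
The RSI and the reported regression are simple enough to apply directly; a sketch using the equation quoted in the abstract (the count inputs are hypothetical):

```python
def radioiodine_scan_index(tumor_counts, brain_counts):
    """RSI as the ratio of tumour to brain uptake on the whole-body scan."""
    return tumor_counts / brain_counts

def predicted_tg_reduction(rsi):
    """Regression reported in the abstract:
    Tg reduction rate [%] = -14.581 * RSI + 51.183."""
    return -14.581 * rsi + 51.183

print(predicted_tg_reduction(radioiodine_scan_index(3.0, 1.0)))  # RSI = 3
```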

  18. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    Energy Technology Data Exchange (ETDEWEB)

    Klasic, B. [Hospital for pulmonary diseases, Zagreb (Croatia); Knezevic, Z.; Vekic, B. [Rudjer Boskovic Institute, Zagreb (Croatia); Brnic, Z.; Novacic, K. [Merkur Univ. Hospital, Zagreb (Croatia)

    2006-07-01

Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, using thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After statistical analysis of our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  19. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    International Nuclear Information System (INIS)

Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, using thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After statistical analysis of our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  20. Methodology for Determining Optimal Exposure Parameters of a Hyperspectral Scanning Sensor

    Science.gov (United States)

    Walczykowski, P.; Siok, K.; Jenerowicz, A.

    2016-06-01

    The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e. time, gain value) and flight parameters (i.e. altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: a MicroHyperspec A-series VNIR hyperspectral camera, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each image, several exposure parameters were varied, such as the aperture value, the exposure time and the speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between chosen parameters were established: the ground sampling distance (GSD) and the distance between the sensor and the target; the speed of the camera and the distance between the sensor and the target; the exposure time and the gain value; and the density number and the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
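
    The first of these dependencies follows from simple push-broom geometry. The sketch below is a minimal illustration assuming a nadir-looking imager with hypothetical pixel pitch, focal length and line rate (none taken from the paper):

```python
# Hedged sketch of push-broom imaging geometry: ground sampling distance (GSD)
# and the platform speed at which one scan line advances exactly one GSD.

def gsd(pixel_pitch_m: float, focal_length_m: float, distance_m: float) -> float:
    """Ground sampling distance for a nadir-looking imager."""
    return pixel_pitch_m * distance_m / focal_length_m

def max_speed(gsd_m: float, line_rate_hz: float) -> float:
    """Platform speed that keeps along-track sampling equal to the GSD."""
    return gsd_m * line_rate_hz

g = gsd(7.4e-6, 0.012, 50.0)      # assumed 7.4 um pixels, 12 mm lens, 50 m range
print(g, max_speed(g, 100.0))     # ~0.031 m GSD, ~3.1 m/s at 100 lines/s
```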

  1. Time-domain scanning optical mammography: II. Optical properties and tissue parameters of 87 carcinomas

    International Nuclear Information System (INIS)

    Within a clinical trial on scanning time-domain optical mammography reported on in a companion publication (part I), craniocaudal and mediolateral projection optical mammograms were recorded from 154 patients, suspected of having breast cancer. Here we report on in vivo optical properties of the subset of 87 histologically validated carcinomas which were visible in optical mammograms recorded at two or three near-infrared wavelengths. Tumour absorption and reduced scattering coefficients were derived from distributions of times of flight of photons recorded at the tumour site employing the model of diffraction of photon density waves by a spherical inhomogeneity, located in an otherwise homogeneous tissue slab. Effective tumour radii, taken from pathology, and tumour location along the compression direction, deduced from off-axis optical scans of the tumour region, were included in the analysis as prior knowledge, if available. On average, tumour absorption coefficients exceeded those of surrounding healthy breast tissue by a factor of about 2.5 (670 nm), whereas tumour reduced scattering coefficients were larger by about 20% (670 nm). From absorption coefficients at 670 nm and 785 nm total haemoglobin concentration and blood oxygen saturation were deduced for tumours and surrounding healthy breast tissue. Apart from a few outliers total haemoglobin concentration was observed to be systematically larger in tumours compared to healthy breast tissue. In contrast, blood oxygen saturation was found to be a poor discriminator for tumours and healthy breast tissue; both median values of blood oxygen saturation are the same within their statistical uncertainties. However, the ratio of total haemoglobin concentration over blood oxygen saturation further improves discrimination between tumours and healthy breast tissue. For 29 tumours detected in optical mammograms recorded at three wavelengths (670 nm, 785 nm, 843 nm or 884 nm), scatter power was derived from transport

  2. Effect of Tropicamide and Homatropine Eye Drops on A-Scan Parameters of the Phakic Normal Eyes

    OpenAIRE

    Jagdish Bhatia

    2011-01-01

    Objectives: A prospective study to evaluate the changes in A-Scan axial parameters of phakic normal eyes before and after instillation of 1% topical Tropicamide and 2% Homatropine eye drops. Methods: Anterior chamber depth, lens thickness, vitreous chamber length, and ocular axial length were measured in 76 eyes before and after cycloplegia induced by 1% topical Tropicamide, and in 28 eyes with 2% Homatropine eye drops. Results: Anterior chamber depth demonstrated increase from baseline reading...

  3. CCF benchmark test

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity to demonstrate and justify their interpretations of events and their methods and models for analyzing CCF. The participants of this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess, in a qualitative way, some 200 event reports on isolation valves. They were then to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference set of well-defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)
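
    For readers unfamiliar with CCF quantification, the sketch below illustrates the simplest standard model, the beta factor; the event counts are invented, and the benchmark participants applied their own, generally richer, models:

```python
# Toy beta-factor estimate: beta is the fraction of a component's failures
# that occur as common cause failures.

def beta_factor(n_independent: int, n_common_cause: int) -> float:
    """beta = CCF failures / total failures of the component."""
    return n_common_cause / (n_independent + n_common_cause)

# e.g. 3 CCF events among 200 valve failure reports (illustrative numbers)
print(beta_factor(197, 3))    # 0.015
```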

  4. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  5. Ultrasonic C-Scan Parameters for Detection of Hydride Blisters in Zirconium Pressure Tube

    International Nuclear Information System (INIS)

    Since Zr-2.5Nb pressure tubes have a high risk of blister formation during their operation in pressurized heavy water reactors, there has been a strong incentive to develop a method for the non-destructive detection of blisters grown on the tube surfaces. However, because there is little mismatch in acoustic impedance between the hydride blisters and the zirconium matrix, it is not easy to distinguish the boundary between the blister and the zirconium matrix with conventional methods. This study focused on the development of an ultrasonic method to detect the hydride blisters formed on Zr-2.5Nb pressure tubes. Hydride blisters were grown on the outer surface of the zirconium pressure tubes using a cold finger attached to steady-state thermal diffusion equipment. An ultrasonic velocity ratio method, as well as conventional ultrasonic parameters with the immersion technique, was developed to detect smaller hydride blisters on the zirconium pressure tube.

  6. Reducing CT radiation dose with iterative reconstruction algorithms: The influence of scan and reconstruction parameters on image quality and CTDIvol

    International Nuclear Information System (INIS)

    Highlights: • Iterative reconstruction (IR) and filtered back projection (FBP) were compared. • CT image noise was reduced by 12.4%–52.2% using IR in comparison to FBP. • IR did not affect high- and low-contrast resolution. • CTDIvol was reduced by 26–50% using hybrid IR at comparable image quality levels. • IR produced good to excellent image quality in patients. - Abstract: Objectives: In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation doses. Additionally, patient CT studies were evaluated to confirm our phantom results. Methods: Helical and axial 256-slice multidetector CT scans of the phantom (Catphan®) were performed with varying tube voltages (80–140 kV) and currents (30–200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing iterations, and soft and sharp kernels. Furthermore, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation doses of both phantom and patient scans. Results: In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independent of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, but improved in scans with doses <5 mGy, although object detectability generally decreased with the lowering of exposure. At comparable image quality levels, CTDIvol was reduced by 26–50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased. Conclusions: Our

  7. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    International Nuclear Information System (INIS)

    Improved TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratios closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. The results of this study are also reported.

  8. Influence of confocal scanning laser microscopy specific acquisition parameters on the detection and matching of speeded-up robust features.

    Science.gov (United States)

    Stanciu, Stefan G; Hristu, Radu; Stanciu, George A

    2011-04-01

    The robustness and distinctiveness of local features to various object or scene deformations and to modifications of the acquisition parameters play key roles in the design of many computer vision applications. In this paper we present the results of our experiments on the behavior of a recently developed technique for local feature detection and description, Speeded-Up Robust Features (SURF), under image modifications specific to Confocal Scanning Laser Microscopy (CSLM). We analyze the repeatability of detected SURF keypoints and the precision-recall of their matching under modifications of three important CSLM parameters: pinhole aperture, photomultiplier (PMT) gain and laser beam power. During any investigation by CSLM these three parameters have to be modified, individually or together, in order to optimize the contrast and the Signal-to-Noise Ratio (SNR); they are also inherently modified when changing the microscope objective. Our experiments show that a substantial number of SURF features can be detected at the same physical locations in images collected at different values of the pinhole aperture, PMT gain and laser beam power, and can subsequently be matched successfully based on their descriptors. In the final part, we exemplify the potential of SURF in CSLM imaging by presenting a SURF-based computer vision application that deals with the mosaicing of images collected by this technique. PMID:21349249
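
    A minimal sketch of SURF detection and ratio-test matching is given below, using OpenCV's contrib implementation (which must be built with the non-free algorithms enabled); the image file names are placeholders, and this is not the authors' code:

```python
# Illustrative SURF keypoint detection and Lowe's ratio-test matching with
# OpenCV (requires opencv-contrib with non-free algorithms enabled).
import cv2

img1 = cv2.imread("cslm_gain_low.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
img2 = cv2.imread("cslm_gain_high.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Keep matches whose best distance is clearly below the second-best
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(f"{len(good)} matches out of {len(kp1)}/{len(kp2)} keypoints")
```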

  9. Extracting Roof Parameters and Heat Bridges Over the City of Oldenburg from Hyperspectral, Thermal, and Airborne Laser Scanning Data

    Science.gov (United States)

    Bannehr, L.; Luhmann, Th.; Piechel, J.; Roelfs, T.; Schmidt, An.

    2011-09-01

    Remote sensing methods are used to obtain different kinds of information about the state of the environment. Within the cooperative research project HiReSens, funded by the German BMBF, a hyperspectral scanner, an airborne laser scanner, a thermal camera, and an RGB camera are employed on a small aircraft to determine roof material parameters and heat bridges of rooftops over the city of Oldenburg, Lower Saxony. HiReSens aims to combine various geometrically highly resolved data in order to obtain relevant evidence about the state of the city's buildings. Thermal data are used to obtain the energy distribution of single buildings. The use of hyperspectral data yields information about the material consistency of roofs. From airborne laser scanning (ALS) data, digital surface models are inferred. They form the basis for locating the best orientations for solar panels on the city's buildings. The combination of the different data sets offers the opportunity to capitalize on synergies between differently working systems. Central goals are the development of tools for the detection of heat bridges by means of thermal data, spectral characterization of roof parameters on the basis of hyperspectral data, and 3D capture of buildings from airborne laser scanner data. Collecting, analyzing and merging the data are not trivial, especially when the targeted resolution and accuracy are in the domain of a few decimetres. The results achieved need to be regarded as preliminary. Further investigations are still required to prove the accuracy in detail.

  11. Examination of scan parameters that influence image quality in three-dimensional rotational angiography (3D-RA)

    International Nuclear Information System (INIS)

    Because the image quality of three-dimensional rotational angiography (3D-RA) is influenced by various imaging parameters, it is difficult to obtain high-quality images from 3D-RA. In this study, we compared two methods of 3D-RA, the propeller rotation technique (PRT) and the roll rotation technique (RRT), for different image intensifier (I.I.) sizes (5, 7 and 9 inches) using a test chart and a handmade phantom. The results of this study demonstrated that one of the factors determining the image quality of 3D-RA was spatial resolution. Therefore, it was important to choose an optimum I.I. size close to the collimated region of interest (ROI) in clinical use. Another factor influencing image quality was the radiographic conditions, especially the setting of the tube voltage. This factor was indispensable for obtaining good image contrast, and high-voltage exposure was one of the causes of lower image contrast. Therefore, if image contrast was insufficient, the image quality of 3D-RA became worse with increasing tube voltage, because the tube voltage in this study was changed automatically according to the scanning method and I.I. size. In addition, because the spatial resolution of PRT was similar to that of RRT, we considered it better to use PRT, since the data acquisition time (scan time) of this technique was 4 seconds shorter than that of RRT; however, if PRT was used, it was necessary to set a suitable injection rate of contrast medium because the tube voltage setting of PRT was 10 kV higher than that of RRT. In conclusion, to improve the image quality of 3D-RA, we consider it necessary to obtain sufficient image contrast not influenced by high tube voltage and to choose an optimum I.I. size suitable for the spatial resolution of the ROI. (author)

  12. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-01-01

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
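
    The scanning technique itself is generic enough to sketch. The toy Metropolis-Hastings random walk below uses an assumed Gaussian stand-in for the likelihood rather than the experimental constraints scored in the paper; names and dimensions are illustrative only:

```python
# Schematic Markov Chain Monte Carlo scan over a model parameter space.
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta: np.ndarray) -> float:
    return -0.5 * np.sum(theta**2)        # toy Gaussian target, not real constraints

def mcmc_scan(n_steps: int, dim: int, step: float = 0.5) -> np.ndarray:
    chain = np.empty((n_steps, dim))
    theta = rng.normal(size=dim)
    ll = log_likelihood(theta)
    for i in range(n_steps):
        prop = theta + step * rng.normal(size=dim)   # random-walk proposal
        ll_prop = log_likelihood(prop)
        if np.log(rng.random()) < ll_prop - ll:      # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain[i] = theta                             # favoured points accumulate
    return chain

favoured = mcmc_scan(10_000, dim=6)   # e.g. six flavour-violating elements
```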

  13. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes

    OpenAIRE

    Mirian Ara; Antonio Ferreras; Pajarin, Ana B.; Pilar Calvo; Michele Figus; Paolo Frezzotti

    2015-01-01

    Objective. To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. Methods. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two...

  14. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  15. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  16. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  17. Investigation of the influence of image reconstruction filter and scan parameters on operation of automatic tube current modulation systems for different CT scanners

    International Nuclear Information System (INIS)

    Variation in the user-selected CT scanning parameters under automatic tube current modulation (ATCM) between hospitals has a substantial influence on the radiation doses and image quality for patients. The aim of this study was to investigate the effect of changing the image reconstruction filter and scan parameter settings on tube current, dose and image quality for various CT scanners operating under ATCM. The scan parameters varied were the pitch factor, rotation time, collimator configuration, kVp, image thickness and image filter convolution (FC) used for reconstruction. The Toshiba scanner varies the tube current to achieve a set target noise. Changes in the FC setting and image thickness for the first reconstruction were the major factors affecting patient dose. A two-step change in FC from smoother to sharper filters doubles the dose, but is counterbalanced by an improvement in spatial resolution. In contrast, Philips and Siemens scanners maintained tube current values similar to those for a reference image and patient, and the tube current varied only slightly for changes in individual CT scan parameters. The selection of a sharp filter increased the image noise, while use of iDose iterative reconstruction reduced the noise. Since the principles used by CT manufacturers for ATCM vary, it is important that the parameters which affect patient dose and image quality for each scanner are made clear to the operator to aid optimisation. (authors)
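
    The target-noise principle attributed to the Toshiba scanner can be illustrated with the usual first-order physics, where quantum noise scales roughly as the inverse square root of the tube current-time product; the numbers below are purely illustrative and not vendor data:

```python
# Back-of-the-envelope sketch: holding image noise at a target value implies a
# quadratic rescaling of the tube current-time product (mAs). Real ATCM systems
# are vendor-specific and considerably more elaborate.

def mas_for_target_noise(mas_ref: float, noise_ref: float, noise_target: float) -> float:
    """mAs needed to move from a reference noise level to a target level."""
    return mas_ref * (noise_ref / noise_target) ** 2

# A sharper filter that raises noise by 40% would need roughly twice the
# dose to restore the original noise level:
print(mas_for_target_noise(100.0, 1.4, 1.0))   # ~196 mAs
```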

  19. Influence of scan duration on the estimation of pharmacokinetic parameters for breast lesions: a study based on CAIPIRINHA-Dixon-TWIST-VIBE technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Wen; Zhao, Bin; Wang, Guangbin; Wang, Cuiyan [Shandong University, Department of MR Imaging, Shandong Medical Imaging Research Institute, Jinan, Shandong (China); Liu, Hui [Siemens Healthcare, MR Collaborations NE Asia, Shanghai (China)

    2015-04-01

    To evaluate the influence of scan duration on pharmacokinetic parameters and their performance in differentiating benign from malignant breast lesions. Dynamic breast imaging was performed on a 3.0-T MR system using a prototype CAIPIRINHA-Dixon-TWIST-VIBE (CDT-VIBE) sequence with a temporal resolution of 11.9 s. Enrolled in the study were 53 women with 55 lesions (26 benign and 29 malignant). Pharmacokinetic parameters (Ktrans, ve, kep and iAUC) were calculated for various scan durations from 1 to 7 min after injection of contrast medium using the Tofts model. Ktrans, kep and ve calculated from the 1-min dataset were significantly different from those calculated from the other datasets. In benign lesions, Ktrans, kep and ve were significantly different only between 1 min and 2 min (corrected P < 0.05), but in malignant lesions there were significant differences for all comparisons up to 6 min vs. 7 min (corrected P < 0.05). There were no significant differences in AUCs for any of the parameters (P > 0.05). In breast dynamic contrast-enhanced MRI the scan duration has a significant impact on pharmacokinetic parameters, but the diagnostic ability may not be significantly affected. A scan duration of 5 min after injection of contrast medium may be sufficient for calculation of Tofts model pharmacokinetic parameters. (orig.)
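
    For orientation, the standard Tofts model referred to above is Ct(t) = Ktrans ∫ Cp(τ) exp(−kep (t − τ)) dτ. The sketch below implements this convolution with an assumed bi-exponential arterial input function, not the one used in the study:

```python
# Sketch of the Tofts model tissue curve via discrete convolution.
import numpy as np

def tofts_ct(t: np.ndarray, ktrans: float, kep: float, cp: np.ndarray) -> np.ndarray:
    """Tissue concentration Ct(t) = Ktrans * (Cp conv exp(-kep t))."""
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)                              # impulse response
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

t = np.arange(0.0, 300.0, 11.9)                         # 11.9 s temporal resolution
cp = 5.0 * (np.exp(-0.01 * t) - np.exp(-0.1 * t))       # assumed AIF shape
ct = tofts_ct(t, ktrans=0.2 / 60, kep=0.5 / 60, cp=cp)  # rates per second
```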

  20. Visual information transfer. Part 1: Assessment of specific information needs. Part 2: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1985-01-01

    The present study explored eye scan behavior as a function of the level of subject training. Oculometric (eye scan) measures were recorded from each of ten subjects during training trials on a CRT-based flight simulation task. The task developed for the study incorporated subtasks representative of specific activities performed by pilots, but which could be performed at asymptotic levels within relatively short periods of training. Changes in eye scan behavior were examined as initially untrained subjects developed skill in the task. Eye scan predictors of performance on the task were found. Examination of eye scan in proximity to selected task events revealed differences in the distribution of looks at the instruments as a function of the level of training.

  1. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Science.gov (United States)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, as observed for the uniform scanning proton beam, needs to be evaluated. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique. We

  2. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  3. Simultaneous multi-parameter observation of Harringtonine-treating HL-60 cells with both two-photon and confocal laser scanning microscopy

    Institute of Scientific and Technical Information of China (English)

    张春阳; 李艳平; 马辉; 李素文; 薛绍白; 陈瓞延

    2001-01-01

    Harringtonine (HT), an anticancer drug isolated from the Chinese herb Cephalotaxus hainanensis Li, can induce apoptosis in promyelocytic leukemia HL-60 cells. With both two-photon laser scanning microscopy and confocal laser scanning microscopy in combination with the fluorescent probes Hoechst 33342, tetramethylrhodamine ethyl ester (TMRE) and Fluo 3-AM, we simultaneously observed HT-induced changes in nuclear morphology, mitochondrial membrane potential and intracellular calcium concentration ([Ca2+]i) in HL-60 cells, and developed a real-time, sensitive and non-invasive method for simultaneous multi-parameter observation of drug-treated living cells at the single-cell level.

  4. Effect of imaging parameters of spiral CT scanning on image quality for the dental implants. Visual evaluation using a semi-anthropomorphic mandible phantom

    International Nuclear Information System (INIS)

    The purpose of this study was to evaluate the effect of spiral CT scanning parameters on the image quality required for the planning of dental implant operations. A semi-anthropomorphic mandible phantom with artificial mandibular canals and tooth roots was used as a standard object for imaging. Spiral CT scans of the phantom placed in water phantoms with diameters of 20 and 16 cm were performed. The visibility of the artificial mandibular canal, made of a Teflon tube, and of the gaps between tooth apex and canal in the mandibular phantom was evaluated for various combinations of slice thickness, table speed, angle to the canal, and x-ray tube current. The tooth roots were made of PVC (polyvinyl chloride). The artificial mandibular canal was clearly observed on images with 1 mm slice thickness. At the same table speed of 2 mm/rotation, the images with the thin slice (1 mm) were superior to those with the thick slice (2 mm). The gap between tooth apex and canal was erroneously diagnosed on images with a table speed of 3 mm/rotation. Horizontal scanning parallel to the canal resulted in poor image quality for observation of mandibular canals because of the partial volume effect. A relatively high x-ray tube current (125 mA) at thin-slice (1 mm) scanning was required for scanning the mandibular phantom in the 20 cm water vessel. Spiral scanning with a slice thickness of 1 mm and a table speed of 1 or 2 mm/rotation seemed to be suitable for dental implants. The results of this study suggested that diagnosis from two independent spiral scans with different angles to the object was more accurate and more efficient than single spiral scanning. (author)

  5. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station of German (Siemens) design located in Argentina, moderated and cooled with heavy water. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons between the PUMA reactor code and MCNP5 for core parameters of a slightly idealized model of the Atucha-I core. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  6. Development of a two-parameter slit-scan flow cytometer for screening of normal and aberrant chromosomes: application to a karyotype of Sus scrofa domestica (pig)

    Science.gov (United States)

    Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph

    1992-07-01

    Laser fluorescence activated slit-scan flow cytometry offers an approach to fast, quantitative characterization of chromosomes based on morphological features. It can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. Time-resolved measurement of the fluorescence intensity along the chromosome axis can be registered simultaneously for two parameters when the chromosome passes perpendicularly through a narrowly focused laser beam combined with a detection slit in the image plane. So far, automated data analysis has been performed off-line on a PC. In its final configuration, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows electro-acoustical sorting of chromosomes of interest. There is great interest in agriculture in studying chromosome aberrations that influence litter size in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize the chromosomes of pigs; we present results for chromosome 1 and a translocation chromosome 6/15.
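
    As an illustration of how a slit-scan profile can be reduced to a morphological parameter, the sketch below locates the centromeric dip in a synthetic along-axis fluorescence profile and computes a centromeric index; this is a generic toy, not the Heidelberg instrument's analysis code:

```python
# Toy reduction of a slit-scan fluorescence profile to a centromeric index.
import numpy as np

def centromeric_index(profile: np.ndarray) -> float:
    """Fraction of integrated fluorescence on one side of the central dip."""
    interior = profile[len(profile) // 4 : 3 * len(profile) // 4]
    dip = len(profile) // 4 + int(np.argmin(interior))   # dip = centromere
    return profile[:dip].sum() / profile.sum()

# Two Gaussian 'arms' of unequal size give an index below 0.5
x = np.linspace(0.0, 1.0, 200)
prof = np.exp(-((x - 0.3) / 0.08) ** 2) + 2.0 * np.exp(-((x - 0.7) / 0.08) ** 2)
print(round(centromeric_index(prof), 2))   # ~0.33
```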

  7. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  10. Interplay effects in proton scanning for lung: a 4D Monte Carlo study assessing the impact of tumor and beam delivery parameters

    Science.gov (United States)

    Dowdell, S.; Grassberger, C.; Sharp, G. C.; Paganetti, H.

    2013-06-01

    Relative motion between a tumor and a scanning proton beam results in a degradation of the dose distribution (interplay effect). This study investigates the relationship between beam scanning parameters and the interplay effect, with the goal of finding parameters that minimize interplay. 4D Monte Carlo simulations of pencil beam scanning proton therapy treatments were performed using the 4DCT geometry of five lung cancer patients of varying tumor size (50.4–167.1 cc) and motion amplitude (2.9–30.1 mm). Treatments were planned assuming delivery in 35 × 2.5 Gy(RBE) fractions. The spot size, time to change the beam energy (τes), time required for magnet settling (τss), initial breathing phase, spot spacing, scanning direction, scanning speed, beam current and patient breathing period were varied for each of the five patients. Simulations were performed for a single fraction and an approximation of conventional fractionation. For the patients considered, the interplay effect could not be predicted using the superior-inferior motion amplitude alone. Larger spot sizes (σ ∼ 9–16 mm) were less susceptible to interplay, giving an equivalent uniform dose (EUD) of 99.0 ± 4.4% (1 standard deviation) in a single fraction compared to 86.1 ± 13.1% for smaller spots (σ ∼ 2–4 mm). The smaller spot sizes gave EUD values as low as 65.3% of the prescription dose in a single fraction. Reducing the spot spacing improved the target dose homogeneity. The initial breathing phase can have a significant effect on the interplay, particularly for shorter delivery times. No clear benefit was evident when scanning either parallel or perpendicular to the predominant axis of motion. Longer breathing periods decreased the EUD. In general, longer delivery times led to lower interplay effects. Conventional fractionation showed significant improvement in terms of interplay, giving an EUD of at least 84.7% and 100.0% of the prescription dose for the small and larger spot sizes respectively
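
    The EUD figures quoted above can be reproduced conceptually with Niemierko's generalized EUD; the voxel doses and the tissue-specific exponent below are assumptions for illustration:

```python
# Minimal generalized EUD (gEUD) sketch: (mean of d_i^a)^(1/a).
import numpy as np

def gEUD(dose_voxels: np.ndarray, a: float) -> float:
    """Generalized EUD; a < 0 for targets, pulling the value toward cold spots."""
    return float(np.mean(dose_voxels ** a) ** (1.0 / a))

doses = np.array([2.4, 2.5, 2.5, 2.6, 1.9])   # Gy(RBE), one cold voxel (assumed)
print(gEUD(doses, a=-10))                     # dominated by the cold spot
```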

  11. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  13. Effect of duration of scan acquisition on CT perfusion parameter values in primary and metastatic tumors in the lung

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Chaan S., E-mail: cng@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chandler, Adam G., E-mail: adam.chandler@mdanderson.org [Departments of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); CT research, GE Healthcare, Waukesha, Wisconsin (United States); Wei, Wei, E-mail: wwei@mdanderson.org [Departments of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Anderson, Ella F., E-mail: eanderson@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herron, Delise H., E-mail: dherron@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Kurzrock, Razelle, E-mail: rkurzrock@ucsd.edu [Departments of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Charnsangavej, Chusilp, E-mail: ccharn@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-10-01

    Objectives: To assess the effect of acquisition duration (T{sub acq}) and pre-enhancement set points (T{sub 1}) on computed tomography perfusion (CTp) parameter values in primary and metastatic tumors in the lung. Materials and methods: 24 lung CTp datasets (10 primary; 14 metastatic), acquired using a two-phase protocol spanning 125 s, in 12 patients with lung tumors, were analyzed by deconvolution modeling to yield tumor blood flow (BF), blood volume (BV), mean transit time (MTT), and permeability (PS) values. CTp analyses were undertaken for the reference dataset (i.e., T{sub 1} = t{sub 0}) with T{sub acq} varying from 12 to 125 s. This was repeated for shifts in T{sub 1} (±0.5 s, ±1.0 s, ±2.0 s relative to the reference at t{sub 0}). Resultant CTp values were plotted against T{sub acq}; values at 30 s, 50 s, 65 s and 125 s were compared using a linear mixed model. Results: All CTp parameter values were noticeably influenced by T{sub acq}, with generally less marked changes beyond 50 s, and with no difference in behavior between primary and secondary tumors. Apart from BV, which attained a plateau at approximately 50 s, the other three CTp parameters did not reach steady-state values within the available 125 s of data, with values at 30 s, 50 s and 65 s significantly different from 125 s (p < 0.004). Shifts in T{sub 1} also affected the CTp parameter values, with positive shifts having greater impact on CTp values than negative shifts. Conclusion: CTp parameter values derived from deconvolution modeling can be markedly affected by T{sub acq} and pre-enhancement set points. A 50 s acquisition may be adequate for BV, but longer than 125 s is probably required for reliable characterization of the other three CTp parameters.
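
    One common realization of the deconvolution modeling mentioned above is truncated-SVD deconvolution of the tissue curve by the arterial input. The sketch below uses toy curves and an assumed truncation threshold; it is a generic illustration, not the study's software:

```python
# Schematic truncated-SVD deconvolution: tissue = dt * (A @ R), where A is the
# lower-triangular convolution matrix built from the arterial input function.
import numpy as np

def tsvd_deconvolve(aif: np.ndarray, tissue: np.ndarray, dt: float,
                    thresh: float = 0.2) -> np.ndarray:
    """Recover the flow-scaled residue function R(t)."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > thresh * s[0], 1.0 / s, 0.0)   # drop small singular values
    return Vt.T @ (s_inv * (U.T @ tissue))

t = np.arange(0.0, 60.0, 1.0)
aif = np.exp(-((t - 15.0) / 5.0) ** 2)                        # toy arterial input
tissue = 0.5 * np.convolve(aif, np.exp(-t / 10.0))[: len(t)]  # flow 0.5, MTT 10 s
print(tsvd_deconvolve(aif, tissue, dt=1.0).max())             # ~0.5 (the 'flow')
```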

  14. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical model, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems, such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter list for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
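
    A small helper of the kind such comparisons rely on, assumed here rather than taken from the study, extracts the proton range as the depth of the distal 80%-of-maximum dose level (R80) from a PDD curve:

```python
# Extract R80 (distal 80% dose depth) from a percentage depth dose curve.
import numpy as np

def r80(depth_mm: np.ndarray, pdd: np.ndarray) -> float:
    """Depth of the distal 80%-of-maximum dose level, by linear interpolation."""
    i_max = int(np.argmax(pdd))
    level = 0.8 * pdd[i_max]
    d, p = depth_mm[i_max:], pdd[i_max:]
    j = int(np.argmax(p <= level))          # first distal point below the level
    return float(np.interp(level, [p[j], p[j - 1]], [d[j], d[j - 1]]))

z = np.linspace(0.0, 150.0, 301)
pdd = 100.0 * np.exp(-((z - 120.0) / 8.0) ** 2) + 0.2 * z   # toy Bragg-like curve
print(round(r80(z, pdd), 1))                # ~124.5 mm for this synthetic curve
```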

  16. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
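
    The binary-search component of such a procedure is easy to sketch. Below, send_at_rate() is a placeholder for the actual traffic-generator hook, simulated here with an arbitrary 740 Mbps lossless limit:

```python
# Binary search for the highest zero-loss forwarding rate (RFC 2544-style).

def send_at_rate(rate_mbps: float) -> float:
    """Placeholder traffic-generator hook returning the measured loss ratio.
    Simulated here as lossless below an assumed 740 Mbps device limit."""
    return 0.0 if rate_mbps <= 740.0 else 0.05

def zero_loss_rate(lo: float, hi: float, resolution: float = 1.0) -> float:
    """Highest offered load in [lo, hi] forwarded without loss, to `resolution`."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        if send_at_rate(mid) == 0.0:   # no loss: the peak lies at or above mid
            best, lo = mid, mid
        else:                          # loss observed: the peak lies below mid
            hi = mid
    return best

print(zero_loss_rate(0.0, 1000.0))     # converges near the simulated 740 Mbps
```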

  17. Benchmark calculations on the nuclear quadrupole-coupling parameters for open-shell molecules using non-relativistic and scalar-relativistic coupled-cluster methods

    International Nuclear Information System (INIS)

    Quantum-chemical computations of nuclear quadrupole-coupling parameters for 24 open-shell states of small molecules based on non-relativistic and spin-free exact two-component (SFX2C) relativistic equation-of-motion coupled-cluster (EOM-CC) as well as spin-orbital-based restricted open-shell Hartree-Fock coupled-cluster (ROHF-CC) methods are reported. Relativistic effects, the performance of the EOM-CC and ROHF-CC methods for treating electron correlation, as well as basis-set convergence have been carefully analyzed. Consideration of relativistic effects is necessary for accurate calculations on systems containing third-row (K-Kr) and heavier elements, as expected, and the SFX2C approach is shown to be a useful cost-effective option here. Further, it is demonstrated that the EOM-CC methods constitute flexible and accurate alternatives to the ROHF-CC methods in the calculations of nuclear quadrupole-coupling parameters for open-shell states

  18. Quality parameters of digital aerial survey and airborne laser scanning covering the entire area of the Czech Republic

    Directory of Open Access Journals (Sweden)

    Jiří Šíma

    2013-11-01

    The paper illustrates the development of digital aerial survey and digital elevation models covering the entire area of the Czech Republic at the beginning of the 21st century. It also presents some results of a systematic investigation of their quality parameters, achieved by the author in cooperation with the Department of Geomatics at the Faculty of Applied Sciences of the University of West Bohemia in Pilsen and the Land Survey Office.

  19. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  20. Optimisation of the CT parameters with evaluation of MDCT double-scan images in the planning of the dental implant treatment

    International Nuclear Information System (INIS)

    Background: The aim of the present study was optimisation of the examination parameters and evaluation of the reliability of MDCT double-scan images obtained with computer navigation for dental implant treatment. Material/Methods: Using an MDCT scanner SOMATOM Sensation (Siemens), CT images of a phantom were acquired with varying slice collimation (10 × 0.75 mm, 10 × 1.5 mm), slice thickness (0.75, 1, 2, 3, 5 mm) and pitch (0.5, 1, 1.5). Additionally, an analysis of various filters from H20f to H60f was performed. The study used a phantom of a human cadaver head. Qualitative analysis was done using Nobel Guide (Nobel Biocare, Sweden), assessing possible artefacts on the images and comparing measurements of the bone structure on all filters with the real image. Results: The quality of the phantom images was assessed as optimal for slice thicknesses of 0.75 and 1 mm. The use of various pitch values did not make a statistically significant difference to image quality. The application of various filters did not alter the parameters of the bone structure; however, the use of lower filters (H30f and H40f) had a beneficial effect on the quality of the 3D reconstruction. The arrangement of the 'window' parameters in CT seemed to have a greater influence on the measurement and evaluation of the bone structure. Conclusions: Slice collimation and slice thickness are the most important parameters in the selection of the optimal scan protocol. It is recommended to use the above parameter succession in postprocessing, with the application of various filters (H30f and H60f) at a stable arrangement of the 'window' in the CT examination. (authors)

  1. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  2. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
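
    The benchmark-dose machinery described above is easy to illustrate numerically. The sketch below assumes a log-logistic dose-response model with made-up parameters and solves the extra-risk equation for the BMD at a 10% BMR; the model form, parameter values, and search bracket are all illustrative assumptions, and a BMDL would additionally require a one-sided lower confidence limit (e.g., by profile likelihood or bootstrap).

        import numpy as np
        from scipy.optimize import brentq

        # Hypothetical log-logistic dose-response: P(d) = g + (1-g)/(1 + exp(-(a + b*ln d))).
        # g = background risk; a, b = made-up parameters for illustration only.
        g, a, b = 0.05, -3.0, 1.2

        def p(d):
            return g + (1.0 - g) / (1.0 + np.exp(-(a + b * np.log(d))))

        def extra_risk(d):
            # Extra risk over background, the quantity set equal to the BMR.
            return (p(d) - g) / (1.0 - g)

        BMR = 0.10
        bmd = brentq(lambda d: extra_risk(d) - BMR, 1e-6, 1e6)  # dose giving 10% extra risk
        print(f"BMD(10%) = {bmd:.2f} (arbitrary dose units)")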

  3. Risk Management with Benchmarking

    OpenAIRE

    Suleyman Basak; Alex Shapiro; Lucie Teplá

    2005-01-01

    Portfolio theory must address the fact that, in reality, portfolio managers are evaluated relative to a benchmark, and therefore adopt risk management practices to account for the benchmark performance. We capture this risk management consideration by allowing a prespecified shortfall from a target benchmark-linked return, consistent with growing interest in such practice. In a dynamic setting, we demonstrate how a risk-averse portfolio manager optimally under- or overperforms a target benchm...

  4. SPIDER - VII. The Central Dark Matter Content of Bright Early-Type Galaxies: Benchmark Correlations with Mass, Structural Parameters and Environment

    CERN Document Server

    Tortora, C; Napolitano, N R; de Carvalho, R R; Romanowsky, A J

    2012-01-01

    We analyze the central dark-matter (DM) content of $\sim 4,500$ massive ($M_\star \gtrsim 10^{10} \, M_\odot$), low-redshift ($z<0.1$), early-type galaxies (ETGs), with high-quality $ugrizYJHK$ photometry and optical spectroscopy from SDSS and UKIDSS. We estimate the "central" fraction of DM within the $K$-band effective radius, \Re. The main results of the present work are the following: (1) DM fractions increase systematically with both structural parameters (i.e. \Re, and S\'ersic index, $n$) and mass proxies (central velocity dispersion, stellar and dynamical mass), as in previous studies, and decrease with central stellar density. (2) All correlations involving DM fractions are caused by two fundamental ones with galaxy effective radius and central velocity dispersion. These correlations are independent of each other, so that ETGs populate a central-DM plane (DMP), i.e. a correlation among fraction of total-to-stellar mass, effective radius, and velocity dispersion, whose scatter along the total-to-stell...

  5. Development of a benchmarking methodology for evaluating oxidation ditch control strategies

    OpenAIRE

    Abusam, A.A.A.

    2001-01-01

    Keywords: wastewater, oxidation ditch, carrousel, modeling, activated sludge, ASM No. 1, oxygen transfer rate, aeration, parameter estimation, calibration, sensitivity analysis, uncertainty analysis, sensors, horizontal velocity, benchmark, benchmarking, control strategies, simulation. The purpose of this thesis was to develop a benchmarking methodology for evaluating control strategies for oxidation ditch wastewater treatment plants. A benchmark consists of a description of the plant layout,...

  6. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  7. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  8. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important; (2) will, that activists and issue entrepreneurs will carry the message forward; and (3) expertise, that benchmarks created can be defended as accurate representations of what is happening on the issue of concern. We contrast two types of benchmarking cycles where salience, will, and expertise...

  9. Liver Steatosis Assessed by Controlled Attenuation Parameter (CAP) Measured with the XL Probe of the FibroScan: A Pilot Study Assessing Diagnostic Accuracy.

    Science.gov (United States)

    Sasso, Magali; Audière, Stéphane; Kemgang, Astrid; Gaouar, Farid; Corpechot, Christophe; Chazouillères, Olivier; Fournier, Céline; Golsztejn, Olivier; Prince, Stéphane; Menu, Yves; Sandrin, Laurent; Miette, Véronique

    2016-01-01

    To assess liver steatosis, the controlled attenuation parameter (CAP; an estimate of the ultrasound attenuation at ∼3.5 MHz) is available with the M probe of the FibroScan. We report on the adaptation of the CAP for the FibroScan XL probe (center frequency 2.5 MHz) without modifying the range of values (100-400 dB/m). CAP validation was successfully performed on Field II simulations and on tissue-mimicking phantoms. In vivo performance was assessed in a cohort of 59 patients spanning the range of steatosis. In vivo reproducibility was good and similar with both probes. The area under the receiver operating characteristic curve was equal to 0.83/0.84 and 0.92/0.91 for the M/XL probes to detect >2% and >16% liver fat, respectively, as assessed by magnetic resonance imaging. Patients can now be assessed simultaneously for steatosis and fibrosis using the FibroScan, regardless of their morphology. PMID:26386476

  10. Remote sensing of ice crystal asymmetry parameter using multi-directional polarization measurements – Part 2: Application to the Research Scanning Polarimeter

    Directory of Open Access Journals (Sweden)

    B. van Diedenhoven

    2013-03-01

    A new method to retrieve ice cloud asymmetry parameters from multi-directional polarized reflectance measurements is applied to measurements of the airborne Research Scanning Polarimeter (RSP) obtained during the CRYSTAL-FACE campaign in 2002. The method assumes individual hexagonal ice columns and plates serve as proxies for more complex shapes and aggregates. The closest fit is searched in a look-up table of simulated polarized reflectances computed for cloud layers that contain individual, randomly oriented hexagonal columns and plates with a virtually continuous selection of aspect ratios and distortion. The asymmetry parameter, aspect ratio and distortion of the hexagonal particle that leads to the best fit with the measurements are considered the retrieved values. Two cases of thick convective clouds and two cases of thinner anvil cloud layers are analyzed. Median asymmetry parameters retrieved by the RSP range from 0.76 to 0.78, and are generally smaller than those currently assumed in most climate models and satellite retrievals. In all cases the measurements indicate roughened or distorted ice crystals, which is consistent with previous findings. Retrieved aspect ratios in three of the cases range from 0.9 to 1.6, indicating compact particles dominate the cloud-top shortwave radiation. Retrievals for the remaining case indicate plate-like ice crystals with aspect ratios around 0.3. The RSP retrievals are qualitatively consistent with the CPI images obtained in the same cloud layers. Retrieved asymmetry parameters are compared to those determined in situ by the Cloud Integrating Nephelometer (CIN). For two cases, the median values of asymmetry parameter retrieved by CIN and RSP agree within 0.01, while for the two other cases RSP asymmetry parameters are about 0.03–0.05 greater than those obtained by the CIN. Part of this bias might be explained by vertical variation of the asymmetry parameter or ice shattering on the CIN probe, or both.

  11. Motion Interplay as a Function of Patient Parameters and Spot Size in Spot Scanning Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Dowdell, Stephen [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Lomax, Antony [Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Sharp, Greg; Shackleford, James; Choi, Noah; Willers, Henning; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States)

    2013-06-01

    Purpose: To quantify the impact of respiratory motion on the treatment of lung tumors with spot scanning proton therapy. Methods and Materials: Four-dimensional Monte Carlo simulations were used to assess the interplay effect, which results from relative motion of the tumor and the proton beam, on the dose distribution in the patient. Ten patients with varying tumor sizes (2.6-82.3 cc) and motion amplitudes (3-30 mm) were included in the study. We investigated the impact of the spot size, which varies between proton facilities, and studied single fractions and conventionally fractionated treatments. The following metrics were used in the analysis: minimum/maximum/mean dose, target dose homogeneity, and 2-year local control rate (2y-LC). Results: Respiratory motion reduces the target dose homogeneity, with the largest effects observed for the highest motion amplitudes. Smaller spot sizes (σ ≈ 3 mm) are inherently more sensitive to motion, decreasing target dose homogeneity on average by a factor 2.8 compared with a larger spot size (σ ≈ 13 mm). Using a smaller spot size to treat a tumor with 30-mm motion amplitude reduces the minimum dose to 44.7% of the prescribed dose, decreasing modeled 2y-LC from 87.0% to 2.7%, assuming a single fraction. Conventional fractionation partly mitigates this reduction, yielding a 2y-LC of 71.6%. For the large spot size, conventional fractionation increases target dose homogeneity and prevents a deterioration of 2y-LC for all patients. No correlation with tumor volume is observed. The effect on the normal lung dose distribution is minimal: observed changes in mean lung dose and lung V20 are <0.6 Gy(RBE) and <1.7%, respectively. Conclusions: For the patients in this study, 2y-LC could be preserved in the presence of interplay using a large spot size and conventional fractionation. For treatments using smaller spot sizes and/or in the delivery of single fractions, interplay effects can lead to significant deterioration of the dose distribution.

  12. Motion Interplay as a Function of Patient Parameters and Spot Size in Spot Scanning Proton Therapy for Lung Cancer

    International Nuclear Information System (INIS)

    Purpose: To quantify the impact of respiratory motion on the treatment of lung tumors with spot scanning proton therapy. Methods and Materials: Four-dimensional Monte Carlo simulations were used to assess the interplay effect, which results from relative motion of the tumor and the proton beam, on the dose distribution in the patient. Ten patients with varying tumor sizes (2.6-82.3 cc) and motion amplitudes (3-30 mm) were included in the study. We investigated the impact of the spot size, which varies between proton facilities, and studied single fractions and conventionally fractionated treatments. The following metrics were used in the analysis: minimum/maximum/mean dose, target dose homogeneity, and 2-year local control rate (2y-LC). Results: Respiratory motion reduces the target dose homogeneity, with the largest effects observed for the highest motion amplitudes. Smaller spot sizes (σ ≈ 3 mm) are inherently more sensitive to motion, decreasing target dose homogeneity on average by a factor 2.8 compared with a larger spot size (σ ≈ 13 mm). Using a smaller spot size to treat a tumor with 30-mm motion amplitude reduces the minimum dose to 44.7% of the prescribed dose, decreasing modeled 2y-LC from 87.0% to 2.7%, assuming a single fraction. Conventional fractionation partly mitigates this reduction, yielding a 2y-LC of 71.6%. For the large spot size, conventional fractionation increases target dose homogeneity and prevents a deterioration of 2y-LC for all patients. No correlation with tumor volume is observed. The effect on the normal lung dose distribution is minimal: observed changes in mean lung dose and lung V20 are <0.6 Gy(RBE) and <1.7%, respectively. Conclusions: For the patients in this study, 2y-LC could be preserved in the presence of interplay using a large spot size and conventional fractionation. For treatments using smaller spot sizes and/or in the delivery of single fractions, interplay effects can lead to significant deterioration of the dose distribution.

  13. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    dynamo models on XSEDE TACC Stampede, and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with much finer spatial resolutions to investigate computational capability under the closer condition to the Earth's outer core. We compare the results of the accuracy benchmark and performance benchmark tests by various codes and discuss characteristics of the simulation methods for geodynamo problems.

  14. Reducing CT radiation dose with iterative reconstruction algorithms: The influence of scan and reconstruction parameters on image quality and CTDIvol

    Energy Technology Data Exchange (ETDEWEB)

    Klink, Thorsten, E-mail: klink_t1@ukw.de [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland); University of Würzburg, Institute of Diagnostic and Interventional Radiology, Oberdürrbacher Str. 6, 97080 Würzburg (Germany)]; Obmann, Verena, E-mail: verena.obmann@insel.ch [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland)]; Heverhagen, Johannes, E-mail: johannes.heverhagen@insel.ch [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland)]; Stork, Alexander, E-mail: a.stork@roentgeninstitut.de [Roentgeninstitut Duesseldorf, Kaiserswerterstrasse 89, 40476 Duesseldorf (Germany)]; Adam, Gerhard, E-mail: g.adam@uke.de [University Medical Center Hamburg Eppendorf, Department of Diagnostic and Interventional Radiology, Martinistrasse 52, 20246 Hamburg (Germany)]; Begemann, Philipp, E-mail: p.begemann@roentgeninstitut.de [Roentgeninstitut Duesseldorf, Kaiserswerterstrasse 89, 40476 Duesseldorf (Germany)]

    2014-09-15

    Highlights: • Iterative reconstruction (IR) and filtered back projection (FBP) were compared. • CT image noise was reduced by 12.4%–52.2% using IR in comparison to FBP. • IR did not affect high- and low-contrast resolution. • CTDIvol was reduced by 26–50% using hybrid IR at comparable image quality levels. • IR produced good to excellent image quality in patients. - Abstract: Objectives: In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation doses. Additionally, patient CT studies were evaluated to confirm our phantom results. Methods: Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan®) were performed with varying tube voltages (80–140 kV) and currents (30–200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing iterations, and soft and sharp kernels. Further, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation doses of both phantom and patient scans. Results: In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independent of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, but improved in scans with doses <5 mGy, although object detectability generally decreased with the lowering of exposure. At comparable image quality levels, CTDIvol was reduced by 26–50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased.

  15. A Voxel-Based Method for Automated Identification and Morphological Parameters Estimation of Individual Street Trees from Mobile Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Hongxing Liu

    2013-01-01

    As an important component of urban vegetation, street trees play an important role in maintenance of environmental quality, aesthetic beauty of urban landscape, and social service for inhabitants. Acquiring accurate and up-to-date inventory information for street trees is required for urban horticultural planning, and municipal urban forest management. This paper presents a new Voxel-based Marked Neighborhood Searching (VMNS) method for efficiently identifying street trees and deriving their morphological parameters from Mobile Laser Scanning (MLS) point cloud data. The VMNS method consists of six technical components: voxelization, calculating values of voxels, searching and marking neighborhoods, extracting potential trees, deriving morphological parameters, and eliminating pole-like objects other than trees. The method is validated and evaluated through two case studies. The evaluation results show that the completeness and correctness of our method for street tree detection are over 98%. The derived morphological parameters, including tree height, crown diameter, diameter at breast height (DBH), and crown base height (CBH), are in a good agreement with the field measurements. Our method provides an effective tool for extracting various morphological parameters for individual street trees from MLS point cloud data.
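
    The voxelization step at the heart of the VMNS pipeline is straightforward to sketch. The fragment below covers only the first two components (voxelization and marking neighborhoods via connected components); the function names, the 0.25 m voxel size, and the random stand-in point cloud are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy import ndimage

        def voxelize(points, voxel=0.25):
            """Map an (N, 3) MLS point cloud onto a grid of `voxel`-metre cells."""
            origin = points.min(axis=0)
            idx = np.floor((points - origin) / voxel).astype(int)
            grid = np.zeros(idx.max(axis=0) + 1, dtype=np.int32)
            np.add.at(grid, tuple(idx.T), 1)   # value of a voxel = point count
            return grid, origin

        points = np.random.rand(100000, 3) * [50.0, 50.0, 12.0]  # stand-in for real MLS data
        grid, origin = voxelize(points)

        # Mark neighbourhoods: 6-connected clusters of occupied voxels become
        # candidate objects; tall clusters are potential street trees.
        labels, n = ndimage.label(grid > 0)
        for sl in ndimage.find_objects(labels):
            height_m = (sl[2].stop - sl[2].start) * 0.25  # crude per-cluster height [m]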

  16. The Snowmass points and slopes: Benchmarks for SUSY searches

    International Nuclear Information System (INIS)

    The ''Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ''Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals.

  17. The Snowmass points and slopes: benchmarks for SUSY searches

    International Nuclear Information System (INIS)

    The ''Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ''Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals. (orig.)

  18. The Snowmass points and slopes : benchmarks for SUSY searches

    International Nuclear Information System (INIS)

    The ''Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ''Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals.

  19. The Snowmass Points and Slopes: benchmarks for SUSY searches

    International Nuclear Information System (INIS)

    The ''Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ''Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals. (orig.)

  20. The Snowmass Points and Slopes: Benchmarks for SUSY Studies

    International Nuclear Information System (INIS)

    The ''Snowmass Points and Slopes'' (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 ''Snowmass Workshop on the Future of Particle Physics'' as a consensus based on different existing proposals.
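
    The five SPS records above index the same proposal in different databases. Concretely, each SPS point fixes a handful of high-scale parameters from which the full sparticle spectrum follows. The sketch below encodes the commonly quoted inputs for SPS 1a as a small data structure; a spectrum generator such as SoftSUSY or SPheno would turn these into physical masses, and the encoding rather than any convention detail is the point of the example.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class MSugraPoint:
            """High-scale inputs that pin down an mSUGRA-type benchmark point."""
            name: str
            m0: float        # common scalar mass at the GUT scale [GeV]
            m12: float       # common gaugino mass [GeV]
            a0: float        # common trilinear coupling [GeV]
            tan_beta: float  # ratio of the two Higgs vacuum expectation values
            sign_mu: int     # sign of the supersymmetric Higgs mass parameter

        # SPS 1a, the "typical" mSUGRA point of the proposal (values as commonly quoted).
        SPS1A = MSugraPoint("SPS 1a", m0=100.0, m12=250.0, a0=-100.0,
                            tan_beta=10.0, sign_mu=+1)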

  1. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a series of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult to...

  2. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  3. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  4. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
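
    One of the recommendations above, code verification against manufactured solutions, is compact enough to demonstrate end to end. The sketch below manufactures a source term for -u''(x) = f(x) on [0,1] from the chosen exact solution u = sin(pi x), solves with second-order central differences, and checks that the observed convergence order approaches 2; the equation, grid sizes, and solver are illustrative choices, not those of the report.

        import numpy as np

        def solve_and_error(n):
            """Solve -u'' = f on [0,1] with u(0)=u(1)=0 and return the max-norm error."""
            x = np.linspace(0.0, 1.0, n + 1)
            h = 1.0 / n
            f = np.pi**2 * np.sin(np.pi * x[1:-1])      # manufactured from u = sin(pi x)
            A = (np.diag(2.0 * np.ones(n - 1))
                 - np.diag(np.ones(n - 2), 1)
                 - np.diag(np.ones(n - 2), -1)) / h**2  # central-difference stencil
            u = np.linalg.solve(A, f)
            return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

        errors = [solve_and_error(n) for n in (16, 32, 64, 128)]
        orders = [np.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
        print(orders)   # each entry should be close to the formal order, 2.0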

  5. Optimization of scanning parameters in children CT examination

    Institute of Scientific and Technical Information of China (English)

    李大伟; 周献锋; 杨春勇; 王进; 涂彧; 余宁乐

    2014-01-01

    Objective: To reduce the radiation dose to children from CT scanning through proper adjustment of milliampere-seconds (mAs) and scan lengths, with a view to learning the relationship between scanning conditions and radiation dose. Methods: The differences in the main scanning parameters used for head, chest and abdomen multi-detector CT examinations of paediatric patients (<1 year old, 1-5 years old, 6-10 years old, 11-15 years old) were compared across seven hospitals in Jiangsu province. CT dose index (CTDI) and dose-length product (DLP) were obtained using a standard paediatric dose phantom (16 cm diameter) under the same scanning conditions. Effective doses (E) for the different body regions scanned were estimated after modification by empirical weighting factors. Statistical analyses of mAs, scan lengths and DLP were performed with SPSS 16.0 software. The differences in radiation dose due to the choice of scanning conditions were compared between two typical hospitals. Results: The mean effective doses to paediatric patients during head, chest and abdomen CT scanning were 2.46, 5.69 and 11.86 mSv, respectively. DLP was correlated positively with mAs and scan length (head, chest and abdomen examinations: r = 0.81, 0.81, 0.92; P < 0.05). Owing to the higher mAs used, the effective doses from chest and abdomen CT examinations in all age groups were higher than those reported in the German study by Galanski et al. Owing to the larger scan lengths used in abdominal examinations in all age groups, the corresponding effective doses were the highest. Conclusions: Reasonably reducing the scan length and mAs during CT scanning can lower children's radiation risk from CT without affecting clinical diagnosis.
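
    The dose arithmetic behind this record is simple to show. Effective dose is commonly estimated from the scanner-reported DLP as E ≈ k × DLP, with k a body-region conversion coefficient; the sketch below uses the widely quoted adult European-guideline coefficients purely for illustration. Paediatric coefficients are larger and age-dependent, which is exactly why studies such as the one above apply their own weighting factors.

        # Region-specific conversion coefficients k in mSv per mGy*cm (adult values
        # commonly quoted in European CT quality criteria; illustrative only).
        K_ADULT = {"head": 0.0021, "chest": 0.014, "abdomen": 0.015}

        def effective_dose_msv(dlp_mgy_cm: float, region: str) -> float:
            """Estimate effective dose E ~ k * DLP for a given body region."""
            return K_ADULT[region] * dlp_mgy_cm

        # Example: a chest scan reporting DLP = 400 mGy*cm maps to about 5.6 mSv.
        print(effective_dose_msv(400.0, "chest"))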

  6. Simulation benchmarks for low-pressure plasmas: capacitive discharges

    CERN Document Server

    Turner, M M; Donko, Z; Eremin, D; Kelly, S J; Lafleur, T; Mussenbrock, T

    2012-01-01

    Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of relevant errors, and which can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes. We show that the results of these simulations are statistically indistinguishable, within bounds of uncertainty that we define. We therefore claim that the results of these simulations represent strong benchmarks, that can be used as a basis for evaluating the accuracy of other codes. These other codes could include other approaches than particle-in-cell simulations, where benchmarking could exa...

  7. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    Science.gov (United States)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  8. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    Science.gov (United States)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-06-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  9. Automated tracking of quantitative parameters from single line scanning of vocal folds: a case study of the 'messa di voce' exercise.

    Science.gov (United States)

    Dejonckere, Philippe H; Lebacq, Jean; Bocchi, Leonardo; Orlandi, Silvia; Manfredi, Claudia

    2015-04-01

    This article presents a novel application of the 'single line scanning' of the vocal fold vibrations (kymography) in singing pedagogy, particularly in a specific technical voice exercise: the 'messa di voce'. It aims at giving the singer relevant and valid short-term feedback. A user-friendly automatic analysis program makes possible a precise, immediate quantification of the essential physiological parameters characterizing the changes in glottal impedance, concomitant with the progressive increase and decrease of the lung pressure. The data provided by the program show a strong correlation with the manual measurements. Additional measurements such as subglottic pressure and flow glottography by inverse filtering can be meaningfully correlated with the data obtained from the kymographic images. PMID:24456119

  10. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  11. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  12. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  13. Comparison between FDG Uptake and Clinicopathologic and Immunohistochemical Parameters in Pre-operative PET/CT Scan of Primary Gastric Carcinoma

    International Nuclear Information System (INIS)

    The purpose of this study was to find out which clinicopathologic or immunohistochemical parameters may affect FDG uptake of the primary tumor in PET/CT scans of gastric carcinoma patients. Eighty-nine patients with stomach cancer who underwent pre-operative FDG PET/CT scans were included. In cases with perceptible FDG uptake in the primary tumor, the maximum standardized uptake value (SUVmax) was calculated. Clinicopathologic results such as depth of invasion (T stage), tumor size, lymph node metastasis, tumor differentiation and Lauren's classification, and immunohistochemical markers such as Ki-67 index and expression of p53, EGFR, Cathepsin D, c-erb-B2 and COX-2, were reviewed. Nineteen out of 89 gastric carcinomas showed imperceptible FDG uptake on PET/CT images. In cases with perceptible FDG uptake in the primary tumor, SUVmax was significantly higher in T2, T3 and T4 tumors than in T1 tumors (5.8±3.1 vs. 3.7±2.1, p=0.002). SUVmax of large tumors (≥3 cm) was also significantly higher than that of small ones (<3 cm) (5.7±3.2 vs. 3.7±2.0, p=0.002). The intestinal types of gastric carcinomas according to Lauren showed higher FDG uptake than the non-intestinal types (5.4±2.8 vs. 3.7±1.3, p=0.003). SUVmax differed significantly between the p53-positive and p53-negative groups (6.0±2.8 vs. 4.4±3.0, p=0.035). No significant differences were found for presence of LN metastasis, tumor differentiation, Ki-67 index, or expression of EGFR, Cathepsin D, c-erb-B2 and COX-2. T stage of gastric carcinoma influenced the detectability of gastric cancer on FDG PET/CT scans. When a gastric carcinoma was perceptible on the PET/CT scan, T stage, size of the primary tumor, Lauren's classification and p53 expression were related to the degree of FDG uptake in the primary tumor.
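
    For readers outside nuclear medicine, the SUVmax values quoted above come from a simple normalisation: voxel activity concentration divided by injected activity per unit body mass (tissue density taken as about 1 g/mL), usually after decay-correcting the injected activity to scan time. A minimal sketch, with illustrative numbers and the F-18 physical half-life:

        import math

        F18_HALF_LIFE_MIN = 109.8   # physical half-life of F-18 [min]

        def suv(conc_kbq_per_ml, injected_mbq, weight_kg, minutes_post_injection=0.0):
            """Standardized uptake value: tissue concentration over injected dose per gram."""
            decayed_mbq = injected_mbq * math.exp(
                -math.log(2.0) * minutes_post_injection / F18_HALF_LIFE_MIN)
            # kBq/mL divided by kBq/g; 1 mL of tissue is taken to weigh ~1 g.
            return conc_kbq_per_ml / (decayed_mbq * 1000.0 / (weight_kg * 1000.0))

        # E.g. 5 kBq/mL in the hottest voxel, 370 MBq injected, 70 kg, scanned at 60 min:
        print(round(suv(5.0, 370.0, 70.0, minutes_post_injection=60.0), 2))  # ~1.38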

  14. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that were selected to be the pillars for the calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  15. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  16. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  17. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  18. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  19. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  20. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...
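
    Since several benchmarking records above lean on Data Envelopment Analysis, a minimal input-oriented CCR efficiency computation is worth sketching. For each decision-making unit (DMU) o it solves the small linear program min theta subject to X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0. The input/output matrices below are invented toy data, and a regulatory application would add the data validation and outlier screening the abstract mentions.

        import numpy as np
        from scipy.optimize import linprog

        # Toy data: rows are input/output types, columns are DMUs (invented numbers).
        X = np.array([[2.0, 3.0, 4.0, 5.0],
                      [3.0, 1.0, 2.0, 4.0]])   # two inputs
        Y = np.array([[1.0, 1.0, 2.0, 2.0]])   # one output

        def ccr_efficiency(o):
            """Input-oriented CCR score; decision variables are [theta, lam_1..lam_n]."""
            n = X.shape[1]
            c = np.r_[1.0, np.zeros(n)]                            # minimise theta
            A_ub = np.vstack([np.c_[-X[:, o], X],                  # X@lam - theta*x_o <= 0
                              np.c_[np.zeros(Y.shape[0]), -Y]])    # -Y@lam <= -y_o
            b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] + [(0.0, None)] * n)
            return res.fun                                         # theta = 1 means efficient

        print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])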

  1. Accelerator shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, H.; Ban, S.; Nakamura, T. [and others]

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  2. New LHC benchmarks for the CP-conserving two-Higgs-doublet model

    International Nuclear Information System (INIS)

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z2-symmetry by means of a new ''hybrid'' basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parameters. Using the hybrid basis, we present numerical scans of the 2HDM parameter space where we survey available parameter regions and analyze model constraints. From these results, we define a number of benchmark scenarios that capture different aspects of non-standard Higgs phenomenology that are of interest for future LHC Higgs searches. (orig.)

  3. New LHC benchmarks for the CP-conserving two-Higgs-doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Haber, Howard E. [University of California, Santa Cruz Institute for Particle Physics, Santa Cruz, CA (United States); Staal, Oscar [Stockholm University, Department of Physics, The Oskar Klein Centre, Stockholm (Sweden)

    2015-10-15

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z2-symmetry by means of a new ''hybrid'' basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parameters. Using the hybrid basis, we present numerical scans of the 2HDM parameter space where we survey available parameter regions and analyze model constraints. From these results, we define a number of benchmark scenarios that capture different aspects of non-standard Higgs phenomenology that are of interest for future LHC Higgs searches. (orig.)
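
    A parameter scan of the kind these two records describe reduces, mechanically, to iterating over a grid of hybrid-basis inputs and keeping the points that survive the constraints. The skeleton below is a deliberately toy version: the SM-like Higgs mass is implicitly fixed at 125 GeV, the scanned ranges are arbitrary, and passes_constraints is a stand-in predicate (a real scan would delegate to spectrum and constraint codes rather than the alignment-limit heuristic used here).

        import numpy as np
        from itertools import product

        # Toy scan grid over three of the hybrid-basis inputs (ranges are illustrative).
        MH   = np.linspace(150.0, 600.0, 10)   # mass of the second CP-even Higgs [GeV]
        TANB = np.linspace(1.0, 50.0, 10)      # ratio of Higgs vacuum expectation values
        CBA  = np.linspace(-0.3, 0.3, 13)      # cos(beta - alpha), fixing the hVV coupling

        def passes_constraints(mH, tanb, cba):
            # Stand-in for real constraint checks: demand approximate alignment
            # (SM-like hVV coupling), more strictly so for heavier spectra.
            return abs(cba) < 0.25 * (200.0 / mH)

        allowed = [(mH, tb, cba)
                   for mH, tb, cba in product(MH, TANB, CBA)
                   if passes_constraints(mH, tb, cba)]
        print(f"{len(allowed)} of {MH.size * TANB.size * CBA.size} points survive")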

  4. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  5. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  6. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  7. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Two benchmark problems for the WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor Keff, the natural boron concentration Cβ, and the relative power distribution Kq, obtained with the code package, are presented. (authors). 5 refs., 3 tabs

  8. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    More than 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The benchmark calculations reported here are part of an ongoing multiyear, multiperson effort to benchmark version 4 of the MCNP code. MCNP is a general-purpose, three-dimensional, continuous-energy Monte Carlo neutron, photon, and electron transport code. It is used around the world for many applications including aerospace, oil-well logging, physics experiments, criticality safety, reactor analysis, medical imaging, defense applications, accelerator design, radiation hardening, radiation shielding, health physics, fusion research, and education. The first phase of the benchmark project consisted of analytic and photon problems. The second phase consists of the ENDF/B-V neutron problems reported in this paper and in more detail in the comprehensive report. A cooperative program being carried out at General Electric, San Jose, consists of light water reactor benchmark problems. A subsequent phase focusing on electron problems is planned.

  9. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  10. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking exercise on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As a result, relevant parameters describing the performance and costs of radiation protection were acquired, compiled, and depicted in figures in order to make these data comparable. (orig.)

  11. Determination of crystallization kinetics parameters of a Li1.5Al0.5Ge1.5(PO4)3 (LAGP) glass by differential scanning calorimetry

    Directory of Open Access Journals (Sweden)

    A. M. Rodrigues

    2013-01-01

    Crystallization kinetics parameters of a stoichiometric glass with the composition Li1.5Al0.5Ge1.5(PO4)3 were investigated by subjecting parallelepipedal samples (3 × 3 × 1.5 mm) to heat treatment in a differential scanning calorimeter at different heating rates (3, 5, 8 and 10 °C/min). The data were analyzed using Ligero's and Kissinger's methods to determine the activation energy (E) of crystallization, which yielded, respectively, E = 415 ± 37 kJ/mol and 378 ± 19 kJ/mol. Ligero's method was also employed to calculate the Avrami coefficient (n), which was found to be n = 3.0. A second set of samples was heat-treated in a tubular furnace at temperatures above the glass transition temperature, Tg, to induce crystallization. X-ray diffraction analysis of these samples indicated the presence of LiGe2(PO4)3, which displays a NASICON-type structure. An analysis by optical microscopy revealed the presence of spherical crystals located primarily in the volume, in agreement with the crystallization mechanism predicted by the Avrami coefficient.
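
    The Kissinger analysis used above is a one-line linear fit once the DSC peak temperatures are in hand: ln(beta/Tp^2) is plotted against 1/Tp, and the slope equals -E/R. In the sketch below the heating rates are the paper's, but the peak temperatures are hypothetical placeholders chosen only so the fitted E lands near the reported range; substitute measured values in practice.

        import numpy as np

        R = 8.314                                      # gas constant [J/(mol K)]
        beta = np.array([3.0, 5.0, 8.0, 10.0])         # heating rates [K/min]
        Tp = np.array([883.0, 891.0, 899.0, 903.0])    # hypothetical DSC peak temps [K]

        # Kissinger plot: ln(beta/Tp^2) versus 1/Tp is linear with slope -E/R.
        slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
        E = -slope * R
        print(f"E = {E / 1000:.0f} kJ/mol")            # ~384 kJ/mol with these placeholders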

  12. Spatial and optical parameters of contrails in the vortex and dispersion regime determined by means of a ground-based scanning lidar

    Energy Technology Data Exchange (ETDEWEB)

    Freudenthaler, V.; Homburg, F.; Jaeger, H. [Fraunhofer-Inst. fuer Atmosphaerische Umweltforschung (IFU), Garmisch-Partenkirchen (Germany)

    1997-12-31

    The spatial growth of individual condensation trails (contrails) of commercial aircraft in the time range from 15 s to 60 min behind the aircraft is investigated by means of a ground-based scanning backscatter lidar. The growth in width is mainly governed by wind shear and varies between 18 m/min and 140 m/min. The growth of the cross-section varies between 3500 m²/min and 25000 m²/min. These values are in agreement with results of model calculations and former field measurements. The vertical growth is often limited by the boundaries of the humid layer at flight level, but values up to 18 m/min were observed. Optical parameters like depolarization, optical depth and lidar ratio, i.e. the extinction-to-backscatter ratio, have been retrieved from the measurements at a wavelength of 532 nm. The linear depolarization rises from values as low as 0.06 for a young contrail (10 s old) to values around 0.5, typical for aged contrails. The latter indicates the transition from non-crystalline to crystalline particles in persistent contrails within a few minutes. The scatter of depolarization values measured in individual contrails is narrow, independent of the contrail's age, and suggests a rather uniform growth of the particles inside a contrail. (author) 18 refs.

  13. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  14. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  15. Remote Sensing Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.

    Piscataway, NJ : IEEE Press, 2012, s. 1-4. ISBN 978-1-4673-4960-4. [IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS). Tsukuba Science City (JP), 11.11.2012] R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant ostatní: CESNET(CZ) 409/2011 Keywords : remote sensing * segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/mikes-remote sensing segmentation benchmark.pdf

  16. Size-dependent scanning parameters (kVp and mAs) for photon-counting spectral CT system in pediatric imaging: simulation study

    Science.gov (United States)

    Chen, Han; Danielsson, Mats; Xu, Cheng

    2016-06-01

    We are developing a photon-counting spectral CT detector with a small pixel size of 0.4 × 0.5 mm², offering a potential advantage for better visualization of small structures in pediatric patients. The purpose of this study is to determine the patient-size-dependent scanning parameters (kVp and mAs) for pediatric CT in two imaging cases: adipose imaging and iodinated blood imaging. Cylindrical soft-tissue phantoms of diameters between 10 and 25 cm were used to mimic patients of different ages from 0 to 15 y. For adipose imaging, a 5 mm diameter adipose sphere was assumed as an imaging target, while in the case of iodinated imaging, an iodinated blood sphere of 1 mm in diameter was assumed. By applying the geometry of a commercial CT scanner (GE Lightspeed VCT), simulations were carried out to calculate the detectability index, d′2, with tube potentials varying from 40 to 140 kVp. The optimal kVp for each phantom in each imaging case was determined such that the dose-normalized detectability index, d′2/dose, is maximized. With the assumption that the detectability index in pediatric imaging is required to be the same as in typical adult imaging, the value of mAs at the optimal kVp for each phantom was selected to achieve a reference detectability index that was obtained by scanning an adult phantom (30 cm in diameter) in a typical adult CT procedure (120 kVp and 200 mAs) using a modeled energy-integrating system. For adipose imaging, the optimal kVps are 50, 60, 80, and 120 kVp, respectively, for phantoms of 10, 15, 20, and 25 cm in diameter. The corresponding mAs values required to achieve the reference detectability index are only 9%, 23%, 24%, and 54% of the mAs that is used for adult patients at 120 kVp, for 10, 15, 20, and 25 cm diameter phantoms, respectively. In the case of iodinated imaging, a tube potential of 60 kVp was found optimal for all phantoms investigated, and the mAs values required to achieve the reference detectability
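
    The selection logic in the abstract can be sketched in a few lines: pick the kVp that maximizes d′2 per unit dose, then scale the mAs so that d′2 reaches the adult reference value (d′2 is taken to scale linearly with mAs in the quantum-noise limit). All numbers below are placeholders, not the study's simulation results.

        import numpy as np

        kvp = np.array([40, 50, 60, 80, 100, 120, 140])
        d2_per_mas = np.array([0.8, 1.4, 1.9, 2.6, 3.0, 3.3, 3.4])      # hypothetical d'^2 per mAs
        dose_per_mas = np.array([0.2, 0.35, 0.55, 1.0, 1.5, 2.1, 2.8])  # hypothetical dose per mAs

        i_opt = np.argmax(d2_per_mas / dose_per_mas)    # kVp maximizing d'^2/dose
        d2_ref = 500.0                                  # hypothetical adult reference d'^2
        mas_required = d2_ref / d2_per_mas[i_opt]       # linear scaling of d'^2 with mAs
        print(f"optimal kVp = {kvp[i_opt]}, required mAs = {mas_required:.0f}")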

  17. Aerodynamic Benchmarking of the DeepWind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the...... symmetric NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  18. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time, as found under real-world islands. An error analysis gave appropriate spatial and temporal discretizations of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was previously lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
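
    The reported sensitivity of the interface to density differences is the effect captured, in its simplest static form, by the classical Ghyben-Herzberg relation; the numbers below are illustrative and are not part of the benchmark itself.

        # Ghyben-Herzberg estimate: interface depth below sea level
        # z = rho_f / (rho_s - rho_f) * h, h = water-table head above sea level.
        rho_f, rho_s = 1000.0, 1025.0  # kg/m^3, fresh and salt water
        h = 0.02                       # m, head on the sand-tank (meter) scale
        z = rho_f / (rho_s - rho_f) * h
        print(f"interface depth: {z:.2f} m below sea level (about 40*h)")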

  19. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  20. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  1. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few-channel neutron and many-channel gamma-ray and neutron spectroscopy problems are presented.

  2. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management individuals and groups to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine what performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.
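
    A minimal sketch of the benchmarking idea described here, assuming synthetic data: regress a profitability ratio on observable firm-level drivers, use the fitted value as the firm's benchmark, and read the residual as performance relative to that benchmark.

        import numpy as np

        rng = np.random.default_rng(1)
        size, market_power = rng.normal(size=(2, 200))   # synthetic firm characteristics
        roa = 0.05 + 0.02 * size + 0.03 * market_power + rng.normal(0, 0.01, 200)

        X = np.column_stack([np.ones(200), size, market_power])
        coef, *_ = np.linalg.lstsq(X, roa, rcond=None)   # OLS fit
        benchmark = X @ coef                             # expected performance
        relative = roa - benchmark                       # + above / - below benchmark
        print(coef.round(3), relative[:5].round(4))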

  3. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    The Benchmarking Public Procurement 2016 report aims to develop actionable indicators to help countries identify and monitor policies and regulations that affect how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  4. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and we outline NAS's future plans for the NPB.
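
    The sustained performance-per-dollar figure of merit mentioned here is a simple ratio; the sketch below uses placeholder prices and throughputs, not the paper's measured NPB results.

        # Hypothetical (sustained Mflop/s, list price $) pairs for three systems.
        systems = {"system-A": (5200.0, 2.5e6),
                   "system-B": (3100.0, 1.2e6),
                   "system-C": (7900.0, 4.0e6)}
        for name, (mflops, price) in systems.items():
            print(f"{name}: {mflops / (price / 1e6):.0f} Mflop/s per M$")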

  5. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is then treated, after which...

  6. VHTRC temperature coefficient benchmark problem

    International Nuclear Information System (INIS)

    As an activity of the IAEA Coordinated Research Programme, a benchmark problem is proposed for verification of neutronic calculation codes for a low-enriched-uranium-fuel high temperature gas-cooled reactor. Two problems are given on the basis of heating experiments at the VHTRC, which is a pin-in-block type critical assembly loaded mainly with 4% enriched uranium coated particle fuel. One problem, VH1-HP, asks to calculate the temperature coefficient of reactivity from the subcritical reactivity values at five temperature steps between room temperature, where the assembly is nearly critical, and 200°C. The other problem, VH1-HC, asks to calculate the effective multiplication factor of nearly critical loading cores at room temperature and 200°C. Both problems further ask to calculate cell parameters such as migration area and spectral indices. Experimental results corresponding to the main calculation items are also listed for comparison. (author)
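
    For the VH1-HP-type problem, the temperature coefficient follows from finite differences of reactivity over the temperature steps; a small sketch with hypothetical values (reactivity can first be obtained from k-eff as rho = (k - 1)/k):

        # Hypothetical subcritical reactivity (dk/k) at five temperatures (deg C).
        T = [25.0, 70.0, 115.0, 160.0, 200.0]
        rho = [0.000, -0.012, -0.026, -0.041, -0.055]

        for (t1, r1), (t2, r2) in zip(zip(T, rho), zip(T[1:], rho[1:])):
            alpha = (r2 - r1) / (t2 - t1)   # temperature coefficient of reactivity
            print(f"{t1:5.0f}-{t2:3.0f} C: alpha = {alpha:+.2e} (dk/k)/deg C")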

  7. Texture Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    Los Alamitos : IEEE Press, 2008, s. 2933-2936. ISBN 978-1-4244-2174-9. [19th International Conference on Pattern Recognition. Tampa (US), 07.12.2008-11.12.2008] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594; GA ČR GA102/08/0593 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * image segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/haindl-texture segmentation benchmark.pdf

  8. Radiography benchmark 2014

    International Nuclear Information System (INIS)

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available, considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  9. Nuclear Scans

    Science.gov (United States)

    Nuclear scans use radioactive substances to see structures and functions inside your body. They use a special ... images. Most scans take 20 to 45 minutes. Nuclear scans can help doctors diagnose many conditions, including ...

  10. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperfor...
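
    A learning-rate sweep of the kind reported can be reproduced with any deep-learning framework; the sketch below uses PyTorch on a synthetic sequence task as a stand-in for the MNIST/UW3 setups (all sizes and rates are illustrative).

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        X = torch.randn(256, 28, 28)              # (batch, time, features)
        y = (X.mean(dim=(1, 2)) > 0).long()       # toy binary labels

        class SeqClassifier(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(28, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)

            def forward(self, x):
                out, _ = self.lstm(x)
                return self.head(out[:, -1])      # classify from last time step

        for lr in (1e-3, 3e-3, 1e-2, 3e-2):       # the swept hyperparameter
            model, loss_fn = SeqClassifier(), nn.CrossEntropyLoss()
            opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
            for _ in range(50):
                opt.zero_grad()
                loss = loss_fn(model(X), y)
                loss.backward()
                opt.step()
            acc = (model(X).argmax(1) == y).float().mean().item()
            print(f"lr={lr:g}: loss={loss.item():.3f}, train acc={acc:.2f}")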

  11. Texture Fidelity Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Kudělka, Miloš

    Los Alamitos, USA: IEEE Computer Society CPS, 2014. ISBN 978-1-4799-7971-4. [International Workshop on Computational Intelligence for Multimedia Understanding 2014 (IWCIM). Paris (FR), 01.11.2014-02.11.2014] R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Benchmark testing * fidelity criteria * texture Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0439654.pdf

  12. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is to the application: memory, processor, computation and storage.
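
    The weighting scheme lends itself to a one-line score per VM; a minimal sketch with made-up group scores and weights (the methodology's actual micro-benchmarks supply the scores):

        import numpy as np

        # Normalized group scores per VM: memory, processor, computation, storage.
        vms = {"vm.small": [0.4, 0.3, 0.2, 0.6],
               "vm.medium": [0.7, 0.6, 0.5, 0.5],
               "vm.large": [0.9, 0.9, 0.8, 0.4]}
        w = np.array([0.5, 0.2, 0.2, 0.1])   # user-supplied importance weights

        # Rank VMs by weighted score, highest first.
        for name in sorted(vms, key=lambda v: -np.dot(w, vms[v])):
            print(name, round(float(np.dot(w, vms[name])), 3))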

  13. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  14. Final evaluation of the CB3+burnup credit benchmark addition

    International Nuclear Information System (INIS)

    In 1996 a series of benchmarks focused on the application of burnup credit in the WWER spent fuel management system was launched by L. Markova (1). The four phases of the proposed benchmark series corresponded to the phases of the Burnup Credit Criticality Benchmark organised by the OECD/NEA. These phases, referred to as the CB1, CB2, CB3 and CB4 benchmarks, were designed to investigate the main features of burnup credit in WWER spent fuel management systems. In the CB1 step, the multiplication factor of an infinite array of spent fuel rods was calculated taking the burnup, cooling time and different groups of nuclides as parameters. The fuel composition was given in the benchmark specification (Authors)

  15. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  16. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  17. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
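
    The restricted nearest-neighbor matching and the derived rates can be sketched as follows, assuming made-up coordinates; each reference tree may be claimed by at most one detection within a fixed search radius.

        import numpy as np

        detected = np.array([[1.0, 1.1], [5.2, 4.9], [9.0, 9.2], [3.0, 7.5]])
        reference = np.array([[1.2, 1.0], [5.0, 5.0], [9.1, 9.0], [2.0, 2.0], [7.0, 7.2]])
        radius = 1.0  # m, restriction on the nearest-neighbor search

        dists = np.linalg.norm(detected[:, None, :] - reference[None, :, :], axis=2)
        claimed, n_match = set(), 0
        for i in range(len(detected)):                 # greedy matching
            for j in np.argsort(dists[i]):
                if dists[i, j] <= radius and j not in claimed:
                    claimed.add(j)
                    n_match += 1
                    break

        print("matching rate:", n_match / len(reference))                 # correctly matched
        print("omission:", (len(reference) - n_match) / len(reference))   # missed trees
        print("commission:", (len(detected) - n_match) / len(detected))   # false detections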

  18. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
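
    One widely used whole-facility metric of the kind such guides track is Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy; the readings below are invented for illustration.

        total_facility_kwh = 1_450_000   # hypothetical monthly meter reading
        it_equipment_kwh = 820_000       # hypothetical IT (UPS output) energy

        pue = total_facility_kwh / it_equipment_kwh
        dcie = 1.0 / pue                 # DCiE: IT share of total energy
        print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")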

  19. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge about the dependence of the numerical errors on discretization parameters (e.g. size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed that the CSNI sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report
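
    The dependence of numerical error on mesh size that such benchmarks probe is commonly quantified by the observed order of convergence and a Richardson extrapolation; a sketch with hypothetical results on three systematically refined meshes:

        import numpy as np

        h = np.array([0.04, 0.02, 0.01])        # mesh sizes, refinement ratio r = 2
        f = np.array([1.1050, 1.0262, 1.0065])  # hypothetical computed results

        r = h[0] / h[1]
        p = np.log((f[0] - f[1]) / (f[1] - f[2])) / np.log(r)  # observed order
        f_extrap = f[2] + (f[2] - f[1]) / (r**p - 1)           # Richardson estimate
        print(f"observed order p = {p:.2f}, extrapolated value = {f_extrap:.4f}")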

  20. Quantitative consistency testing of thermal benchmark lattice experiments

    International Nuclear Information System (INIS)

    The paper sets forth a general method to demonstrate the quantitative consistency (or inconsistency) of results of thermal reactor lattice experiments. The method is of particular importance in selecting standard "benchmark" experiments for comparison testing of lattice analysis codes and neutron cross sections. "Benchmark" thermal lattice experiments are currently selected by consensus, which usually means the experiment is geometrically simple, well-documented, reasonably complete, and qualitatively consistent. A literature search has not revealed any general quantitative test that has been applied to experimental results to demonstrate consistency, although some experiments must have been subjected to some form or other of quantitative test. The consistency method is based on a two-group neutron balance condition that is capable of revealing the quantitative consistency (or inconsistency) of reported thermal benchmark lattice integral parameters. This equation is used in conjunction with a second equation in the following discussion to assess the consistency (or inconsistency) of: (1) several Cross Section Evaluation Working Group (CSEWG) defined thermal benchmark lattices, (2) SRL experiments on the Mark 5R and Mark 15 lattices, and (3) several D2O lattices encountered as proposed thermal benchmark lattices. Nineteen thermal benchmark lattice experiments were subjected to a quantitative test of consistency between the reported experimental integral parameters. Results of this testing showed only two lattice experiments to be generally useful as "benchmarks," three lattice experiments to be of limited usefulness, three lattice experiments to be potentially useful, and 11 lattice experiments to be not useful. These results are tabulated with the lattices identified.

  1. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined, along with the effects of degraded and uncorrected motion feedback and the efficiency of the pilot's instrument scanning. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  2. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano; Ferrara, Liberato; Krenzer, Knut; Mechtcherine, Viktor; Shyshko, Sergiy; Skocec, Jan; Spangenberg, Jon; Svec, Oldrich; Thrane, Lars Nyholm; Vasilic, Ksenija

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  3. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    DEFF Research Database (Denmark)

    Menze, Bjoern H.; Jakab, Andras; Bauer, Stefan;

    2015-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- a...

  4. Entropy-based benchmarking methods

    OpenAIRE

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  5. Benchmarking in the Semantic Web

    OpenAIRE

    García-Castro, Raúl; Gómez-Pérez, A.

    2009-01-01

    The Semantic Web technology needs to be thoroughly evaluated for providing objective results and obtaining massive improvement in its quality; thus, the transfer of this technology from research to industry will speed up. This chapter presents software benchmarking, a process that aims to improve the Semantic Web technology and to find the best practices. The chapter also describes a specific software benchmarking methodology and shows how this methodology has been used to benchmark the inter...

  6. Selecting benchmarks for reactor calculations

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Helgesson, Petter; Pomp, Stephan; Österlund, Michael; Rochman, Dimitri; Koning, Arjan J.

    2014-01-01

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However the selection of these benchmarks is usually done by visual inspection, which is dependent on the expertise and the experience of the user, thereby resulting in a user...

  7. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  8. CT Scans

    Science.gov (United States)

    ... cross-sectional pictures of your body. Doctors use CT scans to look for broken bones, cancers, blood clots, signs of heart disease, and internal bleeding. During a CT scan, you lie still on a table. The table ...

  9. Thyroid scan

    Science.gov (United States)


  10. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation as well, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be decreased with regard to simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  11. Proposal of an innovative benchmark for accuracy evaluation of dental crown manufacturing.

    Science.gov (United States)

    Atzeni, Eleonora; Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro

    2012-05-01

    An innovative benchmark representing a dental arch with classic features corresponding to different kinds of prepared teeth is proposed. Dental anatomy and general rules for tooth preparation are taken into account. This benchmark includes tooth orientation and provides oblique surfaces similar to those of real prepared teeth. The benchmark is produced by additive manufacturing (AM) and subjected to digitization by a dental three-dimensional scanner. The evaluation procedure proves that the scan data can be used as a reference model for crown restoration design. This benchmark therefore forms the basis for comparative studies of different CAD/CAM and AM techniques for dental crowns. PMID:22364825

  12. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins, Robert

    2010-01-01

    Benchmarking studies enjoy growing popularity in the field of regional policy. This paper analyzes the concept of regional benchmarking and its links with regional policymaking processes. I develop a typology of regional benchmarking exercises and benchmarkers, and subject the literature to a critical review. I argue that the critics of regional benchmarking do not appreciate the variety and develop...

  13. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of a complete reconstruction of the experiment and by large amounts of input data (dependent and independent) of very different reliability. They should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and the isotopic composition of individual materials. The calculations of critical facility experiments are produced for benchmark models generated by specific reconstruction codes, each with its own features when adjusting model parameters, and using a nuclear data library. A generated benchmark model providing agreement between the calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between the calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  14. Shielding benchmark test

    International Nuclear Information System (INIS)

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with a shielding analysis code system, RADHEAT-V4, developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  15. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods, and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
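
    Of the metrics listed, the centered root mean square error is straightforward to compute; a sketch on synthetic monthly series (the benchmark's own truth series are not reproduced here):

        import numpy as np

        def centered_rmse(x, t):
            # RMSE after removing each series' mean (anomalies only).
            return np.sqrt(np.mean(((x - x.mean()) - (t - t.mean())) ** 2))

        rng = np.random.default_rng(0)
        truth = rng.normal(size=120)                    # 10 years of monthly values
        homogenized = truth + rng.normal(0, 0.3, 120)   # imperfect correction
        print(f"CRMSE = {centered_rmse(homogenized, truth):.3f}")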

  16. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  17. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  18. Benchmark experiments for nuclear data

    International Nuclear Information System (INIS)

    Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)

  19. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  20. MHTGR-350 Benchmark Analysis by MCS Code

    International Nuclear Information System (INIS)

    This benchmark contains various problems in three phases, which require results for neutronics, thermal-fluids solutions, transient calculations, and depletion calculations. The Phase-I Exercise-1 problem was solved with the MCS Monte Carlo (MC) code developed at UNIST. The global parameters and power distribution were compared with the results of the McCARD MC code developed by SNU and a finite element method (FEM) based diffusion code, CAPP, developed by KAERI. The MHTGR-350 benchmark Phase-I Exercise 1 was solved with MCS. The results of MCS are compared with those of McCARD and CAPP. The results of the MCS code showed good agreement with those of the McCARD code, while they showed considerable disagreement with those of the CAPP code, which can be attributed to the fact that CAPP is a diffusion code while the others are MC transport codes.

  1. Gd-2 fuel cycle Benchmark (version 1)

    International Nuclear Information System (INIS)

    A new benchmark based on the Dukovany NPP Unit 3 history of Gd-2 fuel type utilisation is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculations. Input data, including the definition of the initial state, are described in this paper. The requested output data format for automatic processing is defined. This paper includes: a) a fuel description; b) the definition of the starting point and five fuel cycles with profiled fuel (3.82% only); c) the definition of four fuel cycles with Gd-2 fuel (enr. 4.25%); d) recommendations for the calculation; e) a list of parameters for comparison; f) the methodology of comparison; and g) an example of a results comparison. (Authors)

  2. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  3. Correlation between displacement of GTV and volumetric parameters of primary tumor in thoracic esophageal cancer based on repeated 4DCT scans during radiotherapy

    International Nuclear Information System (INIS)

    Objective: To investigate the correlation between the displacement of gross tumor volume (GTV) and the volume, length, and largest diameter of primary tumor in thoracic esophageal cancer based on repeated enhanced four-dimensional computed tomography (4DCT) scans during fractionated radiotherapy. Methods: Thirty enrolled patients with thoracic esophageal cancer underwent enhanced 4DCT scans before radiotherapy and every ten fractions during radiotherapy. The displacements of GTV in left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions in each scan were obtained, and then the correlation between the displacements and the volume, length, and largest diameter of primary tumor was analyzed. Results: In the 20th fraction, a significant positive correlation was observed between the displacement of GTV in LR direction and the volume of primary tumor for all patients and the patients with lower-thoracic esophageal cancer (P = 0.012 and 0.040), between the displacement of GTV in SI direction and the length of primary tumor for all patients and the patients with upper- and middle-thoracic esophageal cancer (P = 0.003, 0.031, and 0.044), and between the displacement of GTV in LR direction and the length of primary tumor for the patients with lower-thoracic esophageal cancer (P = 0.027). At the first 4DCT scan before radiotherapy, a significant positive correlation was observed between the largest diameter of primary tumor and the displacement of GTV in LR and SI directions and three-dimensional vector for all patients (P = 0.036, 0.033, and 0.018) and between the largest diameter of primary tumor and the displacement of GTV in LR direction for the patients with lower-thoracic esophageal cancer (P = 0.011). Conclusions: In different fractions of radiotherapy, no significant correlation is found between the displacement of GTV in AP direction and the volume, length, and largest diameter of primary tumor in patients with lower-, middle-, or upper
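
    The correlation analysis reported here is, at its core, a per-fraction correlation test between displacement and a tumor descriptor; a sketch with invented per-patient values (a significance threshold of P < 0.05 is assumed):

        import numpy as np
        from scipy import stats

        lr_displacement = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])     # mm, hypothetical
        tumor_volume = np.array([21.0, 35.5, 18.2, 44.1, 30.3, 39.8])  # cm^3, hypothetical

        r, p = stats.pearsonr(lr_displacement, tumor_volume)
        print(f"r = {r:.2f}, P = {p:.3f}")   # P < 0.05 indicates a significant correlation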

  4. Developing scheduling benchmark tests for the Space Network

    Science.gov (United States)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests were developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  5. Selecting benchmarks for reactor calculations

    International Nuclear Information System (INIS)

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However the selection of these benchmarks is usually done by visual inspection, which is dependent on the expertise and the experience of the user, thereby resulting in a user bias in the process. In this paper we present a method for the selection of these benchmarks for reactor applications and uncertainty reduction based on the Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
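
    The correlation-based similarity at the heart of the method can be sketched with synthetic Total Monte Carlo outputs: k-eff values computed with the same N random nuclear-data files for the application and for each candidate benchmark, with the correlation coefficient used as the similarity measure.

        import numpy as np

        rng = np.random.default_rng(42)
        common = rng.normal(size=500)   # stand-in for the shared nuclear-data variation
        application = 1.000 + 0.0040 * common + 0.001 * rng.normal(size=500)
        benchmarks = {
            "benchmark-1": 0.995 + 0.0038 * common + 0.001 * rng.normal(size=500),
            "benchmark-2": 1.010 + 0.0005 * common + 0.002 * rng.normal(size=500),
        }

        for name, keff in benchmarks.items():
            c = np.corrcoef(application, keff)[0, 1]   # similarity to the application
            print(name, round(float(c), 3))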

  6. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch original: Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen, and driving on petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gases than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels, and electric driving. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient use.]

  7. Cleanroom energy benchmarking results

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft², is not a good indicator given the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  8. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
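
    The geometric-versus-arithmetic issue is easy to see numerically: with one outlier query, the two means diverge sharply, which is exactly the sensitivity debated for the TPC-D single-stream metric (timings below are invented).

        import math

        times = [2.0, 3.0, 2.5, 3.5, 120.0]   # per-query elapsed seconds, hypothetical

        arithmetic = sum(times) / len(times)
        geometric = math.exp(sum(math.log(t) for t in times) / len(times))
        print(f"arithmetic = {arithmetic:.1f} s, geometric = {geometric:.1f} s")
        # The geometric mean (~5.8 s) downweights the outlier that dominates
        # the arithmetic mean (~26.2 s).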

  9. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  10. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing...

  11. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
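
    As a minimal sketch of the comparison such a tool automates (the peer energy use intensities below are hypothetical, not Cal-Arch or CBECS data), a building can be placed within a peer distribution by percentile rank:

      # Hypothetical peer energy use intensities (kWh/ft^2-yr) for one building type.
      peers = [12.1, 14.7, 15.3, 16.0, 17.8, 19.2, 21.5, 24.9, 28.3, 33.0]
      my_eui = 18.4

      # Percentile rank: fraction of peers that use less energy than this building.
      rank = 100.0 * sum(p < my_eui for p in peers) / len(peers)
      print(f"EUI {my_eui} kWh/ft^2-yr sits at the {rank:.0f}th percentile of its peers")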

  12. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Catalina SITNIKOV; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  13. Gaia FGK benchmark stars: new candidates at low metallicities

    Science.gov (United States)

    Hawkins, K.; Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Casagrande, L.; Gilmore, G.; Lind, K.; Magrini, L.; Masseron, T.; Pancino, E.; Randich, S.; Worley, C. C.

    2016-07-01

    Context. We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recently proposed set of Gaia FGK benchmark stars has up to five metal-poor stars but no recommended stars within the metallicity range -2.0 < [Fe/H] < -1.0 dex. Full tables are available at the CDS via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A70

  14. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a...... market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element of performance management in organizations. Our study explores a particular type...

  15. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  16. Benchmarking Developing Asia's Manufacturing Sector

    OpenAIRE

    Felipe, Jesus; Gemma ESTRADA

    2007-01-01

    This paper documents the transformation of developing Asia's manufacturing sector during the last three decades and benchmarks its share in GDP with respect to the international regression line by estimating a logistic regression.

  17. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  18. Determination of crystallization kinetics parameters of a Li1.5Al0.5Ge1.5(PO4)3 (LAGP) glass by differential scanning calorimetry

    OpenAIRE

    A.M. Rodrigues; J. L. Narváez-Semanate; A. A. Cabral; A. C. M. Rodrigues

    2013-01-01

    Crystallization kinetics parameters of a stoichiometric glass with the composition Li1.5Al0.5Ge1.5(PO4)3 were investigated by subjecting parallelepipedal samples (3 × 3 × 1.5 mm) to heat treatment in a differential scanning calorimeter at different heating rates (3, 5, 8 and 10 °C/min). The data were analyzed using Ligero's and Kissinger's methods to determine the activation energy (E) of crystallization, which yielded, respectively, E = 415 ± 37 kJ/mol and 378 ± 19 kJ/mol. Ligero's method ...
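
    Kissinger's method extracts E from the shift of the crystallization peak temperature Tp with heating rate beta, via the linear relation ln(beta/Tp^2) = const - E/(R*Tp). A minimal sketch of the fit (the peak temperatures below are invented for illustration, not data from this study):

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      # Heating rates (K/min) and hypothetical crystallization peak temperatures (K).
      beta = np.array([3.0, 5.0, 8.0, 10.0])
      Tp = np.array([873.0, 880.0, 887.0, 891.0])

      # Kissinger plot: ln(beta/Tp^2) versus 1/Tp is a straight line with slope -E/R.
      slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
      E = -slope * R
      print(f"E = {E / 1000:.0f} kJ/mol")  # ~418 kJ/mol for these invented values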

  19. Benchmarking hypercube hardware and software

    Science.gov (United States)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  20. Strategic Behaviour under Regulation Benchmarking

    OpenAIRE

    Jamasb, Tooraj; Nillesen, Paul; Michael G. Pollitt

    2003-01-01

    Liberalisation of generation and supply activities in the electricity sectors is often followed by regulatory reform of distribution networks. In order to improve the efficiency of distribution utilities, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behav...

  1. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was evaluated within the same CPU-characterization methodology, based on the abstract machine model. Benchmark programs were analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized in more detail in this report, along with the smaller efforts supported by this grant.
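
    The merging step admits a compact illustration (a sketch with invented numbers, not the grant's actual characterizer): if the machine characterization gives a cost per abstract-machine operation and the program characterization gives dynamic operation counts, the estimated run time is their dot product.

      # Hypothetical machine characterization: seconds per abstract-machine operation.
      machine = {"fadd": 4.0e-9, "fmul": 5.0e-9, "load": 2.0e-9, "branch": 1.5e-9}

      # Hypothetical program characterization: dynamic operation counts.
      program = {"fadd": 2.0e9, "fmul": 1.5e9, "load": 4.0e9, "branch": 0.8e9}

      # Estimated execution time for this machine/program combination.
      t_est = sum(program[op] * machine[op] for op in program)
      print(f"estimated run time: {t_est:.2f} s")  # 24.70 s for these numbers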

  2. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  3. Benchmark Calculations of OECD/NEA Reactivity-Initiated Accidents

    International Nuclear Information System (INIS)

    The benchmark Phase I was conducted from 2011 to 2013 with a consistent set of four experiments on very similar highly irradiated fuel rods tested under different experimental conditions: low temperature, low pressure, stagnant water coolant, very short power pulse (NSRR VA-1); high temperature, medium pressure, stagnant water coolant, very short power pulse (NSRR VA-3); high temperature, low pressure, flowing sodium coolant, larger power pulse (CABRI CIP0-1); and high temperature, high pressure, flowing water coolant, medium-width power pulse (CABRI CIP3-1). Based on the importance of the thermal-hydraulics aspects revealed during Phase I, the specifications of the benchmark Phase II were elaborated in 2014. The benchmark Phase II focused on a deeper understanding of the differences in modeling between the different codes. Work on the benchmark Phase II program will last until the end of 2015. The benchmark cases for RIA are simulated with the FRAPTRAN 1.5 code, in order to understand the phenomena during RIA and to check the capability of the code itself. The results for enthalpy, cladding strain and outside temperature, among the 21 parameters requested by the benchmark program, are summarized; they seem to reasonably reflect the actual phenomena, except for those of case 6

  4. General scan in flavor parameter space in the models with vector quark doublets and an enhancement in $B\\to X_s\\gamma$ process

    CERN Document Server

    Wang, Wenyu; Zhao, Xin-Yan

    2016-01-01

    In models with vector-like quark doublets, the mass matrices of the up- and down-type quarks are related, and precise diagonalization of these mass matrices has been an obstacle in numerical studies. In this work we first propose a diagonalization method. As an application, in the standard model extended by one vector-like quark doublet we present the quark mass spectrum and the Feynman rules needed for the calculation of $B\to X_s\gamma$. We find that i) under the constraints of the CKM matrix measurements, the mass parameters in the bilinear term are constrained to small values by the small deviation from unitarity; ii) compared with the fourth-generation extension of the standard model, the vector-like quark contribution enhances the $B\to X_s\gamma$ process, resulting in a non-decoupling effect in such models.

  5. Prognostic role of metabolic parameters of 18F-FDG PET-CT scan performed during radiation therapy in locally advanced head and neck squamous cell carcinoma

    International Nuclear Information System (INIS)

    To evaluate the prognostic value of 18F-FDG PET-CT performed in the third week (iPET) of definitive radiation therapy (RT) in patients with newly diagnosed locally advanced mucosal primary head and neck squamous-cell-carcinoma (MPHNSCC). Seventy-two patients with MPHNSCC treated with radical RT underwent staging PET-CT and iPET. The maximum standardised uptake value (SUVmax), metabolic tumour volume (MTV) and total lesional glycolysis (TLG) of primary tumour (PT) and index node (IN) [defined as lymph node(s) with highest TLG] were analysed, and results were correlated with loco-regional recurrence-free survival (LRFS), disease-free survival (DFS), metastatic failure-free survival (MFFS) and overall survival (OS), using Kaplan-Meier analysis. Optimal cutoffs (OC) were derived from receiver operating characteristic curves: SUVmax-PT = 4.25 g/mL, MTVPT = 3.3 cm3, TLGPT = 9.4 g for PT, and SUVmax-IN = 4.05 g/mL, MTVIN = 1.85 cm3 and TLGIN = 7.95 g for IN. Low metabolic values in iPET for PT below the OC were associated with significantly better LRFS and DFS. TLG was the best predictor of outcome, with 2-year LRFS of 92.7 % vs. 71.1 % [p = 0.005, compared with SUVmax (p = 0.03) and MTV (p = 0.022)], DFS of 85.9 % vs. 60.8 % [p = 0.005, compared with SUVmax (p = 0.025) and MTV (p = 0.018)], MFFS of 85.9 % vs. 83.7 % [p = 0.488, compared with SUVmax (p = 0.52) and MTV (p = 0.436)], and OS of 81.1 % vs. 75.0 % [p = 0.279, compared with SUVmax (p = 0.345) and MTV (p = 0.512)]. There were no significant associations between the percentage reduction of primary tumour metabolic parameters and outcomes. In patients with nodal disease, metabolic parameters below the OC (for both PT and IN) were significantly associated with all oncological outcomes, while TLG was again the best predictor: LRFS of 84.0 % vs. 55.3 % (p = 0.017), DFS of 79.4 % vs. 38.6 % (p = 0.001), MFFS 86.4 % vs. 68.2 % (p = 0.034) and OS 80.4 % vs. 55.7 % (p = 0.045). The metabolic parameters of iPET can be

  6. Prognostic role of metabolic parameters of {sup 18}F-FDG PET-CT scan performed during radiation therapy in locally advanced head and neck squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Min, Myo; Forstner, Dion [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Ingham Institute of Applied Medical Research, Liverpool, NSW (Australia); Lin, Peter; Shon, Ivan Ho; Lin, Michael [University of New South Wales, Sydney, NSW (Australia); Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); University of Western Sydney, Sydney, NSW (Australia); Lee, Mark T. [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Bray, Victoria; Fowler, Allan [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); Chicco, Andrew [Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); Tieu, Minh Thi [Calvary Mater Newcastle, Department of Radiation Oncology, Newcastle, NSW (Australia); University of Newcastle, Newcastle, NSW (Australia)

    2015-12-15

    To evaluate the prognostic value of {sup 18}F-FDG PET-CT performed in the third week (iPET) of definitive radiation therapy (RT) in patients with newly diagnosed locally advanced mucosal primary head and neck squamous-cell-carcinoma (MPHNSCC). Seventy-two patients with MPHNSCC treated with radical RT underwent staging PET-CT and iPET. The maximum standardised uptake value (SUV{sub max}), metabolic tumour volume (MTV) and total lesional glycolysis (TLG) of primary tumour (PT) and index node (IN) [defined as lymph node(s) with highest TLG] were analysed, and results were correlated with loco-regional recurrence-free survival (LRFS), disease-free survival (DFS), metastatic failure-free survival (MFFS) and overall survival (OS), using Kaplan-Meier analysis. Optimal cutoffs (OC) were derived from receiver operating characteristic curves: SUV{sub max-PT} = 4.25 g/mL, MTV{sub PT} = 3.3 cm{sup 3}, TLG{sub PT} = 9.4 g for PT, and SUV{sub max-IN} = 4.05 g/mL, MTV{sub IN} = 1.85 cm{sup 3} and TLG{sub IN} = 7.95 g for IN. Low metabolic values in iPET for PT below the OC were associated with significantly better LRFS and DFS. TLG was the best predictor of outcome, with 2-year LRFS of 92.7 % vs. 71.1 % [p = 0.005, compared with SUV{sub max} (p = 0.03) and MTV (p = 0.022)], DFS of 85.9 % vs. 60.8 % [p = 0.005, compared with SUV{sub max} (p = 0.025) and MTV (p = 0.018)], MFFS of 85.9 % vs. 83.7 % [p = 0.488, compared with SUV{sub max} (p = 0.52) and MTV (p = 0.436)], and OS of 81.1 % vs. 75.0 % [p = 0.279, compared with SUV{sub max} (p = 0.345) and MTV (p = 0.512)]. There were no significant associations between the percentage reduction of primary tumour metabolic parameters and outcomes. In patients with nodal disease, metabolic parameters below the OC (for both PT and IN) were significantly associated with all oncological outcomes, while TLG was again the best predictor: LRFS of 84.0 % vs. 55.3 % (p = 0.017), DFS of 79.4 % vs. 38.6 % (p = 0.001), MFFS 86.4 % vs. 68.2 % (p = 0.034) and OS 80.4 % vs. 55.7 % (p = 0

  7. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 {+-} 0.0029 (1s). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  8. CT scan

    Science.gov (United States)

    ... come from a CT scan. Some people have allergies to contrast dye. Let your doctor know if you have ... vein contains iodine. If you have an iodine allergy, a type of contrast may cause nausea or vomiting, sneezing, itching, or ...

  9. MRI Scans

    Science.gov (United States)

    Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Health care professionals use MRI scans to diagnose a variety of conditions, from ...

  10. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
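
    The closed-loop structure is easy to sketch in outline (a deliberately minimal stand-in, not the authors' benchmark or neuromorphic hardware): the controller's output moves a simulated plant subject to an unknown constant force, the new state becomes the next input, and an error-driven rule adapts a feedforward weight.

      import random

      # Minimal one-joint "plant" with an unknown constant external force.
      target, pos, vel = 1.0, 0.0, 0.0
      unknown_force = random.uniform(-0.5, 0.5)
      w = 0.0                              # adaptive feedforward weight
      kp, kd, lr, dt = 5.0, 1.0, 0.5, 0.01

      for step in range(5000):
          err = target - pos
          u = kp * err - kd * vel + w      # controller output
          acc = u + unknown_force          # environment responds to the output...
          vel += acc * dt
          pos += vel * dt                  # ...and the new state is the next input
          w += lr * err * dt               # error-driven adaptation cancels the bias

      print(f"final error: {abs(target - pos):.4f}")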

  11. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  12. ZZ IHEAS-BENCHMARKS, High-Energy Accelerator Shielding Benchmarks

    International Nuclear Information System (INIS)

    Description of program or function: Six kinds of benchmark problems were selected for evaluating the model codes and the nuclear data for intermediate- and high-energy accelerator shielding by the Shielding Subcommittee of the Research Committee on Reactor Physics. The benchmark problems comprise three kinds of neutron production data from thick targets bombarded by protons, alphas and electrons, and three kinds of shielding data for secondary neutrons and photons generated by protons. Neutron and photo-neutron reaction cross section data are also provided for neutrons up to 500 MeV and photons up to 300 MeV, respectively

  13. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  14. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    OpenAIRE

    Dreher, Patrick; Byun, Chansup; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many...
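
    For scale, the computational kernel at the heart of such a benchmark is small; a hedged sketch of PageRank by power iteration on a toy graph (the proposed benchmark additionally covers pipeline stages such as data generation, sorting, and ingest):

      # Toy directed graph as an adjacency list: node -> outgoing links.
      graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
      n, d = len(graph), 0.85              # d is the damping factor

      rank = {v: 1.0 / n for v in graph}
      for _ in range(50):                  # power iteration
          new = {v: (1.0 - d) / n for v in graph}
          for v, outs in graph.items():
              share = d * rank[v] / len(outs)
              for u in outs:
                  new[u] += share
          rank = new

      print({v: round(r, 3) for v, r in rank.items()})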

  15. Structured Light 3D Scanning System Based on Dynamic Parameter Control

    Institute of Scientific and Technical Information of China (English)

    沈杭锦; 吴以凡; 张桦; 吴燕萍

    2013-01-01

    Based on dynamic parameter control, this paper designs a structured-light 3D scanning system using a webcam and a projector. First, the paper uses Zhang Zhengyou's calibration algorithm to calculate the intrinsic parameters and distortion coefficients of the webcam and the projector, then uses triangulation together with Gray-code encoding and decoding to calculate the three-dimensional coordinates of the measured object's surface. A structured-light scanning system generally requires a sealed, opaque dark room, but in practice external illumination may be present; the object's surface may then become too bright, leading to overexposed photographs and degraded scanning accuracy. To eliminate the influence of external illumination on the measurement results, this paper dynamically adjusts two parameters before scanning the object, the webcam's gain level and the projector's projection brightness, which achieves better results. Finally, the experimental results are compared.
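
    The Gray-code step is the easiest part to illustrate in isolation (a minimal sketch, independent of the paper's implementation): projector column indices are encoded so that adjacent columns differ in exactly one projected pattern, which makes decoding robust to single-bit errors at stripe boundaries.

      def gray_encode(n: int) -> int:
          # Adjacent integers map to codewords differing in a single bit.
          return n ^ (n >> 1)

      def gray_decode(g: int) -> int:
          n = 0
          while g:                  # fold the bits back down
              n ^= g
              g >>= 1
          return n

      # Pattern k projects bit k of the Gray code of every projector column.
      columns = list(range(8))
      codes = [gray_encode(c) for c in columns]
      assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
      assert [gray_decode(g) for g in codes] == columns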

  16. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    The test problems utilized in the validation and verification of computer programs within Atomic Energy Research are collected here. This is the first step towards issuing a volume in which tests for VVER reactors are compiled, along with reference solutions and a number of submitted solutions. The benchmarks do not include the ZR-6 experiments, because those have been published, together with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculation. (Author)

  17. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to the NEACRP in 1988 has been calculated by many participants, and the corresponding results are summarized in this report. The results for keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  18. Scanning system

    International Nuclear Information System (INIS)

    An improved transverse-section radionuclide scanning system is described which can be used for medical diagnosis and treatment of humans, particularly for brain investigations. 99mTc is named as a suitable radionuclide. The device described is more sensitive and displays results in a shorter period of time than previously known devices. By means of laser-emitting diodes, signals are continuously transmitted and collected by a rotating frame of offset, interleaving detectors that completely surround the scanning field about a single rotation axis, coaxial with the axis of the head. Signals are processed and displayed by a connected computer. Description in detail, 7 figures. (UWI)

  19. Higgs pair production: choosing benchmarks with cluster analysis

    Science.gov (United States)

    Carvalho, Alexandra; Dall'Osso, Martino; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-04-01

    New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space.
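
    A hedged sketch of the procedure (simplified: a Euclidean distance between binned kinematic distributions stands in for the paper's multi-dimensional test statistic, and the parameter points are synthetic):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage

      rng = np.random.default_rng(0)

      # Synthetic stand-in: for each of 60 parameter-space points, a normalized
      # histogram of one final-state observable, drawn from one of three
      # underlying kinematic shapes plus noise.
      shapes = [np.array([5.0, 3, 1, 1]), np.array([1.0, 4, 4, 1]), np.array([1.0, 1, 3, 5])]
      points = np.array([s + rng.normal(0, 0.3, 4) for s in shapes for _ in range(20)])
      points /= points.sum(axis=1, keepdims=True)

      # Cluster the points and represent each cluster by the member closest
      # to its centroid -- that member serves as the benchmark point.
      labels = fcluster(linkage(points, method="average"), t=3, criterion="maxclust")
      for c in np.unique(labels):
          members = np.where(labels == c)[0]
          centroid = points[members].mean(axis=0)
          rep = members[np.argmin(np.linalg.norm(points[members] - centroid, axis=1))]
          print(f"cluster {c}: {len(members)} points, benchmark = point {rep}")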

  20. In-core fuel management benchmarks for PHWRs

    International Nuclear Information System (INIS)

    Under its in-core fuel management activities, the IAEA set up two co-ordinated research programmes (CRPs) on complete in-core fuel management code packages. At a consultants meeting in November 1988, the outline of the CRP on in-core fuel management benchmarks for PHWRs was prepared, three benchmarks were specified and the corresponding parameters were defined. At the first research co-ordination meeting in December 1990, seven more benchmarks were specified. The objective of this TECDOC is to provide reference cases for the verification of code packages used for reactor physics and fuel management of PHWRs. 91 refs, figs, tabs

  1. Isospin-Violating Dark Matter Benchmarks for Snowmass 2013

    CERN Document Server

    Feng, Jonathan L; Marfatia, Danny; Sanford, David

    2013-01-01

    Isospin-violating dark matter (IVDM) generalizes the standard spin-independent scattering parameter space by introducing one additional parameter, the neutron-to-proton coupling ratio f_n/f_p. In IVDM the implications of direct detection experiments can be altered significantly. We review the motivations for considering IVDM and present benchmark models that illustrate some of the qualitatively different possibilities. IVDM strongly motivates the use of a variety of target nuclei in direct detection experiments.
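
    The single extra parameter is easy to make concrete. Neglecting form factors and reduced-mass differences between isotopes (a deliberate simplification), the coherent spin-independent rate on a nucleus scales as [Z + (A-Z) f_n/f_p]^2, summed over isotopes; a sketch for xenon at the often-quoted value f_n/f_p = -0.7:

      # Dominant xenon isotopes: (mass number A, natural abundance); Z = 54.
      XENON = [(128, 0.0192), (129, 0.2644), (130, 0.0408), (131, 0.2118),
               (132, 0.2689), (134, 0.1044), (136, 0.0887)]
      Z = 54

      def coherent_rate(fn_over_fp: float) -> float:
          # Spin-independent coherent scaling; form factors and reduced-mass
          # differences are neglected (illustrative only).
          return sum(ab * (Z + (A - Z) * fn_over_fp) ** 2 for A, ab in XENON)

      suppression = coherent_rate(-0.7) / coherent_rate(1.0)
      print(f"xenon rate at f_n/f_p = -0.7: {suppression:.1e} of the isoscalar rate")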

  2. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  3. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  4. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  5. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data. (orig.)

  6. Accuracy of three-dimensional CT scan parameters for guiding total hip arthroplasty

    Institute of Scientific and Technical Information of China (English)

    马纪坤; 朱凤臣; 张铭华

    2015-01-01

    Objective: To investigate the value of preoperative three-dimensional CT scan parameters in restoring postoperative limb length and reducing postoperative dislocation in patients undergoing total hip arthroplasty (THA). Methods: Clinical data of two groups involving 86 cases of primary unilateral THA were included. In the study group, 45 cases were operated on using measurements of the contralateral acetabular anteversion, acetabular abduction angle, distance from the femoral neck osteotomy to the center of rotation of the femoral head, and distance from the tip of the greater trochanter to the center of rotation, based on three-dimensional CT reconstruction. Another 41 cases undergoing conventional surgery without these parameters served as controls. Surgical efficacy was compared to verify the role of CT scan parameters in restoring postoperative limb length and reducing postoperative dislocation. Results: All 86 patients were followed up, for (11.2 ± 6.2) months in the study group and (11.6 ± 6.2) months in the control group. The Harris score at 3 months after surgery was (87.2 ± 5.4) in the study group versus (80.9 ± 7.9) in the control group (P < 0.05). Postoperative limb length discrepancy was (0.4 ± 0.2) cm in the study group versus (1.1 ± 0.4) cm in the control group (P < 0.05). One dislocation occurred within 3 months after surgery in each group. Conclusion: Preoperative measurement of three-dimensional CT reconstruction parameters provides useful guidance in THA.

  7. Evaluation of results of benchmark test on CCF

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity to demonstrate and justify their interpretations of events and their methods and models for analyzing CCF. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. In this report the progress and the results of the benchmark are summarized and assessed by GRS. Furthermore, resulting proposals are given for the ongoing development of the PSA documents. The benchmark test was carried out in two steps. In the first step the participants were to assess, in a qualitative way, some 200 event reports on isolation valves. They were then to establish, quantitatively, the reliability parameters for CCF in the two groups of motor-operated valves, using their own methods and calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference set of well-defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  8. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies, and benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to improve; it establishes a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of South American companies whose operating realities are similar, for example in terms of prices, availability of labor, and community relations, are compared. Within this context, a comparative evaluation of results among natural gas transportation companies is becoming an essential management instrument to support decision-making. (author)

  9. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  10. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that demonstrate the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition of 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement by Namito et al. at KEK of Compton-scattered spectra for 20-40 keV linearly polarized photons, which was a central part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam incident on the front face of a water tank containing air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  11. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  12. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. The results are compared with those of the COG Monte Carlo computer code and with either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  13. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study compares these websites against a list of criteria and presents the services most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  14. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal-hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is heavily based on the success of the PBMR-400 exercise.

  15. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of an underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte Carlo methods, the successive orders of scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters of both reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al., among others. However, the case of strongly elongated phase functions, such as those of clouds, treated with high angular resolution is presented here for the first time. Also, in contrast to other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  16. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  17. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  18. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  19. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  20. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Henning, Theuns; Essakali, Mohammed Dalil; Oh, Jung Eun

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  1. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  2. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  3. Hybrid BN-600 core benchmark analyses

    International Nuclear Information System (INIS)

    The cross section library KAFAX used for the BN-600 core benchmark calculations was based on the nuclear data files ENDF/B-VI and JEF-2.2. Effective cross sections were generated with a homogeneous cell model, with group collapsing from 80 to 9 groups. Core neutron flux calculations were done by coarse-mesh nodal diffusion approximation (DIF3D code), nodal simplified P2 transport calculation (SOLTRAN code), and discrete SN approximation (TWODAT code). First-order perturbation theory was used for the reactivity parameter calculations. The code applied for the burnup calculation was the three-dimensional code REBUS-3, with a 9-group cross section library from the basic neutronic calculations. Results obtained include: multiplication factor k-eff at the beginning and at the end of the cycle, reactivity burnup loss, fuel Doppler coefficient and sodium density coefficient. Results of the heterogeneity calculations include k-eff, control rod worth and sodium density coefficient

  4. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    Science.gov (United States)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-13 Tolbachik flow, Kamchatka, Russia, to 80%. We also can evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
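
    The two posterior metrics above can be computed directly from gridded forecast and observation masks. The following minimal Python sketch (with hypothetical array names, reading A as observed inundation and B as forecast inundation) illustrates one plausible implementation; it is not the MOLASSES analysis code itself.

        import numpy as np

        def predictive_values(forecast, observed):
            # forecast, observed: boolean grids marking inundated cells.
            # P(A|B): fraction of cells forecast wet that were truly wet.
            # P(notA|notB): fraction of cells forecast dry that stayed dry.
            forecast = np.asarray(forecast, dtype=bool)
            observed = np.asarray(observed, dtype=bool)
            p_pos = (forecast & observed).sum() / forecast.sum()
            p_neg = (~forecast & ~observed).sum() / (~forecast).sum()
            return p_pos, p_neg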

  5. Gaming in a benchmarking environment. A non-parametric analysis of benchmarking in the water sector

    OpenAIRE

    De Witte, Kristof; Marques, Rui

    2009-01-01

    This paper discusses the use of benchmarking in general and its application to the drinking water sector. It systematizes the various classifications on performance measurement, discusses some of the pitfalls of benchmark studies and provides some examples of benchmarking in the water sector. After presenting in detail the institutional framework of the water sector of the Belgian region of Flanders (without benchmarking experiences), Wallonia (recently started a public benchmark) and the Net...

  6. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp of “Benchmarking: The search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  7. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
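
    As a sketch of the kind of metric the report describes, the snippet below derives a simple energy use intensity (kWh per square foot) benchmark from hypothetical utility data and flags stores that fall well outside the portfolio norm. The store data and the one-standard-deviation threshold are illustrative assumptions, not the report's method.

        import statistics

        # Hypothetical utility records: annual kWh and floor area per store.
        stores = {
            "store_01": {"kwh": 410000, "sqft": 2400},
            "store_02": {"kwh": 530000, "sqft": 2600},
            "store_03": {"kwh": 395000, "sqft": 2500},
        }

        # Energy use intensity (kWh/sqft) as the benchmark metric.
        eui = {name: s["kwh"] / s["sqft"] for name, s in stores.items()}
        mean = statistics.mean(eui.values())
        stdev = statistics.stdev(eui.values())

        # Flag stores more than one standard deviation above the portfolio mean.
        outliers = [n for n, v in eui.items() if v > mean + stdev]
        print(eui, outliers)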

  9. Local Innovation Systems and Benchmarking

    OpenAIRE

    Cantner, Uwe

    2008-01-01

    This paper reviews approaches used for evaluating the performance of local or regional innovation systems. This evaluation is performed by a benchmarking approach in which a frontier production function can be determined, based on a knowledge production function relating innovation inputs and innovation outputs. In analyses on the regional level and especially when acknowledging regional innovation systems those approaches have to take into account cooperative invention and innovation - the c...

  10. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  11. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  12. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A&M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point.

  13. Prismatic VHTR neutronic benchmark problems

    International Nuclear Information System (INIS)

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point

  14. An introduction to benchmarking in healthcare.

    Science.gov (United States)

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  15. Benchmarking cloud performance for service level agreement parameters

    OpenAIRE

    Gillam, L.; Li, B.; O'Loughlin, J.

    2014-01-01

    Infrastructure as a Service (IaaS) Clouds offer capabilities for the high-availability of a wide range of systems, from individual virtual machines to large-scale high performance computing (HPC) systems. But it is argued that the widespread uptake for such systems will only happen if Cloud providers, or brokers, are able to offer bilateral service level agreements (SLAs). In this paper, we discuss how to measure and use quality of service (QoS) information to be able to predict availability,...

  16. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    Full Text Available The aim of the article. The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of using the highlighted forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis. Under modern conditions of the development of economic relations and business globalization, big companies, enterprises, and organizations realize the necessity of thorough and profound research into the best achievements of market participants, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by studying, learning from, and adapting the best methods of realizing business processes, with the purpose of increasing their operating effectiveness and better satisfying societal needs. The main goals of using benchmarking in Consumer Cooperatives are the following: increasing the level of needs satisfaction by raising product quality, shortening goods transportation terms, and improving service quality; strengthening enterprise potential, competitiveness, and image; and generating new ideas and implementing innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually defining and removing inadequacies that obstruct enterprise development; borrowing the best methods of further enterprise development; gaining competitive advantages; technological innovations; employee motivation. The authors' classification of benchmarking is represented by the following components: by cycle durability strategic, operative

  17. Thyroid Scan and Uptake

    Science.gov (United States)

    Thyroid scan and uptake uses ... the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid scan is a ...

  18. Pelvic CT scan

    Science.gov (United States)

    ... axial tomography scan - pelvis; Computed tomography scan - pelvis; CT scan - pelvis ... Risks of CT scans include: Being exposed to radiation Allergic reaction to contrast dye CT scans do expose you to more radiation ...

  19. Cervical spine CT scan

    Science.gov (United States)

    ... cervical spine; Computed tomography scan of cervical spine; CT scan of cervical spine; Neck CT scan ... Risks of CT scans include: Being exposed to radiation Allergic reaction to contrast dye CT scans expose you to more radiation than ...

  20. Sinus CT scan

    Science.gov (United States)

    ... axial tomography scan - sinus; Computed tomography scan - sinus; CT scan - sinus ... Risks of a CT scan includes: Being exposed to radiation Allergic reaction to contrast dye CT scans expose you to more radiation than regular ...

  1. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  2. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
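
    For reference, the PageRank kernel at the heart of the pipeline reduces to a few lines of linear algebra. The sketch below is a generic power-iteration implementation on a dense adjacency matrix, not the benchmark's reference code; the damping factor 0.85 is the conventional choice.

        import numpy as np

        def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
            # Power-iteration PageRank on a dense adjacency matrix (sketch).
            n = adj.shape[0]
            out_deg = adj.sum(axis=1)
            # Column-stochastic transition matrix; dangling nodes spread uniformly.
            M = np.where(out_deg[:, None] > 0,
                         adj / np.maximum(out_deg, 1)[:, None],
                         1.0 / n).T
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = (1 - damping) / n + damping * (M @ r)
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new
            return r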

  3. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    Energy Technology Data Exchange (ETDEWEB)

    Leal, Luiz C [ORNL]; Ivanov, E. [Institut de Radioprotection et de Surete Nucleaire]

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. The resonance analysis was performed with SAMMY, which fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.

  4. Assessment of a Subchannel Code MATRA for OECD/NRC PSBT Benchmark Exercises

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae Hyun; Kim, Seong Jin; Seo, Kyong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    The OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark was organized on the basis of the NUPEC database. The purposes of the benchmark are to encourage the development of a theoretically based microscopic approach as well as to compare currently available computational approaches. The benchmark consists of two separate phases: a void distribution benchmark and a DNB benchmark. Subchannel-grade void distribution data were employed for validation of a subchannel analysis code under steady-state and transient conditions. The DNB benchmark provided subchannel fluid temperature data which can be used to determine the turbulent mixing parameter for a subchannel code. The NUPEC PWR test facility consists of a high-pressure, high-temperature recirculation loop, a cooling loop, and a data recording system. The void fraction was measured by two different methods: a gamma-ray beam CT scanner system was used to determine the distribution of density/void fraction over the subchannel at steady-state flow and to define the subchannel-averaged void fraction with an accuracy of ±3%, and a multi-beam system was used to measure the chordal-averaged subchannel void fraction in the rod bundle with accuracies of ±4% and ±5% for steady-state and transient conditions, respectively. The purpose of this study is to provide analysis results for the PSBT benchmark problems for void distribution, subchannel mixing, and DNB, as well as to evaluate the applicability of some mechanistic DNB models to the PSBT benchmark data with the aid of subchannel analysis results calculated by the MATRA code

  5. Assessment of a Subchannel Code MATRA for OECD/NRC PSBT Benchmark Exercises

    International Nuclear Information System (INIS)

    The OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark was organized on the basis of the NUPEC database. The purposes of the benchmark are to encourage the development of a theoretically based microscopic approach as well as to compare currently available computational approaches. The benchmark consists of two separate phases: a void distribution benchmark and a DNB benchmark. Subchannel-grade void distribution data were employed for validation of a subchannel analysis code under steady-state and transient conditions. The DNB benchmark provided subchannel fluid temperature data which can be used to determine the turbulent mixing parameter for a subchannel code. The NUPEC PWR test facility consists of a high-pressure, high-temperature recirculation loop, a cooling loop, and a data recording system. The void fraction was measured by two different methods: a gamma-ray beam CT scanner system was used to determine the distribution of density/void fraction over the subchannel at steady-state flow and to define the subchannel-averaged void fraction with an accuracy of ±3%, and a multi-beam system was used to measure the chordal-averaged subchannel void fraction in the rod bundle with accuracies of ±4% and ±5% for steady-state and transient conditions, respectively. The purpose of this study is to provide analysis results for the PSBT benchmark problems for void distribution, subchannel mixing, and DNB, as well as to evaluate the applicability of some mechanistic DNB models to the PSBT benchmark data with the aid of subchannel analysis results calculated by the MATRA code

  6. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    Science.gov (United States)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
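
    The summing, averaging, and integrating steps correspond (up to convention-dependent normalizations) to the standard cumulative-reaction-probability expression for a thermal rate coefficient,

        \[
          k(T) = \frac{1}{h\,Q_r(T)} \int_0^\infty N(E)\, e^{-E/k_B T}\, dE,
          \qquad
          N(E) = \sum_{n,n'} P_{n' \leftarrow n}(E),
        \]

    where Q_r(T) is the reactant partition function per unit volume, P_{n'←n}(E) are the state-to-state reaction probabilities from scattering theory, and the benchmark calculation restricts the sum to total angular momentum J = 0.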

  7. Modelling the benchmark spot curve for the Serbian

    Directory of Open Access Journals (Sweden)

    Drenovak Mikica

    2010-01-01

    Full Text Available The objective of this paper is to estimate Serbian benchmark spot curves using the Svensson parametric model. The main challenges that we tackle are: sparse data, different currency denominations of short and longer term maturities, and infrequent transactions in the short-term market segment vs daily traded medium and long-term market segment. We find that the model is flexible enough to account for most of the data variability. The model parameters are interpreted in economic terms.
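
    In the usual parameterization (Svensson, 1994), the spot rate at maturity τ is

        \[
          y(\tau) = \beta_0
                  + \beta_1 \frac{1 - e^{-\tau/\lambda_1}}{\tau/\lambda_1}
                  + \beta_2 \left( \frac{1 - e^{-\tau/\lambda_1}}{\tau/\lambda_1} - e^{-\tau/\lambda_1} \right)
                  + \beta_3 \left( \frac{1 - e^{-\tau/\lambda_2}}{\tau/\lambda_2} - e^{-\tau/\lambda_2} \right),
        \]

    with β_0 the long-run level, β_1 the slope, and β_2, β_3 curvature terms whose humps are located by the decay parameters λ_1 and λ_2. These six parameters are the ones to which economic interpretations are typically attached.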

  8. (Invited) Microreactors for Characterization and Benchmarking of Photocatalysts

    DEFF Research Database (Denmark)

    Vesborg, Peter Christian Kjærgaard; Dionigi, Fabio; Trimarco, Daniel Bøndergaard;

    2015-01-01

    In the field of photocatalysis the batch-nature of the typical benchmarking experiment makes it very laborious to obtain good kinetic data as a function of parameters such as illumination wavelength, irradiance, catalyst temperature, reactant composition, etc. Microreactors with on-line mass spectrometry... [5] Dionigi et al., Rev. Sci. Instr., 84, p. 103910 (2013); [6] Bøndergaard et al., "Fast and sensitive method for detecting volatile species in liquids", submitted...

  9. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning the ability of algorithms to perform automatic co-registration, accurate point cloud generation and feature extraction from multiplatform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  10. Cross sections, benchmarks, etc.: What is data testing all about

    International Nuclear Information System (INIS)

    In order to determine the consistency of two distinct measurements of a physical quantity, the discrepancy d between the two should be compared with its own standard deviation, σ = √(σ₁² + σ₂²). To properly test a given cross-section library by a set of benchmark (integral) measurements, the quantity corresponding to (d/σ)² is the quadratic form d†C⁻¹d. Here d is the vector of which the components are the discrepancies between the calculated values of the integral parameters and their corresponding measured values, and C is the uncertainty matrix of these discrepancies. This quadratic form is the only true measure of the joint consistency of the library and benchmarks. On the other hand, the very matrix C is essentially all one needs to adjust the library by the benchmarks. Therefore, any argument against adjustment simultaneously disqualifies all serious attempts to test cross-section libraries against integral benchmarks
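
    In display form, the consistency test the abstract describes amounts to the chi-square statistic

        \[
          \chi^2 = \mathbf{d}^\dagger \mathbf{C}^{-1} \mathbf{d},
          \qquad
          \mathbf{C} = \mathbf{C}_\mathrm{calc} + \mathbf{C}_\mathrm{exp},
        \]

    whose expected value for a consistent library-benchmark pair is approximately the number of integral parameters. The decomposition of C into calculational and experimental contributions is an assumption added here for concreteness; the abstract only requires the combined uncertainty matrix.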

  11. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost- effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  12. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  13. MOCUM solutions to the 2-D hexagonal HTTR benchmark problems

    International Nuclear Information System (INIS)

    Highlights: ► Method of characteristics solutions of prismatic heterogeneous HTTR benchmarks are presented. ► Excellent agreements between MOCUM and MCNP5 results were achieved. ► HTTR benchmark problems are insensitive to azimuthal angle number. ► Leonard polar angle performs better than the Gauss–Legendre polar angles. ► Small zone size is required for accurate modeling of the strong flux gradient. - Abstract: The 2-D hexagonal HTTR benchmark problems were calculated by the MOCUM code, whose fundamental methodologies are the method of characteristics and unstructured meshing. MOCUM multiplication factors, fuel block and fuel pin fission rates, and burnable poison and control rod absorption rates of three control rod configurations were compared with MCNP5 reference solutions. Excellent agreements were achieved. Maximum keff and reaction rate differences among the three cases are 0.01% and 0.26%, respectively. Sensitivity studies on parameter impacts are also presented. These benchmark problems are insensitive to the number of azimuthal angles. Small zone size is required for modeling the strong absorbers. Leonard type polar angles perform better than the Gauss–Legendre polar angles
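
    For orientation, the method of characteristics underlying MOCUM sweeps the angular flux along straight tracks; on a track segment of length s crossing a flat-source region, the standard transport relation is

        \[
          \psi_\mathrm{out} = \psi_\mathrm{in}\, e^{-\Sigma_t s}
                            + \frac{q}{\Sigma_t} \left( 1 - e^{-\Sigma_t s} \right),
        \]

    where Σ_t is the total cross section and q the region source. This generic relation, not MOCUM's specific implementation, is why fine zone sizes are needed where the flux gradient is strong: the flat-source assumption degrades across a strongly varying flux.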

  14. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  15. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  16. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA

  17. Thermal fatigue benchmark final - research report

    International Nuclear Information System (INIS)

    DNV (Det Norske Veritas) has analysed a 3D mock-up, loaded with variable temperature. The load is applied to the internal of a pipe, and deviates from the axisymmetrical case. The calculations were performed blind in an international benchmark project. DNV's contribution was funded by SKI. The calculations show the importance of taking the non-axisymmetry into account. An axisymmetrical analysis would underestimate the stresses in the pipe. The temperature field in the mock-up was measured at several locations in the pre-test condition. It turned out to be difficult to capture the measured field by applying only convection and adjusting heat transfer coefficients. The adjustment of the heat transfer coefficient proved to be a major problem. No standard estimate of these parameters was capable of satisfactorily capturing the temperature fields. This highlights the complexity of this kind of problem. It was reported by CEA that modelling of radiation was required for accurately resolving the stresses. The time to crack initiation was computed, as well as crack propagation rates. The computed crack initiation time is significantly longer than the crack propagation time. All results by DNV in terms of maximum stress range, computed design life and crack propagation time are comparable to those obtained by other contributors to the benchmark project. The DNV computed maximum stress range is Δσ = 715 MPa (von Mises). The contributions by other members range from 507 to 805 MPa. The DNV computed fatigue life (from two mean curves, ASME and CEA) ranges from 100,000 to 1,000,000 cycles, depending on different assumptions

  18. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, "Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors", to investigate the accuracy of calculation results obtained by using the codes of the participating countries. This benchmark is made on the basis of assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark works. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system which was prepared for the HTGR development of the individual country. The values of the most important parameter, k_eff, by all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by several institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to get better agreement. (J.P.N.).

  19. Gaia FGK Benchmark Stars: New Candidates At Low-Metallicities

    CERN Document Server

    Hawkins, Keith; Heiter, Ulrike; Soubiran, Caroline; Blanco-Cuaresma, Sergi; Casagrande, Luca; Gilmore, Gerry; Lind, Karin; Magrini, Laura; Masseron, Thomas; Pancino, Elena; Randich, Sofia; Worley, Clare C

    2016-01-01

    We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recent proposed set of Gaia FGK benchmark stars of Heiter et al. (2015) has no recommended stars within the critical metallicity range of $-2.0 <$ [Fe/H] $< -1.0$ dex. In this paper, we aim to add candidate Gaia benchmark stars inside of this metal-poor gap. We began with a sample of 21 metal-poor stars which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. The procedure used to determine the stellar parameters was similar to Heiter et al. (2015) and Jofre et al. (2014) for consistency. The effective temperature (T$_{\\mathrm{eff}}$) of all candidate stars was determined using...

  20. Instrumental fundamental parameters and selected applications of the microfocus X-ray fluorescence analysis at a scanning electron microscope; Instrumentelle Fundamentalparameter und ausgewaehlte Anwendungen der Mikrofokus-Roentgenfluoreszenzanalyse am Rasterelektronenmikroskop

    Energy Technology Data Exchange (ETDEWEB)

    Rackwitz, Vanessa

    2012-05-30

    For a decade, X-ray sources have been commercially available for microfocus X-ray fluorescence analysis (µ-XRF) and offer the possibility of extending the analytics at a scanning electron microscope (SEM) with an attached energy-dispersive X-ray spectrometer (EDS). By using µ-XRF it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free quantification with XRF relies on the Sherman equation, which relates the detected X-ray intensity of a fluorescence peak to the content of the element in the sample by means of fundamental parameters. The instrumental fundamental parameters of µ-XRF at a SEM/EDS system are the excitation spectrum, consisting of the X-ray tube spectrum and the transmission of the X-ray optics, the geometry, and the spectrometer efficiency. Based on a calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties: The algorithms known from the literature for the calculation of the X-ray tube spectrum are evaluated with regard to their deviations in the spectral distribution. Within this work, a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is calculated from the detected one, which is measured at an especially developed setup for the direct measurement of X-ray tube spectra. This emitted X-ray tube spectrum is compared to the one calculated on the basis of the model of this work. A procedure for the determination of the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular

  1. A Privacy-Preserving Benchmarking Platform

    OpenAIRE

    Kerschbaum, Florian

    2010-01-01

    A privacy-preserving benchmarking platform is practically feasible, i.e. its performance is tolerable to the user on current hardware while fulfilling functional and security requirements. This dissertation designs, architects, and evaluates an implementation of such a platform. It contributes a novel (secure computation) benchmarking protocol, a novel method for computing peer groups, and a realistic evaluation of the first ever privacy-preserving benchmarking platform.

  2. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  3. WIPP benchmark II results using SANCHO

    International Nuclear Information System (INIS)

    Results of the second Benchmark problem in the WIPP code evaluation series using the finite element dynamic relaxation code SANCHO are presented. A description of SANCHO and its model for sliding interfaces is given, along with a discussion of the various small routines used for generating stress plot data. Conclusions and a discussion of this benchmark problem, as well as recommendations for a possible third benchmark problem are presented

  4. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  5. The design and analysis of benchmark experiments

    OpenAIRE

    Hothorn, Torsten; Leisch, Friedrich; Zeileis, Achim; Hornik, Kurt

    2003-01-01

    The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms for a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point ...

  6. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  7. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinski, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  8. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
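
    The fixed-time idea can be sketched in a few lines: rather than timing a fixed workload, the harness fixes the time budget and reports how far through a scalable task set the machine progressed. The following Python sketch is a loose illustration of that principle, not the patented system; "task" and the linear growth schedule are placeholders.

        import time

        def fixed_time_benchmark(task, interval_s=60.0):
            # Run ever-larger instances of a scalable task for a fixed interval.
            # task(n) performs the workload at resolution n; the rating is the
            # finest resolution completed before time runs out.
            deadline = time.monotonic() + interval_s
            n = 1
            completed = 0
            while time.monotonic() < deadline:
                task(n)
                completed = n
                n += 1  # or double n, depending on how the task scales
            return completed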

  9. Characterizing universal gate sets via dihedral benchmarking

    Science.gov (United States)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π /8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.
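
    Like standard randomized benchmarking, the protocol extracts an error rate by fitting the mean survival probability to an exponential decay in sequence length. The sketch below fits the generic single-qubit model F(m) = A p^m + B to hypothetical data; the dihedral-specific sequence construction from the paper is not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            # Standard randomized-benchmarking model: F(m) = A * p**m + B.
            return A * p**m + B

        # Hypothetical sequence lengths and mean survival probabilities.
        lengths = np.array([1, 2, 4, 8, 16, 32, 64])
        fidelity = np.array([0.98, 0.97, 0.95, 0.91, 0.84, 0.72, 0.55])

        (A, B, p), _ = curve_fit(rb_decay, lengths, fidelity, p0=(0.5, 0.5, 0.99))
        r = (1 - p) / 2  # average error rate per gate for a single qubit (d = 2)
        print(f"decay p = {p:.4f}, error rate r = {r:.2e}")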

  10. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    The paper presents the revision and the analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculation tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant to pressure vessel dosimetry system validation. (author)

  11. Data Assimilation of Benchmark Experiments for Homogenous Thermal / Epithermal Uranium Systems

    International Nuclear Information System (INIS)

    This presentation reports on the data assimilation of benchmark experiments for homogeneous thermal and epithermal uranium systems. The assimilation method is based on Kalman filters using integral parameters and sensitivity coefficients calculated with MONK9 and ENDF/B-VII data. The assimilation process results in an overall improvement of the calculation-benchmark agreement, and may help in the selection of nuclear data after analysis of adjustment trends
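
    A generic form of such a Kalman-filter adjustment, sketched below with NumPy, updates prior nuclear-data parameters from benchmark discrepancies via sensitivity coefficients. The matrix names are illustrative, and this is a textbook linear update rather than the MONK9/ENDF/B-VII workflow itself.

        import numpy as np

        def gls_adjust(p, Cp, S, d, Cd):
            # p  : prior parameters, Cp their covariance
            # S  : sensitivity matrix of benchmark integral parameters to p
            # d  : benchmark-minus-calculation discrepancies
            # Cd : covariance of the benchmark discrepancies
            K = Cp @ S.T @ np.linalg.inv(S @ Cp @ S.T + Cd)  # Kalman gain
            p_post = p + K @ d
            Cp_post = Cp - K @ S @ Cp
            return p_post, Cp_post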

  12. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    OpenAIRE

    Paul, Kevin

    2010-01-01

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating ...

  13. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  14. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  15. A Scanning Quantum Cryogenic Atom Microscope

    CERN Document Server

    Yang, Fan; Taylor, Stephen F; Turner, Richard W; Lev, Benjamin L

    2016-01-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a noise floor of 300 pT and provides a 100x improvement in magnetic flux sensitivity over previous atomic scanning probe magnetometers. These capabilities are carefully benchmarked by imaging magnet...

  16. The research of the relativity between the scan parameters optimization and radiation dose at orbital helical CT%眼眶部低剂量螺旋CT扫描参数的优化

    Institute of Scientific and Technical Information of China (English)

    江时淦; 洪春凤; 王豪; 白雪冰; 张谭俊雄

    2013-01-01

    Objective: To explore the optimization of low-dose spiral CT scanning parameters for the orbit. Methods: 320 subjects were divided into 16 groups (20 cases each). Tube currents of 140, 110, 80 and 60 mA, slice thicknesses of 2 and 3 mm, and pitches of 0.75 and 1.5 were combined into 16 sets of spiral CT scanning parameters. The mean volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded for each group, and the relationships between tube current, slice thickness, pitch and radiation dose were analyzed. Image quality was evaluated comprehensively in terms of image gradation, background noise, anatomic structure and whether diagnostic requirements were met; image quality grades were analyzed with the rank-sum test. Results: Radiation dose was positively correlated with tube current. When the tube current was reduced from 140 mA to 80 mA, CTDIvol and DLP decreased by 42.84% and 42.86%, respectively, and image quality still met diagnostic requirements with no statistically significant difference (P>0.05); when reduced to 60 mA, the difference in image quality was statistically significant (P<0.05). Radiation dose depended little on slice thickness: when the slice thickness was reduced from 3 mm to 2 mm, CTDIvol and DLP decreased by 9.90% and 12.23%, respectively, and the difference in image quality was not statistically significant (P>0.05), although images at 2 mm were noisier than at 3 mm. Conclusion: Reducing the tube current and increasing the pitch are effective ways to reduce the radiation dose. For orbital spiral CT, a tube current of 80 mA and a pitch of 1.5 balance image quality and radiation dose; the slice thickness can be chosen according to the examination requirements.
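
    The dose trends reported here follow the standard CT dosimetry relations: the weighted CT dose index scales roughly linearly with the tube current-time product, and

        \[
          \mathrm{CTDI}_\mathrm{vol} = \frac{\mathrm{CTDI}_w}{\mathrm{pitch}},
          \qquad
          \mathrm{DLP} = \mathrm{CTDI}_\mathrm{vol} \times L,
        \]

    where L is the scan length. This is why lowering the tube current and increasing the pitch both reduce the reported CTDIvol and DLP.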

  17. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third party studies. This workshop will provide participants with a basic level of understanding why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  18. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter in the...... varied from 1 to 4. In order to keep the comparison fair among the different configurations, the solidity is kept constant and, therefore, the chord length reduced. A second comparison is conducted considering different blade profiles belonging to the symmetric NACA airfoil family. Finally, a chord...
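
    The constant-solidity constraint mentioned above can be made concrete with a small sketch, assuming the common vertical-axis rotor definition of solidity, sigma = N*c/R (conventions differ between studies); the sigma and radius values below are invented, not DeepWind design data.

        # Chord length that keeps rotor solidity fixed as blade count varies,
        # assuming sigma = N * c / R (a common VAWT convention; not taken
        # from the DeepWind report itself).
        def chord_for_constant_solidity(sigma, n_blades, radius_m):
            return sigma * radius_m / n_blades

        for n in range(1, 5):                       # blade number varied 1..4
            print(n, chord_for_constant_solidity(0.2, n, 60.0))  # invented values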

  19. Arithmetic Data Cube as a Data Intensive Benchmark

    Science.gov (United States)

    Frumkin, Michael A.; Shabanov, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
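
    To see why a d-attribute data set yields 2^d views, note that every subset of the attributes defines one group-by view. A minimal illustration in Python (not the NAS ADC implementation):

        # Every subset of the d attributes defines one group-by view, so a
        # data cube has 2^d views in total (including the grand-total view).
        from itertools import combinations

        def cube_views(attributes):
            d = len(attributes)
            for r in range(d + 1):
                for subset in combinations(attributes, r):
                    yield subset

        views = list(cube_views(["a1", "a2", "a3"]))
        print(len(views))  # 2**3 = 8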

  20. Abdominal CT scan

    Science.gov (United States)

    Computed tomography scan - abdomen; CT scan - abdomen; CAT scan - abdomen; CT abdomen and pelvis ... An abdominal CT scan makes detailed pictures of the structures inside your belly (abdomen) very quickly. This test may be used to ...

  1. Shoulder CT scan

    Science.gov (United States)

    CAT scan - shoulder; Computed axial tomography scan - shoulder; Computed tomography scan - shoulder; CT scan - shoulder ... stopping.) A computer creates separate images of the shoulder area. These are called slices. These images can ...

  2. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  3. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  4. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.
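
    The calibration described above amounts to tuning the mass and stiffness matrices so the reduced model reproduces the lowest natural frequencies of the full-order model. The sketch below shows the underlying mode check via the generalized eigenproblem K x = w^2 M x; the matrices and target frequencies are invented placeholders, not the benchmark data.

        # Compare the lowest natural frequencies of a reduced (M, K) model
        # with full-order targets; a calibrator would drive this residual
        # toward zero. All numbers are invented placeholders.
        import numpy as np
        from scipy.linalg import eigh

        def natural_frequencies(K, M, n_modes=3):
            w2, _ = eigh(K, M)                     # generalized eigenvalues w^2
            return np.sqrt(np.abs(w2[:n_modes]))   # lowest n_modes, rad/s

        K = (np.diag([2.0, 2.0, 2.0]) - np.diag([1.0, 1.0], 1)
             - np.diag([1.0, 1.0], -1)) * 1e6      # placeholder stiffness, N/m
        M = np.eye(3) * 1e3                        # placeholder mass, kg
        target = np.array([24.0, 45.0, 58.0])      # invented full-order freqs
        print(natural_frequencies(K, M) - target)  # calibration residual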

  5. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  6. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  7. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  8. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  9. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  10. Beyond Benchmarking: Value-Adding Metrics

    Science.gov (United States)

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  11. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practitioners.

  12. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  13. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article takes a sharp look at the concept of benchmarking by presenting and discussing its different facets. It reviews four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project...

  14. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow to quantify and compare the performance of

  15. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  16. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and on benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming’s PDCA and Six Sigma DMAIC theory. It provides a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking and guided them toward the goals set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  17. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358. ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  18. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software-intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on NASA's top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  19. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Benchmark analyses for the hybrid BN-600 reactor that contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not give a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  20. Static benchmarking of the NESTLE advanced nodal code

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R.D.

    1997-05-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates keff and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well.

  1. Synthetic benchmarks for machine olfaction: Classification, segmentation and sensor damage

    Science.gov (United States)

    Ziyatdinov, Andrey; Perera, Alexandre

    2015-01-01

    The design of signal and data processing algorithms requires a validation stage and data relevant to the validation procedure. While sharing public data sets and making use of them is a recent and still ongoing activity in the community, the synthetic benchmarks presented here are an option for researchers who need data for testing and comparing algorithms under development. The collection of synthetic benchmark data sets was generated for classification, segmentation and sensor damage scenarios, each defined at 5 difficulty levels. The published data are related to the data simulation tool, which was used to create a virtual array of 1020 sensors with a default set of parameters [1]. PMID:26217732

  2. Benchmarking Memory Performance with the Data Cube Operator

    Science.gov (United States)

    Frumkin, Michael A.; Shabanov, Leonid V.

    2004-01-01

    Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.

  3. OECD/Nea benchmark calculations for accelerator driven systems

    International Nuclear Information System (INIS)

    In order to evaluate the performances of the codes and the nuclear data, the Nuclear Science Committee of the OECD/NEA organised in July 1999 a benchmark exercise on a lead-bismuth cooled sub-critical system driven by a beam of 1 GeV protons. The benchmark model is based on the ALMR reference design and is optimised to burn minor actinides using a 'double strata' fuel cycle strategy. Seven organisations (ANL, CIEMAT, KAERI, JAERI, PSI/CEA, RIT and SCK-CEN) have contributed to this exercise using different basic data libraries (ENDF/B-VI, JEF-2.2 and JENDL-3.2) and various reactor calculation methods. Significant discrepancies are observed in important neutronic parameters, such as keff, reactivity swing with burn-up and neutron flux distributions. (author)

  4. RESRAD benchmarking against six radiation exposure pathway models

    International Nuclear Information System (INIS)

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors

  5. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates keff and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well

  6. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

    Integrated circuits have entered the era of IP-based SoC (System on Chip) design, which makes IP core reuse a key issue. SoC test wrapper scan-chain design is an NP-hard problem, and we propose an algorithm based on Differential Evolution (DE) to design the wrapper scan chains. The design of the test wrapper scan chains is achieved through the population's mutation, crossover and selection operations. Experimental verification is carried out on the international standard benchmark ITC’02. The results show that the algorithm can obtain shorter longest wrapper scan chains, compared with other algorithms.
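
    A hedged sketch of how differential evolution can drive such a design: each candidate vector is decoded into an assignment of internal scan segments to wrapper chains, and the fitness is the longest resulting chain. The DE scheme (rand/1/bin style) and all segment lengths below are generic illustrations, not the paper's exact algorithm or the ITC'02 data.

        # Differential evolution for balancing scan segments across W wrapper
        # chains, minimizing the longest chain. Illustrative only.
        import random

        SEGMENTS = [12, 7, 19, 4, 9, 15, 6, 11]   # invented segment lengths
        W = 3                                      # invented wrapper-chain count

        def longest_chain(vec):
            chains = [0] * W
            for length, gene in zip(SEGMENTS, vec):
                chains[int(gene * W) % W] += length   # decode gene -> chain index
            return max(chains)

        def de(pop_size=30, F=0.5, CR=0.9, gens=200):
            dim = len(SEGMENTS)
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                    trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR
                             else pop[i][k] for k in range(dim)]
                    trial = [min(max(x, 0.0), 0.999) for x in trial]  # keep in [0, 1)
                    if longest_chain(trial) <= longest_chain(pop[i]):
                        pop[i] = trial             # greedy selection
            return min(pop, key=longest_chain)

        print(longest_chain(de()))  # best longest-wrapper-chain found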

  7. Theory of second optimization for scan experiment

    CERN Document Server

    Mo, X H

    2015-01-01

    The optimal design of a scan experiment is of great significance both for scientific research and from an economic viewpoint. Two approaches, one having recourse to sampling techniques and the other resorting to analytical proof, are adopted to work out the optimized scan scheme for the relevant parameters. The final results indicate that for an $n$-parameter scan experiment, $n$ energy points are necessary and sufficient for optimal determination of the $n$ parameters; each optimal position can be acquired by a single-parameter scan (sampling method) or by analysis of an auxiliary function (analytic method); and the luminosity allocation among the points can be determined analytically with respect to the relative importance of the parameters. By virtue of the second optimization theory established in this paper, it is feasible to construct the optimal scheme for any scan experiment.
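
    To make the flavor of such an optimization concrete, the toy below (not the paper's derivation) treats a single resonance parameter in a Poisson counting experiment: the Fisher information contributed by an energy point E with luminosity L is (L*dsigma/dp)^2/mu with mu = L*sigma(E; p), and a grid search picks the single most informative point, echoing the one-point-per-parameter result. The cross-section model and all numbers are invented.

        # D-optimal-style choice of one scan point for one parameter,
        # assuming Poisson statistics and an invented resonance model.
        def sigma(E, p):                  # toy Breit-Wigner-like shape, p = peak
            gamma = 0.5
            return 1.0 / ((E - p) ** 2 + gamma ** 2 / 4)

        def fisher(E, p, L=100.0, dp=1e-4):
            dsig = (sigma(E, p + dp) - sigma(E, p - dp)) / (2 * dp)
            mu = L * sigma(E, p)          # expected counts at this point
            return (L * dsig) ** 2 / mu

        p_true = 3.1
        grid = [p_true + 0.01 * k for k in range(-100, 101)]
        best_E = max(grid, key=lambda E: fisher(E, p_true))
        print(best_E)  # lands on a flank of the resonance, not its peak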

  8. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  9. Benchmarking--Measuring and Comparing for Continuous Improvement.

    Science.gov (United States)

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  10. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
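
    The scalability metrics named in the last sentence reduce to simple ratios. A toy illustration with invented numbers (not Geant4 measurements), where memory gain compares one n-thread process against n independent single-threaded processes:

        # Event-throughput speedup and memory gain versus thread count.
        threads = [1, 2, 4, 8]
        events_per_s = [10.0, 19.5, 38.0, 71.0]   # invented throughput
        rss_mb = [1200, 1350, 1650, 2250]         # invented memory footprints

        for n, thr, mem in zip(threads, events_per_s, rss_mb):
            speedup = thr / events_per_s[0]
            mem_gain = n * rss_mb[0] / mem        # vs. n separate processes
            print(f"{n} threads: speedup {speedup:.1f}x, memory gain {mem_gain:.1f}x")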

  11. Benchmarking Calculations of Excitonic Couplings between Bacteriochlorophylls.

    Science.gov (United States)

    Kenny, Elise P; Kassal, Ivan

    2016-01-14

    Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. Understanding these uncertainties can guard against striving for unrealistic precision; at the same time, detailed benchmarks can allow important qualitative questions -- which do not depend on the precise values of the simulation parameters -- to be addressed with greater confidence about the conclusions. PMID:26651217
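
    For reference, the point-dipole approximation that the abstract benchmarks against couples two transition dipoles as V proportional to [mu1.mu2 - 3(mu1.Rhat)(mu2.Rhat)]/R^3. A sketch in the Debye/nm/cm^-1 convention follows; the 5.04 prefactor is the commonly quoted unit conversion and the dipole magnitude is only approximate, so both should be double-checked before reuse.

        # Point-dipole excitonic coupling in cm^-1 for dipoles in Debye and
        # separations in nm (prefactor 5.04 assumed; verify the convention).
        import numpy as np

        def point_dipole_coupling(mu1, mu2, r1, r2, C=5.04):
            R = np.asarray(r2, float) - np.asarray(r1, float)
            dist = np.linalg.norm(R)
            Rhat = R / dist
            kappa = np.dot(mu1, mu2) - 3.0 * np.dot(mu1, Rhat) * np.dot(mu2, Rhat)
            return C * kappa / dist ** 3

        mu = np.array([6.3, 0.0, 0.0])   # ~BChl transition dipole (approximate)
        print(point_dipole_coupling(mu, mu, [0, 0, 0], [0, 1.5, 0]))  # ~59 cm^-1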

  12. KENO-IV code benchmark calculation, (4)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multi-group constants library MGCL. The present paper describes the results of a test using criticality experiments on a slab-cylinder system of uranium nitrate solution. In all, 128 experimental cases have been calculated for the slab-cylinder configuration with and without a plexiglass reflector, spanning critical parameters such as the number of cylinders and the height of the uranium nitrate solution. Among several important results, the code and library give a fairly good multiplication factor, that is, keff ≈ 1.0 for heavily reflected cases, whereas keff ≈ 0.91 for the unreflected ones. This suggests the necessity of a more advanced treatment of the criticality calculation for systems from which neutrons can easily leak during the slowing-down process. (author)

  13. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed

  14. The verification of 3 dimensional nodal kinetics code ANCK using transient benchmark problems

    International Nuclear Information System (INIS)

    A three-dimensional (3D) coupled neutronics and thermal-hydraulics (T/H) code, ANCK/MIDAC, has been developed. ANCK/MIDAC consists of the 3D nodal kinetics code ANCK and the 3D drift-flux T/H code MIDAC. In order to verify the adequacy of ANCK, the kinetics engine of this coupled code, several international benchmark problems have been calculated. The results for the LMW (Langenbuch, Maurer and Werner) benchmark problems, the PWR rod ejection benchmarks and the PWR benchmarks on uncontrolled withdrawal of control rods at zero power are shown in this paper. Comparison with the reference solutions shows very good agreement for the main core parameters. (author)

  15. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  16. Benchmark calculation of nuclear design code for HCLWR

    International Nuclear Information System (INIS)

    In lattice-cell calculations for High Conversion Light Water Reactors, large differences in nuclear design parameters appear among the results obtained with various methods and nuclear data libraries. The validity of a calculation can be verified against critical experiments, but since not many measured data are available, benchmark calculations are also an efficient way to estimate validity over a wide range of lattice parameters and burnup. The benchmark calculations were done by JAERI and MAPI, using SRAC and WIMS-E respectively. The problem covered a wide range of lattice parameters, i.e., from the tight lattice to the current PWR lattice. The comparison was made on the effective multiplication factor, conversion ratio, and reaction rate of each nuclide, including burnup and void effects. The differences in the results are largest at the tightest lattice, but even at that lattice the difference in the effective multiplication factor is only 1.4%. The main cause of the difference is the neutron absorption rate of U-238 in the resonance energy region. The differences in the other nuclear design parameters, and their causes, were also identified. (author)

  17. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health -- with emphasis on hazard and exposure assessment, abatement, training, reporting, and control -- identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is the rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  18. Head CT scan

    Science.gov (United States)

    Brain CT; Cranial CT; CT scan - skull; CT scan - head; CT scan - orbits; CT scan - sinuses; Computed tomography - cranial ... The x-rays produced by the CT scan are painless. Some people may ... hard table. Contrast given through a vein may cause a: Slight ...

  19. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  20. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    databases, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the...... model is performed at a Danish non-profit hospital, where four radiological sites are benchmarked against each other. Because of the multifaceted perspective on performance, the model proved valuable both as a benchmarking tool and as an internal decision support system....

  1. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillations observed at Boiling Water Reactor plants. Stability is part of Northern States Power's Reload Safety Evaluation of the Monticello Plant

  2. Preliminary Neutronics Results for the OECD MHTGR-350 Benchmark

    International Nuclear Information System (INIS)

    The benchmark problem is based on the MHTGR-350 reactor designed by General Atomics (GA). Phase I of the problem has three steady-state exercises: Exercise 1 for stand-alone neutronics with fixed cross sections, Exercise 2 for stand-alone thermal-fluids and Exercise 3 for the coupled steady state. Phase II is defined for coupled transient cases. Phase III is defined to test the depletion capabilities of lattice physics codes and has two exercises: Exercise 1 for the cold state and Exercise 2 for the hot state. In this paper, preliminary results for Exercise 1 of Phase I obtained with the CAPP code and results for Phase III obtained with the McCARD code are presented. Some of the global parameters for Phase I Exercise 1 were compared with those presented by the INL research group and showed good agreement; the results for Phase III were also reasonable. The benchmark is ongoing and more comparisons with the results of other research groups will be made as soon as they are available

  3. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
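
    A minimal sketch of the correlation-based part of such a study: Pearson and Spearman coefficients between each sampled input and one response, computed here on randomly generated stand-in data (300 samples of 17 inputs match the study's dimensions, but none of the values are BISON/Dakota results).

        # Correlation screening of sampled inputs against a toy response.
        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(300, 17))            # stand-in input samples
        y = 2.0 * X[:, 0] + X[:, 3] ** 2 + 0.1 * rng.normal(size=300)

        for j in range(X.shape[1]):
            pr, _ = pearsonr(X[:, j], y)
            sr, _ = spearmanr(X[:, j], y)
            if max(abs(pr), abs(sr)) > 0.3:        # crude screening threshold
                print(f"input {j}: pearson={pr:+.2f} spearman={sr:+.2f}")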

  4. RBC nuclear scan

    Science.gov (United States)

    An RBC nuclear scan uses small amounts of radioactive material to mark (tag) red blood cells (RBCs). Your body is then ... scanner does not give off any radiation. Most nuclear scans (including an RBC scan) are not recommended ...

  5. Heart PET scan

    Science.gov (United States)

    ... nuclear medicine scan; Heart positron emission tomography; Myocardial PET scan ... A PET scan requires a small amount of radioactive material (tracer). This tracer is given through a vein (IV), ...

  6. Coronary Calcium Scan

    Science.gov (United States)

    ... the NHLBI on Twitter. What Is a Coronary Calcium Scan? A coronary calcium scan is a test ... you have calcifications in your coronary arteries. Coronary Calcium Scan Figure A shows the position of the ...

  7. BN-600 hybrid core benchmark Phase III results

    International Nuclear Information System (INIS)

    The main objective of the CRP on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects is to validate, verify and improve the methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at using weapons-grade plutonium for energy production in fast reactors. The BN-600 hybrid reactor was taken as the benchmark. Earlier, two-dimensional and three-dimensional diffusion-theory BN-600 benchmark calculations were performed. This report describes the results of the burnup and heterogeneous calculations done for the proposed BN-600 hybrid core model as part of the Phase III benchmark. The BN-600 benchmark has been analyzed at the beginning of cycle (BOC) with the XSET98 data set and 2-D and 3-D diffusion codes. The 2-D results are compared with the earlier results obtained with the older CV2M data set. The core has been burnt for one cycle using the 3-D burnup code FARCOBAB, and the burnt-core parameters have also been analyzed in 3-D. Heterogeneity effects on reactivity have been computed at BOC. Relative to the use of CV2M data, use of the XSET98 data results in increased magnitudes of the fuel Doppler worth and the sodium density worth. Compared to the 2-D results, in 3-D the keff is lower by about 220 pcm, the sodium density worth is higher by about 30%, and the steel density worth becomes nearly zero or slightly positive, from a negative value in 2-D. The conversion ratio at BOC is 0.669 as computed in 3-D. The burnup reactivity loss due to 140 days at full power (1470 MWt) is 0.0252. The conversion ratio at end of cycle (EOC) is 0.701. The other parameters have been estimated with the SHR-up condition, as required in the Phase III benchmark specifications. The fuel Doppler worth is 7% more negative, the sodium density worth is 16% less positive and the steel density worth is more negative at EOC compared to BOC. The absorber rod (SHR) worth is higher by 4.9% at EOC. The heterogeneity effect (core and SHR combined) on the multiplication factor is small. For mid SHR

  8. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  9. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  10. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  11. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  12. Benchmarking Optimization Software with Performance Profiles

    OpenAIRE

    Dolan, Elizabeth D.; Moré, Jorge J.

    2001-01-01

    We propose performance profiles -- distribution functions for a performance metric -- as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
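
    The construction is compact enough to sketch: for solver s on problem p, the performance ratio is r(p, s) = t(p, s) / min over solvers of t(p, s), and the profile rho_s(tau) is the fraction of problems with ratio at most tau. The timing data below are invented.

        # Dolan-More performance profiles from a small invented timing table.
        import numpy as np

        T = np.array([[1.0, 2.0],     # rows: problems, cols: solvers (seconds)
                      [5.0, 4.0],
                      [3.0, 9.0]])

        ratios = T / T.min(axis=1, keepdims=True)   # r[p][s]

        def profile(solver, tau):
            return float(np.mean(ratios[:, solver] <= tau))

        for tau in (1.0, 2.0, 4.0):
            print(tau, profile(0, tau), profile(1, tau))
        # rho_s(1) is the fraction of problems solver s wins;
        # rho_s(tau) approaches 1 as tau grows.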

  13. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Science.gov (United States)

    Tang, T. F.; Xu, X. Q.; Ma, C. H.; Bass, E. M.; Holland, C.; Candy, J.

    2016-03-01

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has recently been implemented in the BOUT++ framework; it contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform a beta scan based on the "Cyclone base case parameter set." We find that the growth rate is almost the same for the two codes, and the KBM mode is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift-kinetic electron module in the GYRO code, including a small electron-electron collisionality to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for the high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  14. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  15. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times: 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining depletion uncertainty.

  16. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
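    A minimal sketch of the kind of scoring system the framework calls for, combining normalized data-model mismatches into a single skill score per model; the variable names, weights, and the exponential scoring rule are illustrative assumptions, not the paper's definitions:

      import numpy as np

      def skill_score(model, obs, weights):
          """model, obs: dicts of variable name -> time series array;
          weights: dict of variable name -> relative importance."""
          total = 0.0
          for var, w in weights.items():
              rmse = np.sqrt(np.mean((model[var] - obs[var]) ** 2))
              scale = np.std(obs[var])            # normalize by observed variability
              total += w * np.exp(-rmse / scale)  # 1 = perfect, -> 0 = poor
          return total / sum(weights.values())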

  17. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  18. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...

  19. Bundesländer-Benchmarking 2002

    OpenAIRE

    Blancke, Susanne; Hedrich, Horst; Schmid, Josef

    2002-01-01

    The Bundesländer-Benchmarking 2002 is based on a study of selected labour market and economic indicators in the German federal states. Three benchmarkings were carried out using the radar chart method: one considering only labour market indicators; one considering only economic indicators; and one examining a mix of labour market and economic indicators. The Länder were compared with one another in cross-section at two points in time –...

  20. Benchmarking Deep Reinforcement Learning for Continuous Control

    OpenAIRE

    Duan, Yan; Chen, Xi; Houthooft, Rein; Schulman, John; Abbeel, Pieter

    2016-01-01

    Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suit...

  1. Distributional benchmarking in tax policy evaluations

    OpenAIRE

    Thor O. Thoresen; Zhiyang Jia; Peter J. Lambert

    2013-01-01

    Given an objective to exploit cross-sectional micro data to evaluate the distributional effects of tax policies over a time period, the practitioner of public economics will find that the relevant literature offers a wide variety of empirical approaches. For example, studies vary with respect to the definition of individual well-being and to what extent explicit benchmarking techniques are utilized to describe policy effects. The present paper shows how the concept of distributional benchmark...

  2. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  3. Features and technology of enterprise internal benchmarking

    OpenAIRE

    A. V. Dubodelova; Yurynets, O. V.

    2013-01-01

    The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages in internal benchmarking technology is formulated. It is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units by using standard rese...

  4. Overview of CSEWG shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Maerker, R.E.

    1979-01-01

    The fundamental philosophy behind the choosing of CSEWG shielding benchmarks is that the accuracy of a certain range of cross section data be adequately tested. The benchmarks, therefore, consist of measurements and calculations of these measurements. Calculations for which there are no measurements provide little information on the adequacy of the data, although they can perhaps indicate the sensitivity of results to variations in data.

  5. Dukovany NPP fuel cycle benchmark definition

    International Nuclear Information System (INIS)

    A new benchmark based on the operating history of Dukovany NPP Unit-2 is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculations in organisations interested in this task. All necessary data are described in this paper, or references are given where this information can be obtained. Input data are presented in tables, and the requested output data format for automatic processing is described. (Authors)

  6. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes......-related achievement. We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  7. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  8. DWEB: A Data Warehouse Engineering Benchmark

    OpenAIRE

    Darmont, Jérôme; Bentayeb, Fadila; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users comparing the performances of different systems, or help system engineers testing the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they ...

  9. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  10. Influência de alguns parâmetros experimentais nos resultados de análises calorimétricas diferenciais - DSC / Influence of some experimental parameters on the results of differential scanning calorimetry - DSC.

    Directory of Open Access Journals (Sweden)

    Cláudia Bernal

    2002-09-01

    Full Text Available A series of experiments were performed in order to demonstrate to undergraduate students and users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  11. Scan BIST with biased scan test signals

    Institute of Scientific and Technical Information of China (English)

    XIANG Dong; CHEN MingJing; SUN JiaGuang

    2008-01-01

    The conventional test-per-scan built-in self-test (BIST) scheme needs a number of shift cycles followed by one capture cycle. Fault effects received by the scan flip-flops are shifted out while shifting in the next test vector, as in scan testing. Unlike deterministic testing, it is unnecessary to apply a complete test vector to the scan chains. A new scan-based BIST scheme is proposed that properly controls the test signals of the scan chains: different biased random values are assigned to the test signals of the scan flip-flops in separate scan chains. Capture cycles can be inserted at any clock cycle if necessary. A new testability estimation procedure for the proposed testing scheme is presented, and a greedy procedure is proposed to select a weight for each scan chain. Experimental results show that the proposed method can greatly improve the test effectiveness of scan-based BIST, and most circuits can obtain complete fault coverage or come very close to it.
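    A minimal sketch of the central idea, assigning a different bias to the pseudorandom values shifted into each scan chain; the weights, chain length, and pattern count are illustrative assumptions:

      import random

      def biased_scan_patterns(chain_weights, chain_length, n_patterns):
          """chain_weights: probability of shifting a '1' into each chain."""
          return [[[1 if random.random() < w else 0
                    for _ in range(chain_length)]
                   for w in chain_weights]          # one bit stream per chain
                  for _ in range(n_patterns)]

      patterns = biased_scan_patterns([0.2, 0.5, 0.9], chain_length=32,
                                      n_patterns=1000)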

  12. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  13. Karma1.1 benchmark calculations for the numerical benchmark problems and the critical experiments

    International Nuclear Information System (INIS)

    The transport lattice code KARMA 1.1 has been developed at KAERI for the reactor physics analysis of pressurized water reactors. This program includes a multi-group library processed from ENDF/B-VI R8 and also utilizes macroscopic cross sections for the benchmark problems. Benchmark calculations were performed for the C5G7 and KAERI benchmark problems given with seven-group cross sections, for various fuels loaded in the operating pressurized water reactors in South Korea, and for critical experiments including CE, B&W and KRITZ. The benchmark results show that KARMA 1.1 works reasonably well. (author)

  14. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  15. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmark ranges of performance (<6, 6–7.99, and ≥8 seconds). PMID:24174581
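    A minimal sketch of applying the reported benchmarks as a classifier; the three ranges follow the cut points named in the abstract, and the comments paraphrase its reported associations:

      def t25fw_category(seconds):
          if seconds < 6:
              return "T25FW < 6 s"
          elif seconds < 8:
              return "T25FW 6-7.99 s"  # e.g. cane use, some help with IADLs
          else:
              return "T25FW >= 8 s"    # e.g. walker use, inability to do IADLs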

  16. "FULL-CORE" VVER-440 calculation benchmark

    International Nuclear Information System (INIS)

    Because of the difficulties with experimental validation of the pin-by-pin power distribution predicted by macro-codes, we decided to prepare a calculation benchmark named "FULL-CORE" VVER-440. This benchmark is a two-dimensional (2D) calculation benchmark based on the VVER-440 reactor core cold state geometry, taking into account the geometry of the explicit radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in fuel assemblies predicted by the macro-codes that are used for neutron-physics calculations, especially for VVER-440 reactors. The proposal of this benchmark was presented at the 21st Symposium of AER in 2011. The reference solution has been calculated by the MCNP code using the Monte Carlo method, and the results have been published in the AER community. The results of the reference calculation were presented at the 22nd Symposium of AER in 2012. In this paper we compare the available macro-code results for this calculation benchmark.

  17. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  18. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  19. Analysis of PSBT benchmark exercises for void distribution and DNB using a subchannel code MATRA

    International Nuclear Information System (INIS)

    In the framework of the OECD/NRC PSBT benchmark, the subchannel-grade void distribution data and DNB data were evaluated with the subchannel code MATRA. The zone-averaged void fraction in the central region of the 5×5 test bundle was compared with the benchmark data. Optimum values of the turbulent mixing parameter, which is an input parameter for the MATRA code, were evaluated by employing subchannel fluid temperature data. The influence of mixing vanes on the subchannel flow distribution was examined through a CFD analysis. The steady-state DNB benchmark data with uniform and non-uniform axial power shapes were evaluated by several DNB prediction models including an empirical correlation, a CHF lookup table, and representative mechanistic DNB models with subchannel cross-sectionally averaged local properties. (author)

  20. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or, under more general conditions, estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
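    For the two-qubit case with a one-sided channel Λ, the factorization described here can be stated compactly, with the concurrence C as the entanglement measure and |φ⁺⟩ as the maximally entangled benchmark state (cf. Konrad et al., Nature Physics 4, 99 (2008), on which this thesis builds):

      C\big[(\mathrm{id}\otimes\Lambda)\,|\psi\rangle\langle\psi|\big]
        \;=\; C\big[(\mathrm{id}\otimes\Lambda)\,|\phi^{+}\rangle\langle\phi^{+}|\big]\,
              C\big(|\psi\rangle\langle\psi|\big)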

  1. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or, under more general conditions, estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  2. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  3. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  4. Higher education information technology management benchmarking in Europe

    OpenAIRE

    Juult, Janne

    2013-01-01

    Objectives of the Study: This study aims to facilitate the rapprochement of the European higher education benchmarking projects towards a unified European benchmarking project. A total of four higher education IT benchmarking projects are analysed by comparing their categorisation of benchmarking indicators and their data manipulation processes. These four benchmarking projects are compared in this fashion for the first time. The focus is especially on the Finnish Bencheit project's point o...

  5. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying and...... questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the...

  6. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for selecting benchmarks for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  7. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages in internal benchmarking technology is formulated. It is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units by using standard research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article. The sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement by comparing performance and working methods with a standard. It covers the processes of research, organization of production and distribution, and management and marketing methods applied to reference objects to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of benchmarking, identify their characteristics, and study appropriate areas of use and the methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  8. Extraction of electron beam dose parameters from EBT2 film data scored in a mini phantom.

    Science.gov (United States)

    O'Reilly, Dedri; Smit, Cobus J L; du Plessis, Freek C P

    2013-09-01

    Quality assurance of medical linear accelerators includes dosimetric parameter measurement of therapeutic electron beams, e.g. the depth of the 80% relative dose (R₈₀). This parameter must be within a tolerance of 0.2 cm of the declared value. Cumbersome water tank measurements can be regarded as a benchmark for measuring electron depth dose curves. A mini-phantom was designed and built in which a strip of GAFCHROMIC® EBT2 film could be encased tightly for electron beam depth dose measurement. Depth dose data were measured for ELEKTA SL25 MLC, ELEKTA Precise, and ELEKTA Synergy (Elekta Oncology Systems, Crawley, UK) machines. The electron beam energy range was between 4 and 22 MeV among the machines. A 10 × 10 cm² electron applicator with 95 cm source-to-surface distance was used on all the machines. 24 h after irradiation, the EBT2 film strips were scanned on a Canon CanoScan N670U scanner. Afterwards, the data were analysed with in-house developed software that performed optical density to dose conversion and optimal fitting of the PDD data to de-noise the raw data. R₈₀ values were then solved for from the PDD data and compared with acceptance values. A series of tests were also carried out to validate the use of the scanner for film dosimetry. These tests are presented in this study. It was found that this method of R₈₀ evaluation was reliable, with good agreement with benchmark water tank measurements using a commercial parallel plate ionization chamber as the radiation detector. The EBT2 film data yielded R₈₀ values that differed on average by 0.06 cm from benchmark water-tank-measured R₈₀ values. PMID:23794059
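    A minimal sketch of the analysis chain described, assuming a polynomial optical-density-to-dose calibration and linear interpolation on the distal falloff; the coefficients and data handling are illustrative, not the in-house software's actual implementation:

      import numpy as np

      def od_to_dose(od, a, b, c):
          return a * od**2 + b * od + c    # assumed polynomial calibration fit

      def r80(depth, dose):
          """Depth on the distal falloff where the PDD crosses 80%."""
          pdd = dose / dose.max()
          i_max = np.argmax(pdd)
          distal_d, distal_pdd = depth[i_max:], pdd[i_max:]
          # the distal PDD is decreasing; reverse it for np.interp
          return np.interp(0.8, distal_pdd[::-1], distal_d[::-1])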

  9. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report
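    A minimal sketch of the tier-one screening comparison the report describes, assuming exposure concentrations and benchmarks in consistent units; the chemical names and values are placeholders, not the report's benchmark numbers:

      def hazard_quotients(measured, benchmarks):
          """measured, benchmarks: dicts of chemical -> concentration.
          An HQ >= 1 flags the chemical for the second-tier assessment."""
          return {chem: measured[chem] / benchmarks[chem]
                  for chem in measured if chem in benchmarks}

      hq = hazard_quotients({"Cd": 0.8, "Pb": 2.4},   # placeholder exposures
                            {"Cd": 1.0, "Pb": 1.2})   # placeholder benchmarks
      flagged = {chem: q for chem, q in hq.items() if q >= 1}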

  10. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  11. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT(D2O) and TRX benchmark lattices, have been re-examined and revised. The revisions correct several systematic errors that have previously been ignored or considered insignificant. These systematic errors are discussed in detail. The final corrected values are presented.

  12. Benchmark analyses for BN-600 MOX core with minor actinides

    International Nuclear Information System (INIS)

    Full text: The IAEA has initiated in 1999 a Coordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for calculation of reactivity coefficients in fast reactors aiming at enhancing the utilization of plutonium and minor actinides (MAs). For this purpose, three benchmark models representing different modifications of the BN-600 reactor UOX core have been sequentially established and analyzed, the benchmark specifications being provided by IPPE. The first benchmark model is a hybrid UOX/MOX core, with UOX fuel in the inner core part and MOX fuel in the outer one, the fresh MOX fuel containing depleted uranium and weapons-grade plutonium. The second model is a full MOX core, similar MOX fuel composition being assumed; a sodium plenum being introduced above the core to improve the core safety. The third model is analyzed in the paper. The model represents a similar full MOX core, but with plutonium and MAs from 60 GWd/t LWR spent fuel after 50 years cooling (thus assuming a so-called homogeneous recycling of MAs in a fast system). This option is the most challenging one (compared to those analyzed earlier in the CRP) as concerns the reactor safety since an increased content of MAs, in particular americium, and higher (than Pu239) isotopes of Pu leads to less favourable safety parameters. On the other hand, existing uncertainties in nuclear data for MAs and higher Pu isotopes may lead to relatively high uncertainties in the computation results for the considered model. The benchmark results include core criticality at the beginning and end of the equilibrium fuel cycle, kinetics parameters, spatial distributions of power and reactivity coefficients provided by CRP participants and obtained by employing different computation models and nuclear data. Sensitivity studies were performed at

  13. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  14. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision......-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods...... and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods....

  15. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich; Rolle, Massimo

    2015-01-01

    Although often not considered in solute transport problems, electromigration can strongly affect mass transport processes. The number of reactive transport models that consider electromigration has been growing in recent years, but a direct model intercomparison that specifically focuses on the role of electromigration has not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes. The first benchmark focuses on the 1D transient diffusion of HNO3 (pH = 4) in a NaCl solution into a fixed concentration reservoir, also containing NaCl but with lower HNO3 concentrations (pH = 6). The second benchmark describes the 1D steady-state migration of the sodium isotope 22Na triggered by...
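    The electric coupling at issue enters through the electromigration term of the Nernst-Planck flux; in standard notation (D_i diffusion coefficient, c_i concentration, z_i charge number, φ electrostatic potential, F Faraday constant):

      \mathbf{J}_i \;=\; -\,D_i \nabla c_i \;-\; \frac{z_i F D_i c_i}{R T}\,\nabla \phi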

  16. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic of increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
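    A minimal sketch of the comparison such benchmarking rests on, specific energy use per population equivalent (PE) against a size-class target; the target figures are illustrative assumptions, not the German benchmark values:

      def specific_energy(kwh_per_year, population_equivalent):
          return kwh_per_year / population_equivalent   # kWh/(PE*a)

      targets = {"<10k PE": 45.0, "10k-100k PE": 35.0, ">100k PE": 30.0}
      plant = specific_energy(2.1e6, 60000)             # placeholder plant
      gap = plant - targets["10k-100k PE"]  # positive = optimisation potential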

  17. AGENT code - neutron transport benchmark examples

    International Nuclear Information System (INIS)

    The paper focuses on the description of representative benchmark problems chosen to demonstrate the versatility and accuracy of the AGENT (Arbitrary Geometry Neutron Transport) code. AGENT couples the method of characteristics and R-functions, allowing true modeling of complex geometries. AGENT is optimized for robustness, accuracy, and computational efficiency for 2-D assembly configurations. The robustness of the R-function-based geometry generator is achieved through the hierarchical union of simple primitives into more complex shapes. The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through true geometries. The computational efficiency is maintained through a set of acceleration techniques introduced at all important calculation levels. The selected assembly benchmark problems discussed in this paper are: the complex hexagonal modular high-temperature gas-cooled reactor, the Purdue University reactor, and the well-known C5G7 benchmark model. (author)

  18. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 × 17 pins; "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin by pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparisons of all the results received

  19. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  20. Shielding Integral Benchmark Archive and Database (SINBAD)

    International Nuclear Information System (INIS)

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  1. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  2. Benchmark field study of deep neutron penetration

    Science.gov (United States)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  3. Computational benchmark for deep penetration in iron

    International Nuclear Information System (INIS)

    A benchmark for the calculation of neutron transport through iron is now available, based upon a rigorous Monte Carlo treatment of ENDF/B-IV and ENDF/B-V cross sections. The currents, flux, and dose (from monoenergetic 2, 14, and 40 MeV sources) have been tabulated at various distances through the slab using a standard energy group structure. This tabulation is available in a Los Alamos Scientific Laboratory report. The benchmark is simple to model and should be useful for verifying the adequacy of one-dimensional transport codes and multigroup libraries for iron. This benchmark also provides useful insights regarding neutron penetration through iron and displays differences in fluxes calculated with the ENDF/B-IV and ENDF/B-V data bases.

  4. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP$^2$Bench, a publicly available, language-specific SPARQL performance benchmark. SP$^2$Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP$^2$Bench to existing engines and discuss ...

  5. Rapid Frequency Scan EPR

    OpenAIRE

    Tseitlin, Mark; Rinard, George A.; Quine, Richard W.; Eaton, Sandra S.; Eaton, Gareth R.

    2011-01-01

    In rapid frequency scan EPR with triangular scans, sufficient time must be allowed to insure that the magnetization in the x,y plane decays to baseline at the end of the scan, which typically is about 5 T2 after the spins are excited. To permit relaxation of signals excited toward the extremes of the scan the total scan time required may be much longer than 5 T2. However, with periodic, saw-tooth excitation, the slow-scan EPR spectrum can be recovered by Fourier deconvolution of data recorded...

  6. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable a decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the raw energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible
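    A minimal sketch of the interval idea described above, assuming (hypothetically) that the pollutant load is linearly interpolated between the roughly biweekly laboratory measurements and that each day's load is bounded by its two neighbouring measurements; the function names and numbers are illustrative, not taken from EOS:

        # Daily energy KPI (kWh per kg pollutant load) with an interval that
        # reflects the uncertainty from the sparse laboratory measurements.
        import numpy as np

        def daily_kpi_with_interval(days, energy_kwh, lab_days, lab_load_kg):
            """Return (kpi_estimate, kpi_low, kpi_high) per day."""
            # Point estimate: linearly interpolate the sparse lab measurements.
            load_est = np.interp(days, lab_days, lab_load_kg)
            # Crude interval: bound each day's load by its neighbouring measurements.
            idx = np.searchsorted(lab_days, days).clip(1, len(lab_days) - 1)
            lo = np.minimum(lab_load_kg[idx - 1], lab_load_kg[idx])
            hi = np.maximum(lab_load_kg[idx - 1], lab_load_kg[idx])
            return energy_kwh / load_est, energy_kwh / hi, energy_kwh / lo

        days = np.arange(28)
        energy = np.full(28, 1200.0)                  # daily consumption, kWh
        lab_days = np.array([0, 14, 27])              # laboratory sampling days
        lab_load = np.array([950.0, 1100.0, 1000.0])  # measured load, kg COD/day
        kpi, kpi_lo, kpi_hi = daily_kpi_with_interval(days, energy, lab_days, lab_load)
        print(kpi[7], kpi_lo[7], kpi_hi[7])

    The point of such an interval-based KPI is that a plant manager gets a defensible performance band every day instead of waiting two weeks for the next laboratory sample.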

  7. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
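    The band-rating idea can be illustrated in a few lines; the per-capita thresholds below are invented for demonstration and are not the bands proposed in the paper:

        # Hypothetical band rating for per-capita domestic water use.
        def water_band(litres_per_day: float, occupants: int) -> str:
            per_capita = litres_per_day / occupants
            bands = [(80, "A"), (100, "B"), (120, "C"), (150, "D"), (180, "E")]
            for limit, band in bands:
                if per_capita <= limit:
                    return band
            return "F"

        print(water_band(320, 3))  # about 107 litres/person/day -> "C"

    A household can then move up a band either through more efficient fittings or through behavioural change, which is exactly the dual lever the paper argues existing systems neglect.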

  8. Evaluation of scanning parameters in CT simulation on radiotherapy planning

    Institute of Scientific and Technical Information of China (English)

    李定杰; 刘如; 毛荣虎; 吴慧; 雷宏昌; 王建华

    2012-01-01

    OBJECTIVE: To study the influence of different CT scanning conditions on the CT value and on the delivered monitor units (MU), using heterogeneous equivalent phantoms. METHODS: Two CT bore sizes (80 cm and 70 cm), two geometrical arrangements of the phantom and two scanning voltages (120 kV and 140 kV) were adopted. The CT values of the different combinations were measured and compared, and the corresponding CT-electron density (CT-ED) conversion curves were established. CT images from thirty patients (ten pelvic, ten thoracic and ten head-and-neck cases) were used to create conformal (CRT) and IMRT treatment plans, and the MU deviations of the plans were analyzed. RESULTS: For the small-bore CT, the MU deviation was no more than 0.1% for any change of scanning voltage or phantom arrangement. For the large-bore CT, the change of scanning voltage had no influence on the MU value; the geometrical arrangement of the phantom affected the MU value, but the deviation was less than 0.3%. The deviations were the same for the CRT and IMRT plans. CONCLUSION: For a precise radiotherapy planning system, changes in CT scanning conditions and in the position of the measurement phantom can both introduce errors in the delivered MU. If a large-bore CT is used for simulation, a suitable electron density curve should be established according to its characteristics and applied in treatment planning.

  9. Reactor group constants and benchmark test

    International Nuclear Information System (INIS)

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called benchmark testing. In nuclear calculations, the diffusion and transport codes use group constant libraries which are generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  10. Benchmarking of European power network companies

    International Nuclear Information System (INIS)

    A European benchmark has been conducted among 63 grid companies to obtain insight into the degree of efficiency of these companies and to identify the main cost drivers. The benchmark shows that, based on the full distribution cost, performance differs greatly from company to company. The cost of the worst-performing company is five times higher than that of the best performer. Dutch grid operators turn out to work relatively efficiently compared to other European companies. Consumers benefit from the consequently lower energy bills.

  11. Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD (Shielding Integral Benchmark Archive and Database) is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity. It has been designed to be able to include data from nuclear reactor shielding, fusion blanket and accelerator shielding experiments. (authors)

  12. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  13. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  14. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stays confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much …

  15. Benchmark testing of 233U evaluations

    International Nuclear Information System (INIS)

    In this paper we investigate the adequacy of available 233U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised 233U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of keff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed

  16. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  17. Benchmarking Simulation of Long Term Station Blackout Events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Kyum; Lee, John C. [POSTECH, Pohang (Korea, Republic of); Fynan, Douglas A.; Lee, John C. [Univ. of Michigan, Ann Arbor (United States)

    2013-05-15

    The importance of passive cooling systems has emerged since the SBO events. The turbine-driven auxiliary feedwater (TD-AFW) system is the only passive cooling system for steam generators (SGs) in current PWRs. During SBO events, all alternating current (AC) and direct current (DC) power is interrupted, and the water levels of the steam generators then become high. In this case, the turbine blades could be degraded and could no longer cool down the SGs. To prevent this kind of degradation, an improved TD-AFW system should be installed in current PWRs, especially OPR 1000 plants. A long-term station blackout (LTSBO) scenario based on the improved TD-AFW system has been benchmarked as a reference input file. The following task is a safety analysis to find the important parameters that cause the peak cladding temperature (PCT) to vary. This task has been initiated with the benchmarked input deck applied to the State-of-the-Art Reactor Consequence Analyses (SOARCA) Report. The point of the improved TD-AFW is to control the water level of the SG by using an auxiliary battery charged by a generator connected to the auxiliary turbine. However, this battery could also be disconnected from the generator. To analyze the uncertainties of the failure of the auxiliary battery, a simulation of the time-dependent failure of the TD-AFW has been performed. In addition to the cases simulated in the paper, some valves (e.g., the pressurizer safety valve) that remain available during SBO events could be important parameters for assessing uncertainties in the estimated PCTs. The results for these parameters will be included in a future study, in addition to the results for the leakage of the RCP seals. After the simulation of several transient cases, the alternating conditional expectation (ACE) algorithm will be used to derive functional relationships between the PCT and several system parameters.

  18. Design of a pre-collimator system for neutronics benchmark experiment

    International Nuclear Information System (INIS)

    Benchmark experiments are an important means to inspect the reliability and accuracy of evaluated nuclear data, and the effect/background ratios are important parameters for weighing the quality of experimental data. In order to obtain higher effect/background ratios, a pre-collimator system was designed for the benchmark experiment. This system mainly consists of a pre-collimator and a shadow cone. The MCNP-4C code was used to simulate the background spectra under various conditions; the results show that the pre-collimator system gives a very marked improvement in the effect/background ratios. (authors)

  19. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    International Nuclear Information System (INIS)

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a0 and that full and reduced PIC agree well for values of a0 approaching 4.
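    For readers outside the field: a0 is the normalized laser vector potential, which sets how relativistic the electron quiver motion is. In the convention standard in the LWFA literature for linearly polarized pulses (the record itself does not spell this out),

        a_0 = \frac{e E_0}{m_e c\,\omega_0}
            \simeq 0.85\,\lambda_0[\mu\mathrm{m}]\,\sqrt{I_0\,[10^{18}\,\mathrm{W/cm^2}]},

    so values of a0 approaching 4 correspond to the strongly nonlinear (bubble) regime probed in the benchmark.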

  20. Gaia Benchmark stars and their twins in the Gaia-ESO Survey

    CERN Document Server

    Jofre, Paula

    2015-01-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities on these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  1. IKE contribution to the one-dimensional LWR shielding benchmark of ANS

    Energy Technology Data Exchange (ETDEWEB)

    Al Malah, K.

    1982-04-01

    The IKE computational methodology of solving radiation transport problems is applied to determine the radiation levels at specific locations in a one-dimensional LWR representation. Solutions are submitted for two variations of a PWR problem. They contain detailed descriptions of the approach and the appropriate calculational parameters. The objectives of the benchmark problem are: to provide a documented specification that permits intercomparisons of computational techniques tested with this benchmark, to determine fluence levels at the reactor pressure vessel, to calculate radiation-induced changes in the mechanical properties, and to evaluate the adequacy of specific cross-section data sets.

  2. BN-600 fully MOX fuelled core benchmark analyses (Phase 4). Draft synthesis report - Revision 1

    International Nuclear Information System (INIS)

    A benchmark analysis of a BN-600 fully mixed oxide (MOX) fuelled core design with sodium plenum above the core has been performed as an extension to the study of the BN-600 hybrid uranium oxide (UOX)/MOX fuelled core carried out during 1999-2001. This work was carried out within the IAEA sponsored Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. This benchmark analysis retains the general objective of the CRP, which is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides. The scope of the benchmark is to reduce the uncertainties of safety relevant reactor physics parameter calculations of MOX fuelled fast reactors and hence to validate and improve data and methods involved in such analyses. In previous benchmark analyses of the BN-600 hybrid core, which closely conforms to a traditional configuration, the comparative analyses showed that sufficient accuracy is achieved using the diffusion theory approximation, widely applied in fast reactor physics calculations. With the purpose of investigating a core configuration of full MOX fuel loading, a core model of the BN-600 type reactor, designed to reduce the sodium void effect by installing a sodium plenum above the core, was newly defined for the next benchmark study. The specifications and input data for the benchmark neutronics calculations were prepared by EPPE (Russia). The specifications given for the benchmark describe only a preliminary core model variant and represent only one conceptual approach to BN-600 full MOX core designs. The organizations participating in the BN-600 fully MOX fuelled core benchmark analysis are: ANL from the USA, CEA and SA from the EU (France and the UK, respectively), CIAE from China, FZK/IKET from Germany, IGCAR from India, JNC from Japan, KAERI

  3. Lung PET scan

    Science.gov (United States)

    Chest PET scan; Lung positron emission tomography; PET - chest; PET - lung; PET - tumor imaging ... A PET scan requires a small amount of tracer. The tracer is given through a vein (IV), usually on ...

  4. RBC nuclear scan

    Science.gov (United States)

    //medlineplus.gov/ency/article/003835.htm: RBC nuclear scan. An RBC nuclear scan uses small amounts of radioactive material to ...

  5. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  6. Atlas of duplex scanning

    International Nuclear Information System (INIS)

    This book presents the first atlas devoted entirely to duplex scanning. It details the uses of this important ''up-and-coming'' diagnostic tool for vascular and general surgeons and radiologists. It covers scanning of the extremities as well as the carotids. Correlative line drawings elaborate and clarify the excellent scan images. Also covered are the principles of duplex scanning of arteries and veins, techniques, and results; pictures of normal anatomy; and venous thromboses, arterial occlusion, true and false aneurysms, and graft stenoses.

  7. Preliminary study of the optimization of abdominal CT scanning parameters on 64-slice spiral CT

    Institute of Scientific and Technical Information of China (English)

    胡敏霞; 赵心明; 宋俊峰; 周纯武; 赵红枫

    2011-01-01

    Objective: To investigate the appropriate low tube current for abdominal CT on a 64-slice spiral CT. Methods: (1) Phantom study: The phantom Catphan500R was scanned with a fixed 120 kVp and 450, 400, 380, 360, 340, 320, 300 and 280 mA, respectively. Low-contrast objects of 15, 9, 8, 7 and 6 mm diameter with 1% contrast were scanned for evaluating image quality. CT images were graded in terms of low-contrast conspicuity by using a five-point scale. Statistical analyses were performed to determine the appropriate tube current and the interval leading to a qualitative change. (2) Clinical study: 3 groups of 45 patients who had 2 examinations of non-enhanced abdominal CT within 3 months were enrolled. All patients were scanned with 450 mA at the first scanning. For the second scanning, group 1 was scanned with the optimal tube current, group 2 with the optimal tube current plus the interval, and group 3 with the optimal tube current minus the interval. CT images were graded in terms of diagnostic acceptability at three anatomic levels (porta hepatis, pancreas and the upper pole of the kidney), and the image noise of eight organs (abdominal aorta, portal vein, liver, spleen, gallbladder, pancreas, renal cortex, renal medulla) was graded by using a five-point scale. The image quality was compared with the non-parametric rank sum test, and the individual factors of the patients were compared with ANOVA. Results: (1) The optimal tube current and the interval leading to a qualitative change were 340 mA and 40 mA respectively. (2) There were no significant differences in image quality between 340 mA and 450 mA in group 1, or between 380 mA and 450 mA in group 2 (P > 0.05). There was a significant difference in image quality between 300 mA and 450 mA in group 3 (the mean scores for 300 mA were 2.92 ± 0.62, 2.92 ± 0.62, 2.64 ± 0.84, 2.72 ± 0.82, 2.63 ± 0.71, 2.51 ± 0.84, 3.04 ± 0.72, 3.04 ± 0.72, 2.63 ± 0.71, 2.52 ± 0.73, 2.93 ± 0.81 respectively; for 450 mA they were 3.93 ± 0.72, 3.94 ± 0.72
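    A CT-ED conversion curve of the kind established in studies like this one is, in practice, a piecewise-linear lookup from CT number to relative electron density. A minimal sketch; the calibration points below are invented for illustration, not measured values from the paper:

        import numpy as np

        # Hypothetical calibration points of a CT-to-electron-density curve.
        ct_hu  = np.array([-1000.0, -500.0, 0.0, 500.0, 1500.0])  # CT numbers (HU)
        rel_ed = np.array([0.0, 0.5, 1.0, 1.35, 1.85])            # relative electron density

        def hu_to_ed(hu):
            """Piecewise-linear interpolation along the CT-ED curve."""
            return np.interp(hu, ct_hu, rel_ed)

        print(hu_to_ed(120.0))

    The planning system evaluates this lookup voxel by voxel, which is why a curve mismatched to the scanner bore or protocol propagates directly into dose and MU errors.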

  8. Pulmonary ventilation/perfusion scan

    Science.gov (United States)

    V/Q scan; Ventilation/perfusion scan; Lung ventilation/perfusion scan ... A pulmonary ventilation/perfusion scan is actually two tests. They may be done separately or together. During the perfusion scan, a health ...

  9. Analysis of the OECD main steam line break benchmark using ANC-K/MIDAC code

    International Nuclear Information System (INIS)

    A three-dimensional (3D) neutronics and thermal-hydraulics (T/H) coupling code, ANC-K/MIDAC, has been developed. It is the combination of the 3D nodal kinetics code ANC-K and the 3D drift flux thermal hydraulic code MIDAC. In order to verify the adequacy of this code, we have analyzed several international benchmark problems. In this paper, we show the calculation results of the ''OECD Main Steam Line Break Benchmark (MSLB benchmark)'', which poses a typical local power peaking problem, and we calculated the return-to-power scenario of the Phase II problem. The comparison of the results shows very good agreement in the important core parameters between ANC-K/MIDAC and the other participants' codes. (author)

  10. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68. ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  11. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
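    Two of the system-level metrics named above are simple ratios of quantities a facility manager already logs. A sketch with illustrative numbers:

        def air_change_rate(airflow_cfm: float, room_volume_ft3: float) -> float:
            """Air changes per hour (ACH) from supply airflow and room volume."""
            return airflow_cfm * 60.0 / room_volume_ft3

        def fan_efficiency(fan_power_w: float, airflow_cfm: float) -> float:
            """Air-handling power intensity in W per cfm (lower is better)."""
            return fan_power_w / airflow_cfm

        print(air_change_rate(50_000, 12_000))  # 250 ACH
        print(fan_efficiency(45_000, 50_000))   # 0.9 W/cfm

    Comparing such ratios across facilities with the same cleanliness class is what makes the benchmark dataset actionable.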

  12. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  13. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  14. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  15. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter shows considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  16. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 of the Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, March 1989. The SPPS-1 coarse-mesh code has been used for the calculations

  17. Comparative benchmarks of full QCD algorithms

    International Nuclear Information System (INIS)

    We report performance benchmarks for several algorithms that we have used to simulate the Schroedinger functional with two flavors of dynamical quarks. They include hybrid and polynomial hybrid Monte Carlo with preconditioning. An appendix describes a method to deal with autocorrelations for nonlinear functions of primary observables as they are met here due to reweighting. (orig.)

  18. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  19. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    This paper describes the benchmarking of the RAMA Fluence Methodology software, that has been performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04 demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  20. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon their aggregation. The model is tested on seven cases from Japan and Denmark. Japanese...

  1. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  2. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying integral parameters other than keff when trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files

  3. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology proposed is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI
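    A sketch of the general approach, assuming scikit-learn's RandomForestRegressor as the Random Forest implementation; the synthetic records and variable names below stand in for CBECS data and are purely illustrative:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.uniform(1_000, 100_000, n),  # floor area (sq ft)
            rng.integers(1, 500, n),         # number of workers
            rng.integers(1, 400, n),         # number of PCs
            rng.integers(0, 5, n),           # encoded climate zone (categorical)
        ])
        # Synthetic EUI with known dependence on the first three variables.
        eui = 30 + 2e-4 * X[:, 0] + 0.05 * X[:, 1] + 0.04 * X[:, 2] + rng.normal(0, 5, n)

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, eui)
        for name, imp in zip(["floor_area", "workers", "pcs", "climate"],
                             model.feature_importances_):
            print(f"{name}: {imp:.3f}")

    The ranked importances are what single out the handful of variables worth carrying into the customized (tree-based, local) benchmark model.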

  4. Rapid frequency scan EPR.

    Science.gov (United States)

    Tseitlin, Mark; Rinard, George A; Quine, Richard W; Eaton, Sandra S; Eaton, Gareth R

    2011-08-01

    In rapid frequency scan EPR with triangular scans, sufficient time must be allowed to insure that the magnetization in the x, y plane decays to baseline at the end of the scan, which typically is about 5 T2 after the spins are excited. To permit relaxation of signals excited toward the extremes of the scan the total scan time required may be much longer than 5 T2. However, with periodic, saw-tooth excitation, the slow-scan EPR spectrum can be recovered by Fourier deconvolution of data recorded with a total scan period of 5 T2, even if some spins are excited later in the scan. This scan time is similar to polyphase excitation methods. The peak power required for either polyphase excitation or rapid frequency scans is substantially smaller than for pulsed EPR. The use of an arbitrary waveform generator (AWG) and cross loop resonator facilitated implementation of the rapid frequency scan experiments reported here. The use of constant continuous low B1, periodic excitation waveform, and constant external magnetic field is similar to polyphase excitation, but could be implemented without the AWG that is required for polyphase excitation. PMID:21664848
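    The deconvolution step lends itself to a compact numerical illustration. A sketch assuming an idealized circular-convolution model of the periodic scan, with a toy exponential decay standing in for the T2 response; the regularized division is a generic Tikhonov-style guard, not necessarily the authors' exact treatment:

        import numpy as np

        def recover_slow_scan(signal, kernel, eps=1e-3):
            """Recover the slow-scan spectrum by regularized Fourier deconvolution."""
            S, K = np.fft.fft(signal), np.fft.fft(kernel)
            return np.real(np.fft.ifft(S * np.conj(K) / (np.abs(K) ** 2 + eps)))

        n = 1024
        x = np.arange(n)
        spectrum = np.exp(-0.5 * ((x - 512) / 20.0) ** 2)  # "true" slow-scan line
        kernel = np.exp(-x / 50.0)                         # toy free-induction decay
        observed = np.real(np.fft.ifft(np.fft.fft(spectrum) * np.fft.fft(kernel)))
        print(np.allclose(recover_slow_scan(observed, kernel, eps=1e-12),
                          spectrum, atol=1e-6))            # True

    Because the excitation is periodic, the convolution is circular and a single FFT division suffices, which is why the total scan period can be held near 5 T2.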

  5. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    the visible human CT scans from the National Library of Medicine, are essential for producing realistic images. Sets of test cases with systematic and random errors in selected setup parameters and anatomic volumes are suitable for use as standard benchmarks by the radiotherapy community. In addition to serving as an aid to research and development, benchmark images may also be useful for evaluation of commercial systems and as part of a quality assurance program for clinical systems. Test cases and software are available upon request

  6. Line-scanning, stage scanning confocal microscope

    Science.gov (United States)

    Carucci, John A.; Stevenson, Mary; Gareau, Daniel

    2016-03-01

    We created a line-scanning, stage-scanning confocal microscope as part of a new procedure: video assisted micrographic surgery (VAMS). The need for rapid pathological assessment of the tissue on the surface of skin excisions is very large, since there are 3.5 million new skin cancers diagnosed annually in the United States. The new design presented here is a confocal microscope without any scanning optics. Instead, a line is focused in space and the sample, which is flattened, is physically translated such that the line scans across its face in a direction perpendicular to the line itself. The line is 6 mm long and the stage is capable of scanning 50 mm, hence the field of view is quite large. The theoretical diffraction-limited resolution is 0.7 um lateral and 3.7 um axial. However, in this preliminary report, we present initial results that are a factor of 5-7 poorer in resolution. The results are encouraging because they demonstrate that the linear array detector measures sufficient signal from fluorescently labeled tissue and also demonstrate the large field of view achievable with VAMS.

  7. Validation of IRBURN calculation code system through burnup benchmark analysis

    International Nuclear Information System (INIS)

    Assessment of the reactor fuel composition during the irradiation time, fuel management and criticality safety analysis require the utilization of a validated burnup calculation code system. In this work a newly developed burnup calculation code system, IRBURN, is introduced for the estimation and analysis of the fuel burnup in LWR reactors. IRBURN provides the full capabilities of the Monte Carlo neutron and photon transport code MCNP4C as well as the versatile code for calculating the buildup and decay of nuclides in nuclear materials, ORIGEN2.1, along with other data processing and linking subroutines. This code has the capability of using different depletion calculation schemes. The accuracy and precision of the implemented algorithms to estimate the eigenvalue and spent fuel isotope concentrations are demonstrated by validation against reliable benchmark problem analyses. A comparison of IRBURN results with experimental data demonstrates that the code predicts the spent fuel concentrations within 10% accuracy. Furthermore, the standard deviations of the average values for isotopic concentrations decrease considerably when IRBURN data are included, in comparison with the same parameter excluding IRBURN results, except for a few sets of isotopes. The eigenvalue comparison between our results and the benchmark problems shows a good prediction of the k-inf values during the entire burnup history, with a maximum difference of 1% at 100 MWd/kgU.
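    The transport/depletion alternation that a coupled system like IRBURN automates can be caricatured in a few lines: a "transport" step supplies a one-group flux, and a matrix exponential advances the nuclide densities over the burnup step. Everything below (the two-nuclide chain, cross sections, power normalization) is invented for the sketch and bears no relation to the actual MCNP4C/ORIGEN2.1 coupling:

        import numpy as np
        from scipy.linalg import expm

        SIG_A = np.array([600e-24, 80e-24])  # toy absorption cross sections (cm^2)
        YIELD = 0.05                         # toy production yield into nuclide 2

        def transport_flux(n_dens, power):
            """Stand-in for a transport run: flux that keeps 'power' constant."""
            return power / max(n_dens[0] * SIG_A[0], 1e-30)

        def depletion_matrix(flux):
            # d/dt [fuel, poison]: absorption losses plus a toy production channel.
            return np.array([[-SIG_A[0] * flux, 0.0],
                             [YIELD * SIG_A[0] * flux, -SIG_A[1] * flux]])

        n = np.array([1e22, 0.0])   # initial number densities (1/cm^3)
        dt = 86400.0                # one-day burnup steps (s)
        for _ in range(30):
            phi = transport_flux(n, power=1e14)       # "transport" update
            n = expm(depletion_matrix(phi) * dt) @ n  # depletion (Bateman) update
        print(n)

    The choice of how often the flux is refreshed relative to the depletion step is exactly the "depletion calculation scheme" degree of freedom the abstract mentions.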

  8. PWR experimental benchmark analysis using WIMSD and PRIDE codes

    International Nuclear Information System (INIS)

    Highlights: • PWR experimental benchmark calculations were performed using WIMSD and PRIDE codes. • Various models for lattice cell homogenization were used. • Multiplication factors, power distribution and reaction rates were studied. • The effect of cross section libraries on these parameters was analyzed. • The results were compared with experimental and reported results. - Abstract: The PWR experimental benchmark problem defined by ANS was analyzed using WIMSD and PRIDE codes. Different modeling methodologies were used to calculate the infinite and effective multiplication factors. Relative pin power distributions were calculated for infinite lattice and critical core configurations, while reaction ratios were calculated for infinite lattice only. The discrete ordinate method (DSN) and collision probability method (PERSEUS) were used in each calculation. Different WIMSD cross-section libraries based on ENDF/B-VI.8, ENDF/B-VII.0, IAEA, JEF-2.2, JEFF-3.1 and JENDL-3.2 nuclear data files were also employed in the analyses. Comparison was made with experimental data and other reported results in order to find a suitable strategy for PWR analysis

  9. Benchmark exercise on expert judgment techniques in PSA Level 2

    International Nuclear Information System (INIS)

    This article summarizes objectives and aims of the concerted action 'Benchmark Exercise on Expert Judgment Techniques in PSA Level 2' and the results obtained within the project. The project was organized in three phases, namely a survey phase (pre-phase), a first phase devoted to parameter estimation assessment and a second phase devoted to benchmarking expert judgment methods on a scenario development case. The paper is focused on the first phase and on the results obtained by the application of five structured Expert Judgment (EJ) methodologies to the problem at hand. The results of the comparison of EJ methodologies are also provided; they are based on the use of some metrics suitably designed during the project. The context of Phase 2 and the issue to be tackled in this phase are briefly described; since this phase has been carried out only at a preliminary level (mainly after the end of the project), the results obtained are not reported here in detail but are only briefly commented on

  10. Benchmark analyses for BN-600 MOX core with minor actinides

    International Nuclear Information System (INIS)

    In 1999 the IAEA has initiated a Co-ordinated Research Project on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. Three benchmark models representing different modifications of the BN-600 reactor UOX core have been sequentially established and analyzed, including a hybrid UOX/MOX core, a full MOX core with weapons-grade plutonium and a MOX core with plutonium and minor actinides coming from spent LWR fuel. The paper describes studies for the latter MOX core model. The benchmark results include core criticality at the beginning and end of the equilibrium fuel cycle, kinetics parameters, spatial distributions of power and reactivity coefficients obtained by employing different computation tools and nuclear data. Sensitivity studies were performed to better understand in particular the influence of variations in different nuclear data libraries on the computed results. Transient simulations were done to investigate consequences of employing a few different sets of power and reactivity distributions on the system behavior at the initial phase of ULOF. The obtained results are analyzed in the paper. (author)

  11. Benchmark-experiments for Pb and Bi neutron data testing

    International Nuclear Information System (INIS)

    The expedience of accurate estimation of neutron data for Pb and Bi has increased recently in connection with the accelerator-driven system (ADS) projects and the new generation of fast reactors under development, which shall use lead or lead-bismuth coolant. Still, a significant difference (10%) in σtot between various data sets has been observed in the energy range 100 keV - 500 keV. The differences found are associated with the energy range for which experimental information is lacking. The situation with Bi data is no better. In this connection, several benchmarks were assembled at BFS with uranium and plutonium fuel and lead or lead-bismuth coolant. The scope of the investigations included measurements of the spectral indexes, distributions of the fission rates of the main isotopes, small-sample worths and coolant voiding. A special program was connected with minor actinides. The influence of the plutonium isotope composition was investigated at the assemblies with reactor- and weapons-grade Pu. Calculations of the measured parameters were carried out using the most modern versions of nuclear data libraries. All the results of these experiments and their analysis have been prepared for the construction of the benchmarks and are planned as candidates for the international data base IRPhEP. (authors)

  12. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  13. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution of development and the possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of the operator in the modern market economy, and the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  14. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to an initiated governmental effort to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction …

  15. BN-600 hybrid core benchmark analyses (phases 1, 2 and 3) (draft synthesis report)

    International Nuclear Information System (INIS)

    UK, respectively), CIAE from China, IGCAR from India, JNC from Japan, KAERI from Rep. of Korea, IPPE and OKBM from the Russian Federation. The benchmark analyses consisted of three phases during 1999-2001: an RZ homogeneous benchmark (Phase 1), a Hex-Z homogeneous benchmark (Phase 2), and a Hex-Z heterogeneous and burnup benchmark (Phase 3). This report presents the results of benchmark analyses of a hybrid UOX/MOX fuelled core of the BN-600 reactor. The results for several relevant reactivity parameters, obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The contributions of the participants to the benchmark analyses are shown. This report first addresses the benchmark definitions and specifications given for each phase and briefly introduces the basic data, computer codes, and methodologies applied to the benchmark analyses by the various participants. Then, the results obtained by the participants in terms of calculational uncertainty and their effect on the core transient behavior are intercompared. Finally, it addresses some conclusions drawn from the benchmarks

  16. Provenance and depositional environment of epi-shelf lake sediment from Schirmacher Oasis, East Antarctica, vis-à-vis scanning electron microscopy of quartz grain, size distribution and chemical parameters

    Science.gov (United States)

    Shrivastava, Prakash K.; Asthana, Rajesh; Roy, Sandip K.; Swain, Ashit K.; Dharwadkar, Amit

    2012-07-01

    The scientific study of quartz grains is a powerful tool in deciphering the depositional environment and mode of transportation of sediments, and ultimately the origin and classification of sediments. Surface microfeatures, angularity, chemical features, and grain-size analysis of quartz grains collectively reveal the sedimentary and physicochemical processes that acted on the grains during different stages of their geological history. Here, we apply scanning electron microscopy (SEM) to evaluate the sedimentary provenance, modes of transport, weathering characteristics, alteration, and sedimentary environment of selected detrital quartz grains from the peripheral part of two epi-shelf lakes (ESL-1 and ESL-2) of the Schirmacher Oasis of East Antarctica. Our study reveals that different styles of physical weathering, erosive signatures, and chemical precipitation variably affected these quartz grains before final deposition as lake sediments. Statistical analysis (central tendencies, sorting, skewness, and kurtosis) indicates that these quartz-bearing sediments are poorly sorted glaciofluvial sediments. Saltation and suspension seem to have been the two dominant modes of transportation, and chemical analysis of these sediments indicates a gneissic provenance.
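
    The graphic grain-size statistics mentioned above (central tendency, sorting, skewness, kurtosis) are conventionally computed from percentiles of the cumulative grain-size curve using the Folk and Ward (1957) formulas. The Python sketch below is a generic illustration of that standard approach, assuming sieve data in phi units; it is not the authors' actual workflow.

      import numpy as np

      def folk_ward(phi, weight_pct):
          """Folk and Ward (1957) graphic grain-size statistics.

          phi        : grain size of each sieve class (phi units, ascending)
          weight_pct : weight percent retained in each class
          """
          cum = np.cumsum(weight_pct) / np.sum(weight_pct) * 100.0
          # phi value at a given cumulative percentile (linear interpolation)
          p = lambda pct: np.interp(pct, cum, phi)
          p5, p16, p25, p50 = p(5), p(16), p(25), p(50)
          p75, p84, p95 = p(75), p(84), p(95)

          mean     = (p16 + p50 + p84) / 3.0
          sorting  = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
          skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                      + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
          kurtosis = (p95 - p5) / (2.44 * (p75 - p25))
          return mean, sorting, skewness, kurtosis

    On Folk and Ward's verbal scale, a sorting value between 1.0 and 2.0 phi corresponds to the "poorly sorted" class reported for these lake sediments.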

  17. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  18. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and the methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking to a real software project and conclude with a glimpse at the challenges for the future.
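
    To make the analysis step concrete: a common way to flag regressions in repeated benchmark runs is to combine a statistical test with a minimum practical effect size, so that measurement noise across many runs does not trigger false alarms. The Python sketch below uses Welch's t-test for this; the function name, thresholds, and data are illustrative assumptions, not the method of the paper.

      import numpy as np
      from scipy import stats

      def is_regression(old_runs, new_runs, alpha=0.01, min_effect=0.05):
          """Flag a slowdown between two sets of repeated benchmark timings.

          Reports a regression only if the new version is both
          statistically slower (one-sided Welch's t-test) and slower by
          more than `min_effect` (here 5 %) on average.
          """
          old, new = np.asarray(old_runs), np.asarray(new_runs)
          slowdown = new.mean() / old.mean() - 1.0
          t, p_two_sided = stats.ttest_ind(new, old, equal_var=False)
          return slowdown > min_effect and t > 0 and p_two_sided / 2 < alpha

      # Example: a ~7 % slowdown over 20 runs of each version is flagged.
      rng = np.random.default_rng(0)
      print(is_regression(rng.normal(1.00, 0.02, 20),
                          rng.normal(1.07, 0.02, 20)))   # True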

  19. Destination benchmarking: facilities, customer satisfaction and levels of tourist expenditure

    OpenAIRE

    Metin KOZAK

    2000-01-01

    An extensive review of past benchmarking literature showed that there have been a substantial number of both conceptual and empirical attempts to formulate a benchmarking approach, particularly in the manufacturing industry. However, there has been limited investigation and application of benchmarking in tourism and particularly in tourist destinations. The aim of this research is to further develop the concept of benchmarking for application within tourist destinations and to evaluate its...

  20. On the Extrapolation with the Denton Proportional Benchmarking Method

    OpenAIRE

    Marco Marini; Tommaso Di Fonzo

    2012-01-01

    Statistical offices often have recourse to benchmarking methods for compiling quarterly national accounts (QNA). Benchmarking methods employ quarterly indicator series (i) to distribute annual, more reliable series of national accounts and (ii) to extrapolate the most recent quarters not yet covered by annual benchmarks. The Proportional First Differences (PFD) benchmarking method proposed by Denton (1971) is a widely used solution for distribution, but in extrapolation it may suffer when the...
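
    For readers unfamiliar with the method: Denton's PFD solution chooses quarterly values x_t that sum to the annual benchmarks while keeping the benchmark-to-indicator ratio x_t/i_t as smooth as possible, i.e. it minimises the sum of squared first differences of that ratio. The Python sketch below solves this textbook formulation through its KKT system; the function name and interface are ours, and quarters beyond the last benchmarked year are extrapolated simply by leaving them unconstrained.

      import numpy as np

      def denton_pfd(indicator, annual, qpy=4):
          """Denton (1971) proportional first differences benchmarking.

          indicator : quarterly indicator series (may extend past the
                      last benchmarked year, for extrapolation)
          annual    : annual benchmark totals, one per complete year
          Minimises sum_t (x_t/i_t - x_{t-1}/i_{t-1})^2 subject to the
          quarters of each benchmarked year summing to its annual value.
          """
          i = np.asarray(indicator, float)
          a = np.asarray(annual, float)
          T, Y = i.size, a.size
          assert T >= Y * qpy

          D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]   # first differences
          DW = D @ np.diag(1.0 / i)                  # ...of the BI ratio
          Q = DW.T @ DW

          C = np.zeros((Y, T))                       # annual aggregation
          for y in range(Y):
              C[y, y * qpy:(y + 1) * qpy] = 1.0

          # Equality-constrained least squares via the KKT system
          K = np.block([[2 * Q, C.T], [C, np.zeros((Y, Y))]])
          rhs = np.concatenate([np.zeros(T), a])
          return np.linalg.solve(K, rhs)[:T]

      # Two benchmarked years plus two quarters to extrapolate
      ind = [98, 100, 102, 104, 107, 110, 112, 115, 118, 121]
      x = denton_pfd(ind, [420.0, 460.0])
      print(x[:4].sum(), x[4:8].sum())   # 420.0 460.0 (benchmarks hit)

    In this formulation the extrapolated quarters end up carrying the last benchmarked quarter's benchmark-to-indicator ratio forward, which is precisely the extrapolation behaviour the paper examines.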