WorldWideScience

Sample records for benchmark parameter scan

  1. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H→AA→4τ in vector boson fusion with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rottlaender, Iris

    2008-08-15

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H₁→A₁A₁→4τ→4μ+8ν decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment, which is of crucial relevance for vector boson fusion searches, is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)

  2. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best-fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameter scanning in TI media.

  3. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-03-21

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.
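
    Records 2 and 3 share the same scanning idea: the traveltime is expanded as a Taylor-type series in η whose coefficients are precomputed once, so scanning over trial η values needs no repeated modeling, and a Shanks transform stabilizes the truncated series. The sketch below illustrates that workflow in Python; the coefficient fields, picked traveltimes, and the η range are hypothetical stand-ins, not values from either paper.

    ```python
    import numpy as np

    def shanks(s0, s1, s2):
        """Shanks transform of three consecutive partial sums.

        Accelerates convergence of the truncated Taylor series in eta."""
        denom = s2 - 2.0 * s1 + s0
        safe = np.where(np.abs(denom) < 1e-12, 1.0, denom)
        return np.where(np.abs(denom) < 1e-12, s1, (s2 * s0 - s1 * s1) / safe)

    def traveltime(eta, t0, t1, t2):
        """T(eta) ~ t0 + t1*eta + t2*eta^2, Shanks-stabilized."""
        s0 = t0
        s1 = s0 + t1 * eta
        s2 = s1 + t2 * eta ** 2
        return shanks(s0, s1, s2)

    # Hypothetical precomputed coefficient fields (one value per receiver);
    # in the papers these come from solving linear PDEs in the elliptic
    # background, independently of eta.
    t0 = np.array([1.00, 1.10, 1.25])        # background traveltimes, s
    t1 = np.array([0.08, 0.09, 0.11])        # first-order eta coefficients
    t2 = np.array([-0.010, -0.012, -0.015])  # second-order eta coefficients
    observed = np.array([1.012, 1.114, 1.267])  # picked traveltimes, s

    # Scan: since the coefficients do not depend on eta, each trial value
    # costs only an evaluation of the expansion, not a new eikonal solve.
    etas = np.linspace(0.0, 0.3, 61)
    misfit = [np.sum((traveltime(e, t0, t1, t2) - observed) ** 2) for e in etas]
    print("best-fit eta:", etas[int(np.argmin(misfit))])
    ```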

  4. T3PS: Tool for Parallel Processing in Parameter Scans

    CERN Document Server

    Maurer, Vinzenz

    2015-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy to read and write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include: grid scan, random scan, Markov Chain Monte Carlo, numerical optimization. Several example parameter scans are shown and compared with results in the literature.
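
    As a rough illustration of the grid-scan strategy that T3PS automates, the hedged sketch below distributes a toy parameter grid over worker processes and writes plain-text rows readable by most plotting software. The model function, parameter names, and ranges are invented for illustration; T3PS's own definition-file format and command-line interface are not reproduced here.

    ```python
    import itertools
    from multiprocessing import Pool

    # Hypothetical stand-in for the user's compute code; a real scan would
    # instead invoke the external program named in the scan definition.
    def observable(point):
        m0, tan_beta = point
        return m0, tan_beta, m0 ** 2 / tan_beta  # toy derived quantity

    # Parameter ranges, enumerated as a grid scan would.
    m0_values = [100.0 + 10.0 * i for i in range(21)]
    tan_beta_values = [2.0 + 0.5 * i for i in range(17)]

    if __name__ == "__main__":
        grid = itertools.product(m0_values, tan_beta_values)
        with Pool() as pool:                  # spread over CPU cores
            rows = pool.map(observable, grid)
        with open("scan.dat", "w") as f:      # plain-text, plot-friendly
            for row in rows:
                f.write(" ".join(f"{x:.6g}" for x in row) + "\n")
    ```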

  5. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. …

  6. Optimal z-axis scanning parameters for gynecologic cytology specimens

    OpenAIRE

    Amber D Donnelly; Mukherjee, Maheswari S.; Lyden, Elizabeth R.; Bridge, Julia A.; Subodh M Lele; Najia Wright; Mary F McGaughey; Culberson, Alicia M.; Adam J. Horn; Whitney R Wedel; Stanley J Radio

    2013-01-01

    Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of the focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while m...

  7. Clusters as benchmarks for measuring fundamental stellar parameters

    CERN Document Server

    Bell, Cameron P M

    2016-01-01

    In this contribution I will discuss fundamental stellar parameters as determined from young star clusters; specifically those with ages less than or approximately equal to that of the Pleiades. I will focus primarily on the use of stellar evolutionary models to determine the ages and masses of stars, as well as discuss the limitations of such models using a combination of both young clusters and eclipsing binary systems. In addition, I will also highlight a few interesting recent results from large on-going spectroscopic surveys (specifically Gaia-ESO and APOGEE/IN-SYNC) which are continuing to challenge our understanding of the formation and early evolutionary stages of young clusters.

  8. Optimization of Voxelization Parameters in Geant4 Tracking and Improvement of the Shooter Benchmarking Program

    CERN Document Server

    Siegel, Zachary

    2013-01-01

    The geometry-based tracking of the ubiquitous particle physics simulation toolkit Geant4 utilizes the idea of voxels, which effectively partition regions into multi-dimensional slices that can decrease simulation time. The extent of voxelization and the size of the voxels are determined by a set of parameters which, until now, defaulted to arbitrary numbers. In this report I document how I tested different values for these parameters and determined which values should be the default. I modified the existing G01 Geant4 example program to get an initial look at how the performance depended on the parameters. Then I modified the Shooter benchmark program, which lacks extraneous physics processes, to collect more refined data and to provide a tool for future testers to perform comprehensive benchmarks. To this end, I created a new geometry, added features to aid in testing over ranges of parameters, and set up the default tests to provide a good sampling of different simulation scenarios.

  9. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    … to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared. … gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths …

  10. Multipinhole SPECT helical scan parameters and imaging volume

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao [Department of Nuclear Medicine, State University of New York at Buffalo, Buffalo, New York 14214 (United States); Wei, Qingyang; Dai, Tiantian; Ma, Tianyu [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Lecomte, Roger [Department of Nuclear Medicine and Radiobiology, Sherbrooke Molecular Imaging Center, Université de Sherbrooke, Sherbrooke, Quebec J1H 5N4 (Canada)

    2015-11-15

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
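
    To make the step-size prescription concrete, the sketch below computes Nyquist-style reference steps from an assumed system resolution and EFOV radius, then applies the halve-axial/double-angular adjustment reported above. The resolution and radius values are illustrative assumptions, not the paper's system parameters.

    ```python
    import math

    # Hypothetical system values for illustration; the paper derives its
    # reference steps from the estimated SPECT resolution and Nyquist sampling.
    resolution_mm = 2.0     # estimated reconstructed resolution
    fov_radius_mm = 30.0    # transverse effective-FOV radius

    # Nyquist-style reference steps: sample at half the resolution.
    axial_step_nyq = resolution_mm / 2.0                           # mm
    angular_step_nyq = math.degrees((resolution_mm / 2.0) / fov_radius_mm)

    # Empirical optimum reported above: halve the axial step, double the angular.
    axial_step = 0.5 * axial_step_nyq
    angular_step = 2.0 * angular_step_nyq
    print(f"axial step {axial_step:.2f} mm, angular step {angular_step:.1f} deg")
    ```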

  11. Simultaneous Thermodynamic and Kinetic Parameters Determination Using Differential Scanning Calorimetry

    Directory of Open Access Journals (Sweden)

    Nader Frikha

    2011-01-01

    Problem statement: The determination of reaction kinetics is of major importance, for industrial reactor optimization as well as for environmental reasons and energy limitations. Although calorimetry is often used for the determination of thermodynamic parameters alone, the question that arises is: how can Differential Scanning Calorimetry be applied to the determination of kinetic parameters? The objective of this study is to propose an original methodology for the simultaneous determination of thermodynamic and kinetic parameters, using a laboratory-scale Differential Scanning Calorimeter (DSC). The method is applied to the dichromate-catalysed hydrogen peroxide decomposition. Approach: The methodology is based on experiments carried out with a Differential Scanning Calorimeter. The interest of the proposed approach is that it requires very small quantities of reactants (about a few grams). The difficulty lies in the fact that, using such microcalorimeters, the reactant temperature cannot be measured directly, so a particular calibration procedure has to be developed to determine the media temperature in an indirect way. The proposed methodology for the determination of kinetic parameters is based on resolution of the coupled heat and mass balances. Results: A complete kinetic law is proposed. The Arrhenius parameters are determined as frequency factor k₀ = 1.39×10⁹ s⁻¹ and activation energy E = 54.9 kJ mol⁻¹. The measured enthalpy of reaction is ΔrH = −94 kJ mol⁻¹. Conclusion: The comparison of the results obtained by this methodology with those obtained using conventional laboratory-scale reactor calorimetry for the kinetics determination shows that this new approach is very relevant.
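
    With the reported Arrhenius parameters, the rate constant at any temperature follows directly from k(T) = k₀·exp(−E/RT). A minimal check using the values quoted above; the sample temperatures are chosen arbitrarily:

    ```python
    import math

    R = 8.314     # gas constant, J mol^-1 K^-1
    k0 = 1.39e9   # s^-1, frequency factor reported above
    E = 54.9e3    # J mol^-1, activation energy reported above

    def k(T):
        """Arrhenius rate constant at absolute temperature T (K)."""
        return k0 * math.exp(-E / (R * T))

    for T in (298.15, 313.15, 333.15):
        print(f"T = {T:.2f} K  ->  k = {k(T):.3e} s^-1")
    ```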

  12. Effects of cross sections library parameters on the OECD/NEA Oskarshamn-2 benchmark solution

    International Nuclear Information System (INIS)

    Highlights: • A 3D NK–TH model was developed using RELAP5-3D© for studying BWR instability events. • A cross section library was generated using the available CASMO format data. • A tool to evaluate reactor stability parameters was developed and validated. • The effect of some neutronic parameters on reactor stability was investigated. • The Oskarshamn-2 1999 event stability parameters were properly reproduced. - Abstract: The OECD/NEA proposes a new international benchmark based on data collected during an instability transient that occurred at the Oskarshamn-2 NPP. This benchmark is aimed at testing coupled 3D Neutron Kinetic–Thermal Hydraulic (3D NK–TH) codes on challenging situations. The ENEA “Casaccia” Research Center is participating in this benchmark, developing a computational model using the RELAP5-3D© code. The 3D NK model has already been developed from the cross sections dataset calculated by OKG, the Oskarshamn-2 licensee, through the CASMO lattice code. In order to use this neutron cross sections database in RELAP5-3D©, an n-dimensional polynomial data fit and the calculation of base cross sections values are required. An ad-hoc tool, named PROMETHEUS, has been developed for automatically generating RELAP5-3D©-compatible cross sections libraries. This tool allows easy visualization of the complex structure of the neutronic datasets; moreover, it is exploited for deriving the different cross sections libraries needed to evaluate neutronic parameter effects on the reactor instability prediction. Thus, the effects of the fuel temperature and control rod histories, of the discontinuity factors (averaged/not averaged), and of the neutron poisons have been assessed. A ranking table has been produced, demonstrating the relevance of the not-averaged discontinuity factors and of the on-transient neutron poison calculations for the correct prediction of the Oskarshamn-2 event.

  13. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  14. Benchmark experiment for physics parameters of metallic-fueled LMFBR at FCA

    Energy Technology Data Exchange (ETDEWEB)

    Iijima, S.; Oigawa, H.; Sakurai, T.; Nemoto, T.; Okajima, S. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-09-01

    The calculated prediction of reactor physics parameters in a metallic-fueled LMFBR was tested using benchmark experiments performed at FCA. The reactivity feedback parameters such as sodium void worth, Doppler reactivity worth and the ²³⁸U-capture-to-²³⁹Pu-fission ratio have been measured. The fuel expansion reactivity has also been measured. A direct comparison with the results from a similar oxide fuel assembly was made. Analysis was done with the JENDL-2 cross section library and JENDL-3.2. Prediction of reactor physics parameters with JENDL-3.2 in the metallic-fueled core agreed reasonably well with the measured values and showed similar trends to the results in the oxide fuel core. (author)

  15. Optimal z-axis scanning parameters for gynecologic cytology specimens

    Directory of Open Access Journals (Sweden)

    Amber D Donnelly

    2013-01-01

    Background: The use of virtual microscopy (VM) in clinical cytology has been limited due to the inability to focus through three-dimensional (3D) cell clusters with a single focal plane (2D images). Limited information exists regarding the optimal scanning parameters for 3D scanning. Aims: The purpose of this study was to determine the optimal number of focal plane levels and the optimal scanning interval to digitize gynecological (GYN) specimens prepared on SurePath™ glass slides while maintaining a manageable file size. Subjects and Methods: The iScan Coreo Au scanner (Ventana, AZ, USA) was used to digitize 192 SurePath™ glass slides at three focal plane levels at a 1 μm interval. The digitized virtual images (VI) were annotated using BioImagene's Image Viewer. Five participants interpreted the VI and recorded the focal plane level at which they felt confident, and later interpreted the corresponding glass slide specimens using light microscopy (LM). The participants completed a survey about their experiences. Inter-rater agreement and concordance between the VI and the glass slide specimens were evaluated. Results: This study determined an overall high intra-rater diagnostic concordance between glass and VI (89–97%); however, the inter-rater agreement for all cases was higher for LM (94%) compared with VM (82%). Survey results indicate participants found low-grade dysplasia and koilocytes easy to diagnose using three focal plane levels, the image enhancement tool was useful, and focusing through the cells helped with interpretation; however, the participants found VI with hyperchromatic crowded groups challenging to interpret. Participants reported they prefer using LM over VM. This study supports using three focal plane levels and a 1 μm interval to expand the use of VM in GYN cytology. Conclusion: Future improvements in technology and appropriate training should make this format a more preferable and practical option in clinical cytology.

  16. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants, except one, predict a too fast quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. Adopted criteria for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds …

  17. Effects of cross sections libraries parameters on the OECD/NEA Oskarshamn-2 benchmark solution

    International Nuclear Information System (INIS)

    The OECD/NEA proposes a new international benchmark based on the data collected from an instability transient that occurred at the Oskarshamn-2 NPP, with the aim to test coupled 3D Neutron Kinetic/Thermal Hydraulic codes on challenging situations. The ENEA 'Casaccia' Research Center is participating in this benchmark, developing a computational model using the RELAP5-3D code. The 3D NK model was developed starting from the cross sections datasets calculated by OKG, the Oskarshamn-2 licensee, using the CASMO lattice code. Integration of the neutron cross sections database in RELAP5-3D required data fitting by n-dimensional polynomials and calculation of the various polynomial coefficients and of the base cross sections values. An ad-hoc tool named PROMETHEUS has been developed to automatically generate the RELAP5-3D-compatible cross sections libraries. Thanks to this software it has been easily possible to visualize the complex structure of the neutronic data sets and to derive different cross sections libraries for evaluating the effects of some neutronic parameters on the prediction of the reactor instability. Thus, the effects of the fuel temperature and control rod history, of the discontinuity factors (averaged/not averaged), and of the neutron poisons have been assessed. A ranking table has been produced, demonstrating the relevance of the not-averaged discontinuity factors and of the on-transient neutron poison calculations for the correct prediction of the Oskarshamn-2 event. (author)

  18. Adaptive Matching of the Scanning Aperture of the Environment Parameter

    Science.gov (United States)

    Choni, Yu. I.; Yunusov, N. N.

    2016-04-01

    We analyze a matching system for the scanning aperture antenna radiating through a layer with unpredictably changing parameters. Improved matching has been achieved by adaptive motion of a dielectric plate in the gap between the aperture and the radome. The system is described within the framework of an infinite layered structure. The validity of the model has been confirmed by numerical simulation using CST Microwave Studio software and by an experiment. It is shown that the reflection coefficient at the input of some types of a matching device, which is due to the deviation of the load impedance from the nominal value, is determined by a compact and versatile formula. The potential efficiency of the proposed matching system is shown by a specific example, and its dependence on the choice of the starting position of the dielectric plate is demonstrated.

  19. T3PS v1.0: Tool for Parallel Processing in Parameter Scans

    Science.gov (United States)

    Maurer, Vinzenz

    2016-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy to read and write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include: grid scan, random scan, Markov Chain Monte Carlo, numerical optimization. Several example parameter scans are shown and compared with results in the literature.

  20. Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field

    Directory of Open Access Journals (Sweden)

    Hannu Hyyppä

    2012-09-01

    The performance of various mobile laser scanning systems was tested on an established urban test field. The test was connected to the European Spatial Data Research (EuroSDR) project “Mobile Mapping—Road Environment Mapping Using Mobile Laser Scanning”. Several commercial and research systems collected laser point cloud data on the same test field. The system comparisons focused on planimetric and elevation errors using a filtered digital elevation model, poles, and building corners as the reference objects. The results revealed the high quality of the point clouds generated by all of the tested systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. The planimetric errors increased as a function of range, but moderately so if the system was properly calibrated. The main focus of mobile laser scanning development in the near future should be on the improvement of the trajectory solution, especially under non-ideal conditions, using improvements in both hardware and software. Test fields are relatively easy to implement in built environments and they are feasible for verifying and comparing the performance of different systems and also for improving system calibration to achieve optimum quality.

  1. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is the definition of the cross section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes and experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  2. Parameter estimation for slit-type scanning sensors

    Science.gov (United States)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  3. Using synthetic peptides to benchmark peptide identification software and search parameters for MS/MS data analysis

    Directory of Open Access Journals (Sweden)

    Andreas Quandt

    2014-12-01

    Tandem mass spectrometry and sequence database searching are widely used in proteomics to identify peptides in complex mixtures. Here we present a benchmark study in which a pool of 20,103 synthetic peptides was measured and the resulting data set was analyzed using around 1800 different software and parameter set combinations. The results indicate a strong relationship between the performance of an analysis workflow and the applied parameter settings. We present and discuss strategies to optimize parameter settings in order to significantly increase the number of correctly assigned fragment ion spectra and to make the analysis method robust.

  4. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Wilm, Jakob; Pedersen, David Bue;

    2016-01-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative m...

  5. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Wilm, Jakob; Pedersen, David Bue;

    2016-01-01

    ...measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure...

  6. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander;

    2016-01-01

    This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed, and recommended options for the control strategy are summarized. This ensures result comparability as well as stable test conditions. E.g., the stack temperature fluctuation is minimized to about 1 °C. The experiments demonstrate that reactant pressures differ by up to 12 kPa if pressure control positions are varied, resulting in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated a comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher...

  7. Estimation of forest parameters using airborne laser scanning data

    Directory of Open Access Journals (Sweden)

    J. Cohen

    2015-12-01

    Methods for the estimation of forest characteristics by airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC) describing the forest properties can be used in forest, construction and industry applications, as well as research and decision making. The National Land Survey has been collecting ALS data from Finland since 2008 to generate a nationwide high resolution digital elevation model. Although this data has been collected in leaf-off conditions, it still has the potential to be utilized in forest mapping. A method where this data is used for the estimation of CC and TH in the boreal forest region is presented in this paper. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS based CC and TH maps were generally in a good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation in CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated, enabling fast generation of forest maps for different areas.

  8. Estimation of forest parameters using airborne laser scanning data

    Science.gov (United States)

    Cohen, J.

    2015-12-01

    Methods for the estimation of forest characteristics by airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC) describing the forest properties can be used in forest, construction and industry applications, as well as research and decision making. The National Land Survey has been collecting ALS data from Finland since 2008 to generate a nationwide high resolution digital elevation model. Although this data has been collected in leaf-off conditions, it still has the potential to be utilized in forest mapping. A method where this data is used for the estimation of CC and TH in the boreal forest region is presented in this paper. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS based CC and TH maps were generally in a good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation in CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated enabling fast generation of forest maps for different areas.
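
    A minimal sketch of how canopy metrics of this kind are commonly derived from a normalized ALS point cloud: grid the points, take the maximum height-above-ground per cell as TH, and the fraction of echoes above a canopy threshold as CC. The grid size, threshold, and synthetic points are assumptions; the exact estimators used in the two records above are not given in their abstracts.

    ```python
    import numpy as np

    # Hypothetical normalized point cloud: columns x, y, height above ground (m).
    pts = np.random.default_rng(0).uniform([0, 0, 0], [100, 100, 25], (4000, 3))

    cell = 10.0        # grid cell size, m (assumed)
    height_thr = 2.0   # echoes above this height count as canopy (assumed)
    ix = (pts[:, 0] // cell).astype(int)
    iy = (pts[:, 1] // cell).astype(int)

    nx, ny = ix.max() + 1, iy.max() + 1
    th = np.zeros((nx, ny))   # tree height: max normalized height per cell
    cc = np.zeros((nx, ny))   # canopy closure: canopy-echo fraction per cell
    for i in range(nx):
        for j in range(ny):
            z = pts[(ix == i) & (iy == j), 2]
            if z.size:
                th[i, j] = z.max()
                cc[i, j] = np.mean(z > height_thr)
    print(th.max(), cc.mean())
    ```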

  9. FAITH: Scanning of Rich Web Applications for Parameter Tampering Vulnerabilities

    CERN Document Server

    Fung, Adonis P H; Wong, T Y

    2012-01-01

    Modern HTML forms are designed to generate form controls dynamically and submit over AJAX as a result of recent advances in JavaScript programming techniques. Existing scanners are constrained by interacting only with traditional forms, and vulnerabilities are often left undetected even after scrutiny. In this paper, we overcome a number of client-side challenges that used to make automated fuzzing of form submissions difficult and unfaithful. We build FAITH, a pragmatic scanner for uncovering parameter tampering vulnerabilities in real-world rich web applications. It is the first scanner that enables fuzzing in most kinds of form submissions while faithfully preserving the required user actions, HTML 5, AJAX, anti-CSRF tokens and dynamic form updates. The importance of this work is demonstrated by the severe vulnerabilities uncovered, including a way to bypass the most-trusted One-Time Password (OTP) in one of the largest multinational banks. These vulnerabilities cannot be detected by existing scanners.

  10. Parameter scan for the CLIC Damping Rings under the influence of intrabeam scattering

    CERN Document Server

    Antoniou, F; Papaphilippou, Y; Vivoli, A

    2010-01-01

    Due to the high bunch density, the output emittances of the CLIC Damping Rings (DR) are strongly dominated by the effect of Intrabeam Scattering (IBS). In an attempt to optimize the ring design, the benchmarking of the multiparticle tracking code SIRE against the classical IBS formalisms and approximations is first considered. The scaling of the steady-state emittances and IBS growth rates is also studied with respect to several ring parameters including energy, bunch charge and wiggler characteristics.

  11. Combining Total Monte Carlo and benchmarks for nuclear data uncertainty propagation on an LFR's safety parameters

    CERN Document Server

    Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

    2013-01-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of random Pu-239 ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably from 748 to 443 pcm by applying more rigid acceptance criteria for accepting random files.
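
    The accept/reject step can be pictured as follows: each random nuclear data file yields a keff for the Jezebel benchmark, a chi-square against the benchmark value is computed, and files exceeding a threshold are discarded, which narrows the keff distribution. The sketch below uses synthetic keff values and an illustrative benchmark uncertainty, not the study's actual data behind the 748 → 443 pcm reduction.

    ```python
    import numpy as np

    # Hypothetical inputs: keff from each random Pu-239 library run on the
    # Jezebel benchmark, plus the benchmark value and its uncertainty.
    keff_random = np.random.default_rng(1).normal(1.0000, 0.0075, 500)
    keff_bench, sigma_bench = 1.0000, 0.0020   # illustrative numbers only

    chi2 = ((keff_random - keff_bench) / sigma_bench) ** 2

    for chi2_max in (4.0, 2.0, 1.0):           # progressively stricter criteria
        accepted = keff_random[chi2 <= chi2_max]
        spread_pcm = accepted.std() * 1e5      # spread of accepted files, pcm
        print(f"chi2 <= {chi2_max}: {accepted.size} files, "
              f"keff spread {spread_pcm:.0f} pcm")
    ```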

  12. Benchmarking environmental and operational parameters through eco-efficiency criteria for dairy farms.

    Science.gov (United States)

    Iribarren, Diego; Hospido, Almudena; Moreira, María Teresa; Feijoo, Gumersindo

    2011-04-15

    Life Cycle Assessment (LCA) is often used for the environmental evaluation of agri-food systems due to its holistic perspective. In particular, the assessment of milk production at farm level requires the evaluation of multiple dairy farms to guarantee the representativeness of the study when a regional perspective is adopted. This article shows the joint implementation of LCA and Data Envelopment Analysis (DEA) in order to avoid the formulation of an average farm, therefore preventing the standard deviations associated with the use of average inventory data while attaining the characterization and benchmarking of the operational and environmental performance of dairy farms. Within this framework, 72 farms located in Galicia (NW Spain) were subjected to an LCA+DEA study which identified those farms with an efficient operation. Furthermore, target input consumption levels were benchmarked for each inefficient farm, and the corresponding target environmental impacts were calculated so that eco-efficiency criteria were verified. Thus, average reductions of up to 38% were found for input consumption levels, leading to impact reductions above 20% for every environmental impact category. Finally, the economic savings arising from efficient farming practices were also estimated. Economic savings of up to €0.13 per liter of raw milk were calculated, which means extra profits of up to 40% of the final raw milk price.
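
    The DEA side of an LCA+DEA study scores each farm with an efficiency between 0 and 1 by solving a small linear program per farm. Below is a hedged sketch of the input-oriented CCR model with invented data for five farms and a single output; the study's actual inventory (72 farms, multiple inputs, outputs and impact categories) is of course richer.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical farm data: three inputs (feed t, energy MWh, land ha) and
    # one output (milk kL) for five dairy farms.
    X = np.array([[120, 35, 40], [100, 30, 38], [140, 50, 55],
                  [ 90, 28, 30], [130, 45, 60]], float).T   # inputs x farms
    Y = np.array([[480, 450, 520, 430, 470]], float)        # outputs x farms

    def ccr_efficiency(j):
        """Input-oriented CCR efficiency of farm j (1.0 = efficient)."""
        n = X.shape[1]
        c = np.r_[1.0, np.zeros(n)]                         # minimize theta
        A_ub = np.vstack([np.c_[-X[:, j], X],               # X lam <= theta x_j
                          np.c_[np.zeros(Y.shape[0]), -Y]]) # Y lam >= y_j
        b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return res.x[0]

    for j in range(X.shape[1]):
        print(f"farm {j}: efficiency {ccr_efficiency(j):.3f}")
    ```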

  13. Benchmarking a new closed-form thermal analysis technique against a traditional lumped parameter, finite-difference method

    Energy Technology Data Exchange (ETDEWEB)

    Huff, K. D.; Bauer, T. H. (Nuclear Engineering Division)

    2012-08-20

    A benchmarking effort was conducted to determine the accuracy of a new analytic generic geology thermal repository model developed at LLNL relative to a more traditional, numerical, lumped-parameter technique. The fast-running analytical thermal transport model assumes uniform thermal properties throughout a homogeneous storage medium. Arrays of time-dependent heat sources are included geometrically as arrays of line segments and points. The solver uses a source-based linear superposition of closed-form analytical functions from each contributing point or line to arrive at an estimate of the thermal evolution of a generic geologic repository. Temperature rise throughout the storage medium is computed as a linear superposition of temperature rises. It is modeled using the MathCAD mathematical engine and is parameterized to allow myriad gridded repository geometries and geologic characteristics [4]. It was anticipated that the accuracy and utility of the temperature field calculated with the LLNL analytical model would provide an accurate 'bird's-eye' view in regions that are many tunnel radii away from actual storage units; i.e., at distances where tunnels and individual storage units could realistically be approximated as physical lines or points. However, geometrically explicit storage units, waste packages, tunnel walls and close-in rock are not included in the MathCAD model. The present benchmarking effort therefore focuses on the ability of the analytical model to accurately represent the close-in temperature field. Specifically, close-in temperatures computed with the LLNL MathCAD model were benchmarked against temperatures computed using a geometrically explicit lumped-parameter repository thermal modeling technique developed over several years at ANL using the SINDA/G thermal modeling code [5]. Application of this numerical modeling technique to underground storage of heat-generating nuclear waste streams within the proposed YMR Site has been widely...
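
    The analytical model's core operation is linear superposition of closed-form Green's functions. As a simplified sketch (instantaneous point sources in an infinite homogeneous medium, rather than the time-dependent line and point sources of the LLNL model), with all material properties, source strengths, and positions chosen purely for illustration:

    ```python
    import numpy as np

    # Instantaneous point-source Green's function in an infinite homogeneous
    # medium; the analytic model superposes many such closed-form terms.
    def dT_point(q, r, t, k=2.5, rho_c=2.2e6):
        """Temperature rise (K) at distance r (m), time t (s) after release
        of energy q (J); k in W/m/K, rho_c in J/m^3/K (assumed values)."""
        alpha = k / rho_c  # thermal diffusivity, m^2/s
        return (q / (rho_c * (4.0 * np.pi * alpha * t) ** 1.5)
                * np.exp(-r ** 2 / (4.0 * alpha * t)))

    # Linear superposition over a hypothetical grid of waste-package sources.
    sources = [((x, y, -500.0), 1.0e10) for x in (0, 20, 40) for y in (0, 20)]
    obs = np.array([10.0, 10.0, -500.0])   # observation point, m
    t = 30 * 365.25 * 86400.0              # 30 years, s
    rise = sum(dT_point(q, np.linalg.norm(obs - np.array(p)), t)
               for p, q in sources)
    print(f"temperature rise after 30 years: {rise:.3f} K")
    ```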

  14. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode and its image quality is closely related to the accuracy of the imaging parameters, especially considering the inaccuracy of the real speed of the motion. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocities of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic conception of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least squares criterion is then realized in solving those equations derived from the data processing. The simulation results verify the validity of the proposed scheme.

  15. The prediction of Fe Mössbauer parameters by the density functional theory: a benchmark study.

    Science.gov (United States)

    Bochevarov, Arteum D; Friesner, Richard A; Lippard, Stephen J

    2010-11-01

    We report the performance of eight density functionals (B3LYP, BPW91, OLYP, O3LYP, M06, M06-2X, PBE, and SVWN5) in two Gaussian basis sets (Wachters and Partridge-1 on iron atoms; cc-pVDZ on the rest of the atoms) for the prediction of the isomer shift (IS) and the quadrupole splitting (QS) parameters of Mössbauer spectroscopy. Two sources of geometry (density functional theory-optimized and X-ray) are used. Our data set consists of 31 iron-containing compounds (35 signals), the Mössbauer spectra of which were determined at liquid helium temperature and where the X-ray geometries are known. Our results indicate that the larger and uncontracted Partridge-1 basis set produces slightly more accurate linear correlations of electronic density used for the prediction of IS and noticeably more accurate results for the QS parameter. We confirm and discuss the earlier observation of Noodleman and co-workers that different oxidation states of iron produce different IS calibration lines. The B3LYP and O3LYP functionals have the lowest errors for either IS or QS. BPW91, OLYP, PBE, and M06 have mixed success, whereas SVWN5 and M06-2X demonstrate the worst performance. Finally, our calibrations and conclusions regarding the best functional to compute the Mössbauer characteristics are applied to candidate structures for the peroxo and Q intermediates of the enzyme methane monooxygenase hydroxylase (MMOH), and compared to experimental data in the literature. PMID:21258606

  16. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)], E-mail: bekarkb@ornl.gov; Azmy, Yousry Y. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: yyazmy@ncsu.edu

    2009-04-15

    We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 × 40 × 40, 200 angles) to the finest model (160 × 160 × 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics, for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e., aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many (~25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.

  17. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild;

    2015-01-01

    development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section...... of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors...

  18. Study on the parameters of the scanning system for the 300 keV electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Leo, K. W.; Chulan, R. M., E-mail: leo@nm.gov.my; Hashim, S. A.; Baijan, A. H.; Sabri, R. M.; Mohtar, M.; Glam, H.; Lojius, L.; Zahidee, M.; Azman, A.; Zaid, M. [Malaysian Nuclear Agency, Bangi, 43000 Kajang. Selangor (Malaysia)

    2016-01-22

    This paper describes the method to identify the magnetic coil parameters of the scanning system. This locally designed low energy electron accelerator, with a present energy of 140 keV, will be upgraded to 300 keV. In this accelerator, a scanning system is required to deflect the energetic electron beam across a titanium foil in the vertical and horizontal directions. The excitation current of the magnetic coil is determined by the energy of the electron beam. Therefore, the magnetic coil parameters must be identified to ensure the matching of the beam energy and excitation coil current. As a result, the effective lengths for the X-axis and Y-axis have been found to be 0.1198 m and 0.1134 m, and the required excitation coil currents, which depend on the electron beam energy, have been identified.
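
    Using the reported effective length, the matching of beam energy to deflecting field follows from the small-angle dipole relation θ ≈ eBL/p with relativistic momentum. The sketch below computes the required field for a 300 keV beam; the target deflection angle, and the omission of the coil's current-to-field calibration, are illustrative simplifications not taken from the paper.

    ```python
    import math

    # Deflection of a 300 keV electron by a scanning dipole: theta ~ e*B*L/p.
    MC2 = 0.511e6      # electron rest energy, eV
    T = 300e3          # kinetic energy, eV
    p_eV = math.sqrt((T + MC2) ** 2 - MC2 ** 2)  # relativistic momentum, eV/c

    L_eff_x = 0.1198   # m, X-axis effective length (reported above)
    theta = 0.20       # rad, assumed half-angle of the scan (illustrative)

    # Magnetic rigidity B*rho [T m] = p [eV/c] / c [m/s]; small-angle field:
    b_rho = p_eV / 2.99792458e8
    B = theta * b_rho / L_eff_x
    print(f"p = {p_eV / 1e6:.3f} MeV/c, required B = {B * 1e3:.2f} mT")
    ```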

  19. Optimization of scanning parameters for multi-slice CT colonography: Experiments with synthetic and animal phantoms

    International Nuclear Information System (INIS)

    AIM: To determine the optimal collimation, pitch, tube current and reconstruction interval for multi-slice computed tomography (CT) colonography with regard to attaining satisfactory image quality while minimizing patient radiation dose. MATERIALS AND METHODS: Multi-slice CT was performed on plastic, excised pig colon and whole pig phantoms to determine optimal settings. Performance was judged by detection of simulated polyps and statistical measures of the image parameters. Fat and muscle conspicuity was measured from images of dual tube-current prone/supine patient data to derive a measure of tube current effects on tissue contrast. RESULTS: A collimation of 4 × 2.5 mm was sufficient for detection of polyps 4 mm and larger, provided that a reconstruction interval of 1.25 mm was used. A pitch of 1.5 allowed faster scanning and reduced radiation dose without resulting in a loss of important information, i.e., detection of small polyps, when compared with a pitch of 0.75. Tube current and proportional radiation dose could be lowered substantially without deleterious effects on the detection of the air-mucosal interface; however, increased image noise substantially reduced conspicuity of different tissues. CONCLUSION: An optimal image acquisition set-up of 4 × 2.5 mm collimation, reconstruction interval of 1.25 mm, pitch of 1.5 and dual prone/supine scan of 40/100 mA tube current is proposed for our institution for scanning symptomatic patients. Indications are that where CT colonography is used for colonic polyp screening in non-symptomatic patients, a 40 mA tube current could prove satisfactory for both scans

  20. Optimization of scanning parameters for multi-slice CT colonography: Experiments with synthetic and animal phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Embleton, K.V. E-mail: k.embleton@man.ac.uk; Nicholson, D.A.; Hufton, A.P.; Jackson, A

    2003-12-01

    AIM: To determine the optimal collimation, pitch, tube current and reconstruction interval for multi-slice computed tomography (CT) colonography with regard to attaining satisfactory image quality while minimizing patient radiation dose. MATERIALS AND METHODS: Multi-slice CT was performed on plastic, excised pig colon and whole pig phantoms to determine optimal settings. Performance was judged by detection of simulated polyps and statistical measures of the image parameters. Fat and muscle conspicuity was measured from images of dual tube-current prone/supine patient data to derive a measure of tube current effects on tissue contrast. RESULTS: A collimation of 4 × 2.5 mm was sufficient for detection of polyps 4 mm and larger, provided that a reconstruction interval of 1.25 mm was used. A pitch of 1.5 allowed faster scanning and reduced radiation dose without resulting in a loss of important information, i.e., detection of small polyps, when compared with a pitch of 0.75. Tube current and proportional radiation dose could be lowered substantially without deleterious effects on the detection of the air-mucosal interface; however, increased image noise substantially reduced conspicuity of different tissues. CONCLUSION: An optimal image acquisition set-up of 4 × 2.5 mm collimation, reconstruction interval of 1.25 mm, pitch of 1.5 and dual prone/supine scan of 40/100 mA tube current is proposed for our institution for scanning symptomatic patients. Indications are that where CT colonography is used for colonic polyp screening in non-symptomatic patients, a 40 mA tube current could prove satisfactory for both scans.

  1. High-resolution MRI of the labyrinth. Optimization of scan parameters with 3D-FSE

    International Nuclear Information System (INIS)

    The aim of our study was to optimize the parameters of high-resolution MRI of the labyrinth with a 3D fast spin-echo (3D-FSE) sequence. We investigated repetition time (TR), echo time (TE), matrix, field of view (FOV), and coil selection in terms of CNR (contrast-to-noise ratio) and SNR (signal-to-noise ratio) by comparing axial images and/or three-dimensional images. The optimal 3D-FSE sequence parameters were as follows: 1.5 Tesla MR unit (Signa LX, GE Medical Systems), 3D-FSE sequence, dual 3-inch surface coil, acquisition time = 12.08 min, TR = 5000 msec, TE = 300 msec, number of excitations (NEX) = 3, FOV = 12 cm, matrix = 256 × 256, slice thickness = 0.5 mm with 0.0 mm spacing, echo train = 64, bandwidth = ±31.5 kHz. High-resolution MRI of the labyrinth using the optimized 3D-FSE sequence parameters permits visualization of important anatomic details (such as the scala tympani and scala vestibuli), making it possible to determine inner ear anomalies and the patency of cochlear turns. To obtain excellent heavily T2-weighted axial and three-dimensional images of the labyrinth, high CNR, SNR, and spatial resolution are significant factors at the present time. Furthermore, it is important not only to optimize the scan parameters of 3D-FSE but also to select an appropriate coil for high-resolution MRI of the labyrinth. (author)

  2. Algorithm for the Automatic Estimation of Agricultural Tree Geometric Parameters Using Airborne Laser Scanning Data

    Science.gov (United States)

    Hadaś, E.; Borkowski, A.; Estornell, J.

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time-consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas allow determination of orchard structures and geometric parameters of individual trees. In this research we propose an automatic method that allows determination of the geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For points located inside each polygon, we select the maximum height and the minimum height, and then we estimate the tree height and the crown base height. We use the first two components of the Principal Component Analysis (PCA) as the estimators for crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R with reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m⁻². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (>0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longest and shortest crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
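
    A compact sketch of the per-tree parameter extraction described above, starting from points already delimited by one tree's α-shape polygon: tree height and crown base height come from the height extremes, and PCA of the horizontal coordinates yields the two crown-diameter estimates. The synthetic crown points and the 4σ diameter proxy are assumptions, not the paper's exact estimator.

    ```python
    import numpy as np

    # Hypothetical crown segment: points (x, y, z) inside the alpha-shape
    # polygon of one olive tree (z = height above ground, m).
    rng = np.random.default_rng(2)
    crown = rng.normal([0.0, 0.0, 2.5], [1.2, 0.8, 0.7], (200, 3))

    tree_height = crown[:, 2].max()   # highest return inside the polygon
    crown_base = crown[:, 2].min()    # lowest return inside the polygon

    # PCA of the horizontal coordinates: the principal axes approximate the
    # longest and shortest crown diameters.
    xy = crown[:, :2] - crown[:, :2].mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(xy.T))   # ascending eigenvalues
    d_short, d_long = 4.0 * np.sqrt(eigvals)     # ~2 sigma each side (proxy)
    print(tree_height, crown_base, d_long, d_short)
    ```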

  3. Investigation of scanning parameters for thyroid fine needle aspiration cytology specimens: A pilot study

    Directory of Open Access Journals (Sweden)

    Maheswari S Mukherjee

    2015-01-01

    Background: Interest in developing more feasible and affordable applications of virtual microscopy in the field of cytology continues to grow. Aims: The aim of this study was to investigate the scanning parameters for thyroid fine needle aspiration (FNA) cytology specimens. Subjects and Methods: A total of twelve glass slides from thyroid FNA cytology specimens were digitized at ×40 with a 1 micron (μ) interval using seven focal plane (FP) levels (Group 1), five FP levels (Group 2), and three FP levels (Group 3) on an iScan Coreo Au scanner (Ventana, AZ, USA), producing 36 virtual images (VI). With an average washout period of 2 days, three participants diagnosed the preannotated cells of Groups 1, 2, and 3 using BioImagene's Image Viewer (version 3.1) (Ventana, Inc., Tucson, AZ, USA), and the corresponding 12 glass slides (Group 4) using conventional light microscopy. Results: All three raters correctly identified and showed complete agreement on the glass slides and VI for 86% of the cases at FP level 3 and 83% of the cases at both FP levels 5 and 7. The intra-observer concordance between the glass slides and VI was highest for level 3 (97%) and the same (94%) for levels 5 and 7. The inter-rater reliability was highest for the glass slides and three FP levels (77%), followed by five FP levels (69.5%) and seven FP levels (69.1%). Conclusions: This pilot study found that, among the three different FP levels, the VI digitized using three FP levels had slightly higher concordance, intra-observer concordance, and inter-rater reliability. Scanning additional levels above three FP levels did not improve concordance. We believe that there is no added benefit to acquiring five or more FP levels, especially when considering file size and storage costs. Hence, this study reports that three FP levels and a 1 μ interval could be the potential scanning parameters for thyroid FNA cytology specimens.

  4. Analysis of Left Ventricular Functional Parameters in Normal Korean Subjects by ECG Gated Blood Pool Scan

    International Nuclear Information System (INIS)

    The demand for refinement in noninvasive and quantitative assessment of left ventricular (LV) function is increasing. To assess normal values of left ventricular functional parameters during both systole and diastole by a scintigraphic method using a computerized triple-head gamma camera, and to evaluate correlations between these parameters, ECG gated blood pool scans with 99mTc-human serum albumin were performed in 94 normal Korean subjects. Ejection fraction (EF), systolic parameters [peak emptying rate (PER), average emptying rate (AER), time to peak emptying rate (TPER)], and diastolic parameters [peak filling rate (PFR), average filling rate (AFR), time to peak filling rate (TPFR)] were obtained by analysis of the LV time-activity curve; the correlation of these parameters to age and sex, and the correlations between these parameters, were evaluated. 1) The mean ejection fraction in the study subjects was 59.6 ± 5.25% and showed no significant correlation to age (r=0.08) or sex, but showed the most pronounced correlations to PFR (r=0.46, p<0.001), PER (r=0.41, p<0.001), AFR (r=0.34, p<0.001) and AER (r=0.28, p<0.01). 2) Mean values of the systolic parameters were as follows: PER=3.22 ± 0.50 end-diastolic volume/sec, AER=2.22 ± 0.45 end-diastolic volume/sec, TPER=103.5 ± 29.30 msec. They showed no significant correlation to age or sex. 3) Mean values of the diastolic parameters were as follows: PFR=2.71 ± 0.51 end-diastolic volume/sec, AFR=1.83 ± 0.44 end-diastolic volume/sec, TPFR=132.1 ± 33.45 msec. They showed strong correlations to age (r=-0.70, -0.64, 0.37, p<0.001). Left ventricular functional parameters in normal Korean subjects were obtained reliably by this computerized scintigraphic method and may be applied to the evaluation of cardiac function in diseased patients.
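
    As a rough illustration of how such indices fall out of a gated time-activity curve, the following Python/numpy sketch (the curve layout, frame duration and function name are invented for illustration, not taken from the study) normalizes the count-rate derivative by the end-diastolic counts to express PER and PFR in end-diastolic volumes per second:

      import numpy as np

      def lv_parameters(counts, dt_ms):
          # counts: background-corrected LV counts per frame over one cardiac
          # cycle, first frame at end-diastole; dt_ms: frame duration (msec).
          edv = counts.max()                    # end-diastolic counts
          esv = counts.min()                    # end-systolic counts
          ef = (edv - esv) / edv                # ejection fraction
          rate = np.gradient(counts, dt_ms / 1000.0) / edv   # EDV/sec
          per = -rate.min()                     # peak emptying rate
          pfr = rate.max()                      # peak filling rate
          tper_ms = rate.argmin() * dt_ms       # time to peak emptying rate
          return ef, per, pfr, tper_ms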

  5. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    Energy Technology Data Exchange (ETDEWEB)

    Uddin, M.N. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh); Sarker, M.M. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh); Khan, M.J.H. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh)], E-mail: jahirulkhan@yahoo.com; Islam, S.M.A. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh)

    2009-10-15

    The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors for the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated using the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 served as standard benchmarks for testing nuclear data files and have also been selected for this analysis. The integral parameters of the said lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. It was found that in most cases the values of the integral parameters show a good agreement with the experiment and the MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE, and it was found that the group constants are identical with very insignificant differences. Therefore, this analysis reflects the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking the integral parameters of the TRX and BAPL lattices and can also be essential for further neutronic analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  6. Benchmarks on effective delayed neutron parameters and reactivity: a Brazilian IPEN/MB-01 contribution to the IRPhE project

    Energy Technology Data Exchange (ETDEWEB)

    Dos Santos, Adimir; Yoichi Ribeiro Kuramoto, Renato; Diniz, Ricardo; Jereza Graciete Simoes de Andrade e Silva, Rogerio; Yamaguchi, Mitsuo [Instituto de Pesquisas Energeticas e Nucleares, IPEN - CNEN/SP, Sao Paulo (Brazil)

    2008-07-01

    The purpose of this work is to present the experimental results of the in-pile experiments performed at the IPEN/MB-01 Reactor for the determination of the effective delayed neutron parameters and reactivity. The methodologies employed were the macroscopic noise in the frequency domain, where the very low frequency range (< 1.0 Hz) was also exploited and analyzed, and the microscopic noise, which is based on the measurement of Rossi-alpha and Feynman-alpha distributions at several subcritical levels. In this last case, a Two-Region Model was developed. The main advantage of these methodologies is to obtain the effective delayed neutron parameters in a purely experimental way, eliminating all parameters that are difficult to measure or calculate. Consequently, the uncertainties associated with these parameters are eliminated and the accuracy of the effective delayed neutron parameters is improved. Both techniques are claimed to be well defined and produce experimental data of very high quality. Finally, it is proposed to assign benchmark values to β_eff (the effective delayed neutron fraction), to Λ (the prompt neutron generation time), to their ratio (β_eff/Λ), and also, for the first time, to the reactivity by means of the inhour equation. It is concluded that the experiments are acceptable benchmarks. (authors)

  7. Comparison of nuclear parameters for an LMFBR heterogeneous Benchmark core. Influence of different basic data sets and processing codes

    International Nuclear Information System (INIS)

    An LMFBR heterogeneous core model was proposed a few years ago by CEA as a benchmark core for comparative calculations. The geometrical RZ model consists of three radial fissile zones of the same enrichment, divided at the midplane by an axial slice of internal breeder material. The fissile zones are separated by three internal breeder zones: one central zone and two breeder rings. The core has been studied with 2D diffusion codes in 10 to 25 energy groups. Comparisons have been made between the CEA (CARNAVAL III), INTERATOM (KFKINR) and STUDSVIK (ENDF IV) solutions. The spread in k_eff is 1.7 percent, with the lowest value for STUDSVIK (ENDF IV) and the highest value for INTERATOM (KFKINR). The spread in breeding ratio is 0.03, with the highest value for STUDSVIK and the lowest for INTERATOM. This spread in k_eff and BR is of the same magnitude as for the homogeneous benchmark core. The variations in the sodium void effect between the CARNAVAL III, KFKINR and ENDF IV solutions are rather similar for the heterogeneous and homogeneous benchmark cores. Comparison of one-group core fission and capture cross sections indicates a dominating influence of the processing codes. The influence on k_eff seems to be smaller due to cancelling effects. (author)

  8. The quality of reconstructed 3D images in multidetector-row helical CT: experimental study involving scan parameters

    International Nuclear Information System (INIS)

    To determine which multidetector-row helical CT scanning technique provides the best-quality reconstructed 3D images, and to assess differences in image quality according to the levels of the scanning parameters used. Four objects with different surfaces and contours were scanned using multidetector-row helical CT at three detector-row collimations (1.25, 2.50, 5.00 mm), two pitches (3.0, 6.0), and three degrees of overlap between the reconstructed slices (0%, 25%, 50%). Reconstructed 3D images of the resulting 72 sets of data were produced using volume rendering. The 72 images were graded on a scale from 1 (worst) to 5 (best) for each of four rating criteria, giving a mean score for each criterion and an overall mean score. Statistical analysis was used to assess differences in image quality according to scanning parameter levels. The mean score for each rating criterion, and the overall mean score, varied significantly according to the scanning parameter levels used. With regard to detector-row collimation and pitch, all levels of scanning parameters gave rise to significant differences, while for the degree of overlap of reconstructed slices there were significant differences between overlaps of 0% and 50% at all levels of the scanning parameters, and between overlaps of 25% and 50% in overall accuracy and overall mean score. Among the 18 scanning sequences, the highest score (4.94) was achieved with 1.25 mm detector-row collimation, 3.0 pitch, and 50% overlap between reconstructed slices. Comparison of the quality of reconstructed 3D images obtained using multidetector-row helical CT and various scanning techniques indicated that the 1.25 mm, 3.0 pitch, 50% overlap scanning sequence was best. Quality improved as detector-row collimation decreased, as pitch was reduced from 6.0 to 3.0, and as overlap between reconstructed slices increased.

  9. Chemical freeze-out parameters in Beam Energy Scan Program of STAR at RHIC

    CERN Document Server

    2014-01-01

    The STAR experiment at RHIC has completed its first phase of the Beam Energy Scan (BES-I) program to understand the phase structure of quantum chromodynamics (QCD). The bulk properties of the system formed in Au+Au collisions at different center-of-mass energies $\sqrt{s_{NN}}$ = 7.7, 11.5, 19.6, 27, and 39 GeV have been studied from the data collected in the years 2010 and 2011. The centrality and energy dependence of mid-rapidity ($|y|$ < 0.1) particle yields and ratios are presented here. The chemical freeze-out parameters are extracted using measured particle ratios within the framework of a statistical model.

  10. Chemical freeze-out parameters in Beam Energy Scan Program of STAR at RHIC

    Directory of Open Access Journals (Sweden)

    Das Sabita

    2015-01-01

    The STAR experiment at RHIC has completed its first phase of the Beam Energy Scan (BES-I) program to understand the phase structure of quantum chromodynamics (QCD). The bulk properties of the system formed in Au+Au collisions at different center-of-mass energies √sNN = 7.7, 11.5, 19.6, 27, and 39 GeV have been studied from the data collected in the years 2010 and 2011. The centrality and energy dependence of mid-rapidity (|y| < 0.1) particle yields and ratios are presented here. The chemical freeze-out parameters are extracted using measured particle ratios within the framework of a statistical model.

  11. Chemical freeze-out parameters in Beam Energy Scan Program of STAR at RHIC

    Science.gov (United States)

    Das, Sabita

    2015-03-01

    The STAR experiment at RHIC has completed its first phase of the Beam Energy Scan (BES-I) program to understand the phase structure of quantum chromodynamics (QCD). The bulk properties of the system formed in Au+Au collisions at different center-of-mass energies √sNN = 7.7, 11.5, 19.6, 27, and 39 GeV have been studied from the data collected in the years 2010 and 2011. The centrality and energy dependence of mid-rapidity (|y| < 0.1) particle yields and ratios are presented here. The chemical freeze-out parameters are extracted using measured particle ratios within the framework of a statistical model.

  12. THE BUBNOV–GALERKIN PROCEDURE IN PROBLEMS OF MOBILE (SCANNING CONTROL FOR SYSTEMS WITH DISTRIBUTED PARAMETERS

    Directory of Open Access Journals (Sweden)

    Arakelyan Sh. Kh.

    2015-09-01

    We suggest applying the Bubnov–Galerkin procedure to solve scanning control problems for systems with distributed parameters. The algorithm is described in detail for the three-dimensional linear heat equation. It allows the solution of the control problem to be reduced to a finite-dimensional nonlinear moments problem. The derivation of the moments problem is illustrated in detail on the example of the one-dimensional heat-conduction equation. The solution of the obtained moments problem is found in a particular case. Based on the obtained results, a computer simulation was carried out on the COMSOL Multiphysics platform in the one-dimensional case for a rod. The main dependences of the control function on the input data of the problem are revealed. The state of the rod for several (constant) values of the source intensity is presented in graphs and illustrations, with corresponding illustrations given for the case of no control (null-power source) for comparison. An effective numerical scheme for solving the obtained system of nonlinear constraints is suggested for an extended class of admissible controls. Calculation of the control parameters is reduced to a simple nonlinear programming problem.
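
    To make the reduction concrete, the following minimal Python/numpy sketch (the 1D Dirichlet problem, source profile, control and all numerical values are assumptions standing in for the paper's setting) expands the 1D heat equation in a sine basis, turning the PDE into a finite set of modal ODEs integrated by explicit Euler:

      import numpy as np

      # Galerkin reduction of u_t = alpha*u_xx + p(t)*g(x) on [0, 1] with
      # u(0, t) = u(1, t) = 0, using the basis phi_k(x) = sin(k*pi*x).
      alpha, K, dt, steps = 0.01, 20, 1.0e-3, 5000
      x = np.linspace(0.0, 1.0, 201)
      dx = x[1] - x[0]
      g = np.exp(-200.0 * (x - 0.3) ** 2)      # assumed source footprint
      p = lambda t: 1.0                        # assumed (constant) control intensity

      k = np.arange(1, K + 1)
      phi = np.sin(np.outer(k, np.pi * x))     # (K, Nx) samples of the basis
      g_k = 2.0 * (phi * g).sum(axis=1) * dx   # projections of g onto the basis
      a = np.zeros(K)                          # modal coefficients a_k(t)
      for n in range(steps):                   # explicit Euler for the modal ODEs
          a += dt * (-alpha * (k * np.pi) ** 2 * a + p(n * dt) * g_k)

      u = a @ phi                              # reconstructed temperature profile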

  13. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    Science.gov (United States)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. Parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA) and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and the quantitative measurement of heat affected depth is analysed by Vickers hardness testing. The results indicate that the laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second order regression model is found to be in good agreement with experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat affected depth, respectively.
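
    A second-order response-surface fit of this kind can be sketched with ordinary least squares; in the following Python/numpy illustration all run values are invented, and only the quadratic model structure mirrors the RSM analysis described above:

      import numpy as np

      # Central-composite-style runs: laser power P (kW), approach angle A (deg)
      # and measured surface temperature T (deg C). Values are illustrative only.
      P = np.array([1.25, 1.25, 1.25, 1.50, 1.50, 1.50, 1.75, 1.75, 1.75])
      A = np.array([60.0, 75.0, 90.0, 60.0, 75.0, 90.0, 60.0, 75.0, 90.0])
      T = np.array([820., 790., 760., 900., 870., 840., 980., 945., 905.])

      # Second-order model: T ~ b0 + b1*P + b2*A + b3*P*A + b4*P^2 + b5*A^2
      X = np.column_stack([np.ones_like(P), P, A, P * A, P**2, A**2])
      coef, *_ = np.linalg.lstsq(X, T, rcond=None)
      T_hat = X @ coef
      r2 = 1.0 - ((T - T_hat)**2).sum() / ((T - T.mean())**2).sum()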

  14. Derivation of tree stem structural parameters from static terrestrial laser scanning data

    Science.gov (United States)

    Tian, Wei; Lin, Yi; Liu, Yajing; Niu, Zheng

    2014-11-01

    Accurate tree-level characteristic information is increasingly demanded for forest management and environmental protection. The cutting-edge remote sensing technique of terrestrial laser scanning (TLS) shows the potential to fill this gap. This study focuses on exploring methods for deriving various tree stem structural parameters, such as stem position, diameter at breast height (DBH), the degree of stem shrinkage, and the elevation and azimuth angles of stem inclination. The test data were collected with a Leica HDS6100 TLS system in Seurasaari, Southern Finland in September 2010. In the field, the reference positions and DBHs of 100 trees were measured manually. The isolation of individual trees is based on interactive segmentation of point clouds. The estimation of stem position and DBH is based on a scheme of layering followed by least-squares circle fitting in each layer. The slope of a robust line fit between the height of each layer and its diameter is used to characterize the stem shrinkage. The elevation angle of stem inclination is described by the angle between the ground plane and the fitted stem axis; the angle between the north direction and the fitted stem axis gives the azimuth angle of stem inclination. The estimation of the DBHs performed with an R square (R2) of 0.93 and a root mean square error (RMSE) of 0.038 m. The average angle corresponding to stem shrinkage is -1.86°. The elevation angles of stem inclinations ranged from 31° to 88.3°. The results have basically validated TLS for deriving multiple stem structural parameters, which helps to better grasp individual tree characteristics.
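
    The per-layer circle fit can be written compactly as an algebraic (Kasa) least-squares problem; the sketch below is a generic Python/numpy illustration, not the authors' code, and the layer selection is assumed to be done beforehand:

      import numpy as np

      def fit_circle(x, y):
          # Algebraic least-squares (Kasa) circle fit to one horizontal layer
          # of stem points: solve x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c).
          A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
          b = x**2 + y**2
          (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
          radius = np.sqrt(c + cx**2 + cy**2)
          return cx, cy, radius

      # DBH from the (hypothetical) layer of stem points around 1.3 m above
      # ground, where layer is an (N, 2) array of x, y coordinates:
      # cx, cy, r = fit_circle(layer[:, 0], layer[:, 1]); dbh = 2.0 * r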

  15. The effect of scan parameters on cone beam CT trabecular bone microstructural measurements of the human mandible

    OpenAIRE

    Ibrahim, N; Parsa, A.; Hassan, B.; van der Stelt, P; Aartman, I.H.A.; Wismeijer, D.

    2014-01-01

    The objective of this study was to investigate the effect of different cone beam CT scan parameters on trabecular bone microstructure measurements. A human mandibular cadaver was scanned using a cone beam CT scanner (3D Accuitomo 170; J. Morita, Kyoto, Japan). Twenty cone beam CT images were obtained using 5 different fields of view (4×4 cm, 6×6 cm, 8×8 cm, 10×10 cm and 10×5 cm), 2 types of rotation steps (180 degrees and 360 degrees) and 2 scanning resolutions (standard and high). Image analysis software...

  16. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

    Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experiment as well as MCNP values in this study. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation study of the SRAC2006 code system based on evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of TRIGA Mark-II Research Reactor at AERE, Bangladesh. This study is achieved through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature 20 °C based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases, the values of integral parameters demonstrate a good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for TRX and BAPL lattices in fast and thermal energy range respectively are compared between the above libraries and it was found that the group constants are identical with very insignificant difference. Therefore, this analysis reflects the validation study of the SRAC2006 code system based on evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also be essential to implement further neutronics calculations of

  17. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  18. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  19. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    Science.gov (United States)

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed; this procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied in a plant-wide simulation study, which confirmed the positive effects of co-digestion on methane production and total operational cost. The simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. PMID:27088248

  20. Optimized treatment parameters to account for interfractional variability in scanned ion beam therapy of lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Brevet, Romain

    2015-02-04

    Scanned ion beam therapy of lung tumors is severely limited in its clinical applicability by intrafractional organ motion, interference effects between beam and tumor motion (interplay), and interfractional anatomic changes. To compensate for dose deterioration by intrafractional motion, motion mitigation techniques such as gating have been developed. The latter confines the irradiation to a predetermined breathing state, usually the stable end-exhale phase. However, optimization of the treatment parameters is needed to further improve target dose coverage and normal tissue sparing. The aim of the study presented in this dissertation was to determine treatment planning parameters that permit recovery of good target coverage and homogeneity during a full course of lung tumor treatments. For 9 lung tumor patients from MD Anderson Cancer Center (MDACC), a total of 70 weekly time-resolved computed tomography (4DCT) datasets were available, which depict the evolution of the patient anatomy over the several fractions of the treatment. Using the GSI in-house treatment planning system (TPS) TRiP4D, 4D simulations were performed on each weekly 4DCT for each patient using gating and optimization of a single treatment plan based on a planning CT acquired prior to treatment. It was found that using a large beam spot size, a short gating window (GW), additional margins and multiple fields gave the best results, yielding an average target coverage (V95) of 96.5%. Two motion mitigation techniques, one approximating the rescanning process (multiple irradiations of the target with a fraction of the planned dose) and one combining the latter with gating, were then compared to gating; neither showed an improvement in target dose coverage or in normal tissue sparing. Finally, the total dose delivered to each patient in a simulation of a fractionated treatment was calculated and clinical requirements in terms of target coverage and normal tissue sparing were

  1. Stripping chronopotentiometry at scanned deposition potential (SSCP). Part 2. Determination of metal ion speciation parameters

    NARCIS (Netherlands)

    Leeuwen, van H.P.; Town, R.M.

    2003-01-01

    Stripping chronopotentiometry at scanned deposition potential (SSCP) generates curves that are fundamentally different in form from classical polarographic waves. Still, despite their steeper slope and non-linear log plot, the shift in the SSCP half-wave deposition potential can be interpreted in a

  2. Influence of cone beam CT scanning parameters on grey value measurements at an implant site

    NARCIS (Netherlands)

    A. Parsa; N. Ibrahim; B. Hassan; A. Motroni; P. van der Stelt; D. Wismeijer

    2013-01-01

    Objectives: The aim of this study was to determine the grey value variation at the implant site with different scan settings, including field of view (FOV), spatial resolution, number of projections, exposure time and dose selections in two cone beam CT (CBCT) systems and to compare the results with

  3. Body Tumor CT Perfusion Protocols: Optimization of Acquisition Scan Parameters in a Rat Tumor Model

    OpenAIRE

    Tognolini, Alessia; Schor-Bardach, Rachel; Pianykh, Oleg S.; Wilcox, Carol J.; Raptopoulos, Vassilios; Goldberg, S. Nahum

    2009-01-01

    Purpose: To evaluate the effects of total scanning time (TST), interscan delay (ISD), inclusion of image at peak vascular enhancement (IPVE), and selection of the input function vessel on the accuracy of tumor blood flow (BF) calculation with computed tomography (CT) in an animal model.

  4. A benchmark on the calculation of kinetic parameters based on reactivity effect experiments in the CROCUS reactor

    Energy Technology Data Exchange (ETDEWEB)

    Paratte, J.M. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Frueh, R. [Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Kasemeyer, U. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Kalugin, M.A. [Kurchatov Institute, 123182 Moscow (Russian Federation); Timm, W. [Framatome-ANP, D-91050 Erlangen (Germany); Chawla, R. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2006-05-15

    Measurements in the CROCUS reactor at EPFL, Lausanne, are reported for the critical water level and the inverse reactor period for several different sets of delayed supercritical conditions. The experimental configurations were also calculated by four different calculation methods. For each of the supercritical configurations, the absolute reactivity value has been determined in two different ways, viz.: (i) through direct comparison of the multiplication factor obtained employing a given calculation method with the corresponding value for the critical case (calculated reactivity: ρ_calc); (ii) by application of the inhour equation using the kinetic parameters obtained for the critical configuration and the measured inverse reactor period (measured reactivity: ρ_meas). The calculated multiplication factors for the reference critical configuration, as well as ρ_calc for the supercritical cases, are found to be in good agreement. However, the values of ρ_meas produced by two of the applied calculation methods differ appreciably from the corresponding ρ_calc values, clearly indicating deficiencies in the kinetic parameters obtained from these methods.
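
    For orientation, route (ii) can be sketched as follows in Python/numpy; the six-group constants are generic Keepin-type U-235 textbook values and the generation time is an arbitrary assumption, not the CROCUS benchmark data:

      import numpy as np

      # Illustrative six-group delayed-neutron constants (NOT the CROCUS
      # benchmark parameters) and an assumed prompt neutron generation time.
      beta_i = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
      lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # 1/s
      Lambda = 5.0e-5                                                # s

      def reactivity_from_period(omega):
          # Inhour equation for the measured inverse period omega = 1/T:
          # rho = Lambda*omega + sum_i beta_i*omega/(omega + lambda_i).
          return Lambda * omega + np.sum(beta_i * omega / (omega + lam_i))

      rho = reactivity_from_period(1.0 / 50.0)     # e.g. a 50 s stable period
      print(f"rho = {rho:.5f} ({rho / beta_i.sum():.2f} dollars)")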

  5. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    Energy Technology Data Exchange (ETDEWEB)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdomand St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  6. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Science.gov (United States)

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

    In the automotive industry, as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process, offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is their huge footprint, needed to realize the required stability and work envelope. Alternatively, a robot based system might be advantageous if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM "beam in motion" system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot based laser processing machine which meets all OEM specs for processing press hardened steel parts. A benchmark of the BIM versus a gantry system was done with regard to all parameters required to fulfil OEM specifications for press hardened steel parts. As a result, a highly productive, accurate and efficient system can be described, based on one or multiple robot modules working simultaneously together. The paper presents the improvements on the robot machine concept BIM addressed in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and the parameters of the BIM system versus a gantry system for 3D cutting applications using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at a small footprint is given.

  7. Radioiodine scan index: A simplified, quantitative treatment response parameter for metastatic thyroid carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jong Ryool; Ahn, Byeong Cheol; Jeong, Shin Young; Lee, Sang Woo; Lee, Jae Tae [Dept. of Nuclear Medicine, Kyungpook National University School of Medicine and Hospital, Daegu (Korea, Republic of)

    2015-09-15

    We aimed to develop and validate a simplified, novel quantification method for radioiodine whole-body scans (WBSs) as a predictor of the treatment response in differentiated thyroid carcinoma (DTC) patients with distant metastasis. We retrospectively reviewed serial WBSs after radioiodine treatment from 2008 to 2011 in patients with metastatic DTC. For standardization of TSH stimulation, only the subset of patients whose TSH level was fully stimulated (TSH > 80 mU/l) was enrolled. The radioiodine scan index (RSI) was calculated as the ratio of tumor-to-brain uptake. We compared correlations between the RSI and the TSH-stimulated serum thyroglobulin (TSHsTg) level, and between the RSI and the Tg reduction rate of consecutive radioiodine treatments. A total of 30 rounds of radioiodine treatment in 15 patients were eligible. Tumor histology was papillary in 11 and follicular in 4 cases. The TSHsTg level was a mean of 980 ng/ml (range, 0.5–11,244). The Tg reduction rate after treatment was a mean of −7% (range, −90%–210%). The mean RSI was 3.02 (range, 0.40–10.97). RSI was positively correlated with the TSHsTg level (R2 = 0.3084, p = 0.001) and negatively correlated with the Tg reduction rate (R2 = 0.2993, p = 0.037). The regression equation to predict treatment response was as follows: Tg reduction rate = −14.581 × RSI + 51.183. Use of the radioiodine scan index derived from conventional WBS is feasible for reflecting the serum Tg level in patients with metastatic DTC, and it may be useful for predicting the biologic treatment response after radioiodine treatment.
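
    The index and the reported regression are simple enough to apply directly; in the Python snippet below the ROI uptake values are invented for illustration, while the regression coefficients are those quoted above:

      # The reported radioiodine scan index and the paper's fitted regression,
      # applied to assumed ROI uptake values (counts are illustrative only).
      tumor_uptake, brain_uptake = 5400.0, 1800.0
      rsi = tumor_uptake / brain_uptake                 # RSI = 3.0
      tg_reduction = -14.581 * rsi + 51.183             # predicted Tg reduction (%)
      print(f"RSI = {rsi:.2f}, predicted Tg reduction rate = {tg_reduction:.1f} %")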

  8. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    Energy Technology Data Exchange (ETDEWEB)

    Klasic, B. [Hospital for pulmonary diseases, Zagreb (Croatia); Knezevic, Z.; Vekic, B. [Rudjer Boskovic Institute, Zagreb (Croatia); Brnic, Z.; Novacic, K. [Merkur Univ. Hospital, Zagreb (Croatia)

    2006-07-01

    Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, by the use of thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. Statistical analysis of our data showed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  9. Methodology for Determining Optimal Exposure Parameters of a Hyperspectral Scanning Sensor

    Science.gov (United States)

    Walczykowski, P.; Siok, K.; Jenerowicz, A.

    2016-06-01

    The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e. time, gain value) and flight parameters (i.e. altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: a hyperspectral camera MicroHyperspec A-series VNIR, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each acquired image, several exposure parameters were varied, such as the aperture value, exposure time and speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between the chosen parameters were developed: the Ground Sampling Distance (GSD) and the distance between the sensor and the target; the speed of the camera and the distance between the sensor and the target; the exposure time and the gain value; and the Density Number and the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
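
    The scale of such dependencies follows from standard pinhole/push-broom relations; the Python snippet below uses assumed sensor values (pixel pitch, focal length, line rate) rather than the paper's measurements:

      # Standard pinhole/push-broom relations (not the paper's fitted model)
      # linking the required GSD to flight altitude and ground speed.
      pixel_pitch = 7.4e-6     # m, detector pixel size (assumed)
      focal_length = 0.012     # m (assumed)
      line_rate = 100.0        # scan lines per second (assumed)
      gsd = 0.05               # m, required ground sampling distance

      altitude = gsd * focal_length / pixel_pitch   # ~81 m above ground
      velocity = gsd * line_rate                    # 5 m/s keeps pixels square
      print(f"altitude = {altitude:.0f} m, velocity = {velocity:.1f} m/s")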

  10. TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)

    2008-07-01

    We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)

  11. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University

    2009-01-01

    Improved TORT solutions to the 3D transport codes' suite of benchmarks exercise are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.

  12. Reducing CT radiation dose with iterative reconstruction algorithms: The influence of scan and reconstruction parameters on image quality and CTDIvol

    International Nuclear Information System (INIS)

    Highlights: • Iterative reconstruction (IR) and filtered back projection (FBP) were compared. • CT image noise was reduced by 12.4%–52.2% using IR in comparison to FBP. • IR did not affect high- and low-contrast resolution. • CTDIvol was reduced by 26–50% using hybrid IR at comparable image quality levels. • IR produced good to excellent image quality in patients. - Abstract: Objectives: In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation doses. Additionally, patient CT studies were evaluated to confirm our phantom results. Methods: Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan®) were performed with varying tube voltages (80–140 kV) and currents (30–200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing iterations, and soft and sharp kernels. Further, 25 chest and abdomen CT scans, performed with high and low exposure per patient, were reconstructed with IR and FBP. Two independent observers evaluated image quality and radiation doses of both phantom and patient scans. Results: In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independent from tissue, scan-mode, tube-voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, but improved in scans with doses <5 mGy, although object detectability generally decreased with the lowering of exposure. At comparable image quality levels, CTDIvol was reduced by 26–50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased. Conclusions: Our

  13. Variation of patient imaging doses with scanning parameters for linac-integrated kilovoltage cone beam CT.

    Science.gov (United States)

    Liao, Xiongfei; Wang, Yunlai; Lang, Jinyi; Wang, Pei; Li, Jie; Ge, Ruigang; Yang, Jack

    2015-01-01

    To evaluate Elekta kilovoltage CBCT doses and the associated technical protocols with patient dosimetry estimation. Image guidance with cone-beam CT (CBCT) in radiation oncology on a daily basis can deliver a significant dose to the patient; we therefore evaluated the patient dose from LINAC-integrated kV cone beam CT imaging in image-guided radiotherapy. CT dose indices (CTDI) were measured with a PTW TM30009 CT ion chamber in air, in a head phantom and in a body phantom, respectively, with different combinations of tube voltage, current, exposure time per frame, collimator and gantry rotation range. Dose length products (DLP) were subsequently calculated to account for volume integration effects. The CTDI and DLP were also compared to those of an AcQSim™ simulator CT for routine clinical protocols. Both CTDIair and CTDIw depended quadratically on the tube voltage and linearly on the milliampere-seconds (mAs) setting. CTDIw and DLP were closely related to the collimator settings and the gantry rotation ranges. The normalized CTDIw for the Elekta XVI™ CBCT was lower than that of the AcQSim simulator CT owing to its pulsed radiation output characteristics. CTDIw can be used to assess the patient dose in CBCT due to its simplicity of measurement and its reproducibility. Regular measurement should be performed in the QA & QC program. Optimal imaging parameters should be chosen to reduce patient dose during CBCT. PMID:26405932
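
    For reference, the standard CTDI bookkeeping behind such measurements (generic IEC-style definitions with assumed chamber readings, not values from this study) is easily reproduced:

      # CTDIw combines the center and peripheral chamber readings, CTDIvol
      # adds the pitch correction, and DLP integrates over the scan length.
      ctdi_center, ctdi_periph = 10.0, 14.0     # mGy, assumed phantom readings
      pitch, scan_length_cm = 1.0, 17.0

      ctdi_w = ctdi_center / 3.0 + 2.0 * ctdi_periph / 3.0   # 12.67 mGy
      ctdi_vol = ctdi_w / pitch                              # mGy
      dlp = ctdi_vol * scan_length_cm                        # mGy*cm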

  14. Extracting Roof Parameters and Heat Bridges Over the City of Oldenburg from Hyperspectral, Thermal, and Airborne Laser Scanning Data

    Science.gov (United States)

    Bannehr, L.; Luhmann, Th.; Piechel, J.; Roelfs, T.; Schmidt, An.

    2011-09-01

    Remote sensing methods are used to obtain different kinds of information about the state of the environment. Within the cooperative research project HiReSens, funded by the German BMBF, a hyperspectral scanner, an airborne laser scanner, a thermal camera, and an RGB camera are employed on a small aircraft to determine roof material parameters and heat bridges of rooftops over the city of Oldenburg, Lower Saxony. HiReSens aims to combine various geometrically highly resolved data in order to obtain relevant evidence about the state of the city's buildings. Thermal data are used to obtain the energy distribution of single buildings. The use of hyperspectral data yields information about the material composition of roofs. From airborne laser scanning (ALS) data, digital surface models are inferred; they form the basis for locating the best orientations for solar panels on the city's buildings. The combination of the different datasets offers the opportunity to capitalize on synergies between differently working systems. Central goals are the development of tools for the detection of heat bridges by means of thermal data, the spectral determination of roof parameters on the basis of hyperspectral data, and the 3D capture of buildings from airborne laser scanner data. Collecting, analyzing and merging the data are not trivial, especially when the targeted resolution and accuracy are in the domain of a few decimetres. The results achieved need to be regarded as preliminary. Further investigations are still required to prove the accuracy in detail.

  15. Influence of Confocal Scanning Laser Microscopy specific acquisition parameters on the detection and matching of Speeded-Up Robust Features

    International Nuclear Information System (INIS)

    The robustness and distinctiveness of local features to various object or scene deformations and to modifications of the acquisition parameters play key roles in the design of many computer vision applications. In this paper we present the results of our experiments on the behavior of a recently developed technique for local feature detection and description, Speeded-Up Robust Features (SURF), regarding image modifications specific to Confocal Scanning Laser Microscopy (CSLM). We analyze the repeatability of detected SURF keypoints and the precision-recall of their matching under modifications of three important CSLM parameters: pinhole aperture, photomultiplier (PMT) gain and laser beam power. During any investigation by CSLM these three parameters have to be modified, individually or together, in order to optimize the contrast and the Signal Noise Ratio (SNR), being also inherently modified when changing the microscope objective. Our experiments show that an important amount of SURF features can be detected at the same physical locations in images collected at different values of the pinhole aperture, PMT gain and laser beam power, and further on can be successfully matched based on their descriptors. In the final part, we exemplify the potential of SURF in CSLM imaging by presenting a SURF-based computer vision application that deals with the mosaicing of images collected by this technique. Research highlights: influence of pinhole aperture, photomultiplier gain and laser beam power modifications on SURF detection and matching in CSLM images; SURF-based automated mosaicing of CSLM images.

  16. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-01-01

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
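
    A scan of this general kind can be sketched as a Metropolis-Hastings random walk; in the Python/numpy toy below the likelihood is a placeholder Gaussian and all names are invented, so it only illustrates the scanning technique, not the paper's constraint set:

      import numpy as np

      rng = np.random.default_rng(0)

      def log_likelihood(theta):
          # Placeholder for the combined flavour and theory constraints; a
          # real scan would score each point against observables such as
          # b -> s gamma. The toy Gaussian below only fixes the interface.
          return -0.5 * float(theta @ theta)

      # Metropolis-Hastings random walk over a few flavour-violating entries.
      n_params, n_steps, step = 4, 20000, 0.3
      theta = np.zeros(n_params)
      logl = log_likelihood(theta)
      chain = []
      for _ in range(n_steps):
          proposal = theta + step * rng.standard_normal(n_params)
          logl_prop = log_likelihood(proposal)
          if np.log(rng.random()) < logl_prop - logl:    # accept/reject
              theta, logl = proposal, logl_prop
          chain.append(theta.copy())
      chain = np.asarray(chain)    # sample of favoured parameter combinations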

  17. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report with the results of the quantitative benchmark of the production companies in the VIPS project.

  18. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowsa, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  19. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
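
    A benchmark of this type needs, at minimum, a hash function and a distance to track under attacks; the Python/numpy toy below (a generic average hash, not one of the algorithms evaluated by PHABS) illustrates the interface:

      import numpy as np

      def average_hash(img, size=8):
          # Toy average hash: block-mean "resize", threshold at the mean,
          # return a flat bit vector. img is a 2-D grayscale array whose
          # side lengths are multiples of size.
          h, w = img.shape
          blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
          return (blocks > blocks.mean()).flatten()

      def hamming(h1, h2):
          # Bit distance between two hashes: the robustness measure a
          # benchmark would track across attacks (noise, compression, ...).
          return int(np.count_nonzero(h1 != h2))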

  20. Influence of scan duration on the estimation of pharmacokinetic parameters for breast lesions: a study based on CAIPIRINHA-Dixon-TWIST-VIBE technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Wen; Zhao, Bin; Wang, Guangbin; Wang, Cuiyan [Shandong University, Department of MR Imaging, Shandong Medical Imaging Research Institute, Jinan, Shandong (China); Liu, Hui [Siemens Healthcare, MR Collaborations NE Asia, Shanghai (China)

    2015-04-01

    To evaluate the influence of scan duration on pharmacokinetic parameters and their performance in differentiating benign from malignant breast lesions. Dynamic breast imaging was performed on a 3.0-T MR system using a prototype CAIPIRINHA-Dixon-TWIST-VIBE (CDT-VIBE) sequence with a temporal resolution of 11.9 s. Enrolled in the study were 53 women with 55 lesions (26 benign and 29 malignant). Pharmacokinetic parameters (Ktrans, ve, kep and iAUC) were calculated for various scan durations from 1 to 7 min after injection of contrast medium using the Tofts model. Ktrans, kep and ve calculated from the 1-min dataset were significantly different from those calculated from the other datasets. In benign lesions, Ktrans, kep and ve differed significantly only between 1 min and 2 min (corrected P < 0.05), but in malignant lesions there were significant differences for the comparisons up to 6 min vs. 7 min (corrected P < 0.05). There were no significant differences in AUCs for any of the parameters (P > 0.05). In breast dynamic contrast-enhanced MRI the scan duration has a significant impact on pharmacokinetic parameters, but the diagnostic ability may not be significantly affected. A scan duration of 5 min after injection of contrast medium may be sufficient for calculation of Tofts model pharmacokinetic parameters. (orig.)
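
    The underlying model is the standard Tofts formulation; the Python/numpy sketch below (function names and the discrete-convolution evaluation are illustrative, not the study's implementation) shows the quantity that is fitted over each truncated scan duration:

      import numpy as np

      def tofts_ct(t, cp, ktrans, kep):
          # Standard Tofts model: Ct(t) = Ktrans * integral over tau of
          # Cp(tau)*exp(-kep*(t - tau)) dtau, via discrete convolution on a
          # uniform time grid t (minutes); ve then follows as Ktrans/kep.
          dt = t[1] - t[0]
          kernel = np.exp(-kep * t)
          return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

      # Truncating t (and the measured Cp, Ct) at 1...7 min before fitting
      # ktrans and kep reproduces the scan-duration experiment in outline.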

  1. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Science.gov (United States)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and the optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We

  2. Influence of AFM Scanning Parameters on Surface Roughness Measurement

    Institute of Scientific and Technical Information of China (English)

    王子仪; 张荣君; 郑玉祥; 张振生; 李海涛

    2013-01-01

    Atomic Force Microscopy (AFM) is an important tool in nanotechnology research. As a near-field imaging system, the scanning parameters have a large effect on the quality of the sample image, and improper scanning parameters may even damage the sample. In this paper, the control variable method was used to vary the scanning parameters, with the RMS (root mean square) roughness as the evaluation criterion. The effect of the different parameters on the sample image was studied and summarized. The results show that the amplitude setpoint, scan rate and integral gain all have a large impact on the measurement of sample surface roughness, and that image quality can be effectively improved by setting the scanning parameters properly.
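
    The RMS criterion itself is easy to compute. The following is a minimal sketch, assuming a plane-levelled height map (plane subtraction is a common AFM preprocessing step; the input here is a random stand-in):

```python
# Minimal sketch: RMS roughness of an AFM height map, the criterion used in
# the study. Plane levelling before the RMS is a common preprocessing step
# and is assumed here; the height map itself is a random stand-in.
import numpy as np

def rms_roughness(z):
    """Root-mean-square roughness sqrt(mean((z - <z>)^2)) after plane levelling."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)  # best-fit plane
    flat = z.ravel() - A @ coef
    return np.sqrt(np.mean(flat ** 2))

z = np.random.default_rng(1).normal(0.0, 1.2, (256, 256))  # stand-in height map [nm]
print(f"Sq = {rms_roughness(z):.3f} nm")
```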

  3. Simultaneous multi-parameter observation of Harringtonine-treating HL-60 cells with both two-photon and confocal laser scanning microscopy

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Harringtonine (HT), an anticancer drug isolated from the Chinese herb Cephalotaxus hainanensis Li, can induce apoptosis in promyelocytic leukemia HL-60 cells. With both two-photon laser scanning microscopy and confocal laser scanning microscopy in combination with the fluorescent probes Hoechst 33342, tetramethylrhodamine ethyl ester (TMRE) and Fluo 3-AM, we simultaneously observed HT-induced changes in nuclear morphology, mitochondrial membrane potential and intracellular calcium concentration ([Ca2+]i) in HL-60 cells, and developed a real-time, sensitive and non-invasive method for simultaneous multi-parameter observation of drug-treated living cells at the single-cell level.

  4. Effect of imaging parameters of spiral CT scanning on image quality for dental implants. Visual evaluation using a semi-anthropomorphic mandible phantom

    International Nuclear Information System (INIS)

    The purpose of this study was to evaluate the effect of spiral CT scanning parameters on the image quality required for the planning of dental implant operations. A semi-anthropomorphic mandible phantom with artificial mandibular canals and tooth roots was used as a standard imaging object. Spiral CT scans of the phantom, placed in water phantoms with diameters of 20 and 16 cm, were performed. Visibility of the artificial mandibular canal, made of a Teflon tube, and of the gaps between tooth apex and canal was evaluated for various combinations of slice thickness, table speed, angle to the canal, and x-ray tube current. The tooth roots were made of PVC (polyvinyl chloride). The artificial mandibular canal was clearly observed on images with a slice thickness of 1 mm. At the same table speed of 2 mm/rotation, images with a thin slice (1 mm) were superior to those with a thick slice (2 mm). The gap between tooth apex and canal was erroneously diagnosed on images acquired with a table speed of 3 mm/rotation. Horizontal scanning parallel to the canal resulted in poor image quality for observation of the mandibular canals because of the partial volume effect. A relatively high x-ray tube current (125 mA) at thin-slice (1 mm) scanning was required for scanning the mandibular phantom in the 20 cm water vessel. Spiral scanning with a slice thickness of 1 mm and a table speed of 1 or 2 mm/rotation seemed suitable for dental implant planning. The results of this study suggest that diagnosis from two independent spiral scans with different angles to the object is more accurate and more efficient than a single spiral scan. (author)

  5. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  6. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code and with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  7. Development of a two-parameter slit-scan flow cytometer for screening of normal and aberrant chromosomes: application to a karyotype of Sus scrofa domestica (pig)

    Science.gov (United States)

    Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph

    1992-07-01

    Laser fluorescence activated slit-scan flow cytometry offers an approach to fast, quantitative characterization of chromosomes based on morphological features, and can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. Time-resolved measurement of the fluorescence intensity along the chromosome axis can be registered simultaneously for two parameters when the chromosome passes perpendicularly through a narrowly focused laser beam combined with a detection slit in the image plane. So far, automated data analysis has been performed off-line on a PC. In its final configuration, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows electro-acoustical sorting of chromosomes of interest. Interest is high in the agricultural field in studying chromosome aberrations that influence litter size in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize pig chromosomes; we present results for chromosome 1 and a translocation chromosome 6/15.

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  10. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  11. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload, and explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of the test trials; employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput.
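
    As a rough illustration of how such a hybrid search could proceed (the paper defines the actual algorithm; `measure` below is a hypothetical hook that offers a load in packets per second and returns the measured forwarding rate):

```python
# One plausible reading of a hybrid step/binary search (the paper defines the
# actual algorithm; this is an illustration). `measure` is a hypothetical hook
# that offers `load` packets per second and returns the measured forwarding rate.
def equilibrium_throughput(measure, start=10_000, step=10_000, tol=500):
    """Search for the offered load at which forwarding rate ~= offered load."""
    lo = start
    # step phase: coarse increase while the gateway still forwards (almost) everything
    while measure(lo + step) >= 0.999 * (lo + step):
        lo += step
    hi = lo + step
    # binary phase: narrow down the equilibrium point between lo and hi
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if measure(mid) >= 0.999 * mid:
            lo = mid
        else:
            hi = mid
    return lo
```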

  12. Effect of duration of scan acquisition on CT perfusion parameter values in primary and metastatic tumors in the lung

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Chaan S., E-mail: cng@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chandler, Adam G., E-mail: adam.chandler@mdanderson.org [Departments of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); CT research, GE Healthcare, Waukesha, Wisconsin (United States); Wei, Wei, E-mail: wwei@mdanderson.org [Departments of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Anderson, Ella F., E-mail: eanderson@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herron, Delise H., E-mail: dherron@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Kurzrock, Razelle, E-mail: rkurzrock@ucsd.edu [Departments of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Charnsangavej, Chusilp, E-mail: ccharn@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-10-01

    Objectives: To assess the effect of acquisition duration (T{sub acq}) and pre-enhancement set points (T{sub 1}) on computed tomography perfusion (CTp) parameter values in primary and metastatic tumors in the lung. Materials and methods: 24 lung CTp datasets (10 primary; 14 metastatic), acquired using a two-phase protocol spanning 125 s in 12 patients with lung tumors, were analyzed by deconvolution modeling to yield tumor blood flow (BF), blood volume (BV), mean transit time (MTT), and permeability (PS) values. CTp analyses were undertaken for the reference dataset (i.e., T{sub 1} = t{sub 0}) with T{sub acq} varying from 12 to 125 s. This was repeated for shifts in T{sub 1} (±0.5 s, ±1.0 s, ±2.0 s relative to the reference at t{sub 0}). Resultant CTp values were plotted against T{sub acq}; values at 30 s, 50 s, 65 s and 125 s were compared using a linear mixed model. Results: All CTp parameter values were noticeably influenced by T{sub acq}, with generally less marked changes beyond 50 s, and with no difference in behavior between primary and secondary tumors. Apart from BV, which attained a plateau at approximately 50 s, the other three CTp parameters did not reach steady-state values within the available 125 s of data, with values at 30 s, 50 s and 65 s significantly different from 125 s (p < 0.004). Shifts in T{sub 1} also affected the CTp parameter values, with positive shifts having a greater impact on CTp values than negative shifts. Conclusion: CTp parameter values derived from deconvolution modeling can be markedly affected by T{sub acq} and pre-enhancement set points. A 50 s acquisition may be adequate for BV, but longer than 125 s is probably required for reliable characterization of the other three CTp parameters.
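
    The deconvolution step can be sketched compactly. The study relied on a vendor implementation, so the truncated-SVD version below is only a minimal stand-in; it shows why the tissue and arterial curves must be long enough for the residue function, and hence BF, BV and MTT, to be recovered stably.

```python
# Minimal stand-in for the deconvolution step (the study used a commercial
# implementation): truncated-SVD deconvolution of a tissue curve against the
# arterial input, from which BF, BV and MTT are read off. Truncating the
# curves at 30, 50, 65 or 125 s before calling this reproduces the kind of
# T_acq dependence reported above.
import numpy as np

def ctp_params(t, aif, tissue, svd_cut=0.15):
    """Return (BF, BV, MTT) from deconvolution; units follow the inputs."""
    dt = t[1] - t[0]
    n = t.size
    # lower-triangular convolution matrix built from the arterial input function
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > svd_cut * s.max(), 1.0 / s, 0.0)   # regularization
    residue = Vt.T @ (s_inv * (U.T @ tissue))               # flow-scaled R(t)
    bf = residue.max()                                      # blood flow
    bv = np.trapz(tissue, t) / np.trapz(aif, t)             # blood volume
    return bf, bv, bv / bf                                  # MTT = BV / BF
```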

  13. Limits on the Superconducting Order Parameter in NdFeAsO_{1-x}F_y from Scanning SQUID Microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Hicks, Clifford W.; Lippman, Thomas M.; /Stanford U., Geballe Lab.; Huber, Martin E.; /Colorado U.; Ren, Zhi-An; Yang, Jie; Zhao, Zhong-Xian; /Beijing, Inst. Phys.; Moler, Kathryn A.; /Stanford U., Geballe Lab.

    2009-01-08

    Identifying the symmetry of the superconducting order parameter in the recently-discovered ferrooxypnictide family of superconductors, RFeAsO{sub 1-x}F{sub y}, where R is a rare earth, is a high priority. Many of the proposed order parameters have internal {pi} phase shifts, like the d-wave order found in the cuprates, which would result in direction-dependent phase shifts in tunneling. In dense polycrystalline samples, these phase shifts in turn would result in spontaneous orbital currents and magnetization in the superconducting state. We perform scanning SQUID microscopy on a dense polycrystalline sample of NdFeAsO{sub 0.94}F{sub 0.06} with T{sub c} = 48K and find no such spontaneous currents, ruling out many of the proposed order parameters.

  14. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameter and control rod worth measurements at the TRIGA Mark II reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to obtain reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  15. Scanning drift tube measurements of electron transport parameters in different gases: argon, synthetic air, methane and deuterium

    Science.gov (United States)

    Korolov, I.; Vass, M.; Donkó, Z.

    2016-10-01

    Measurements of transport coefficients of electrons in a scanning drift tube apparatus are reported for different gases: argon, synthetic air, methane and deuterium. The experimental system allows the spatio-temporal development of the electron swarms (‘swarm maps’) to be recorded and this information, when compared with the profiles predicted by theory, makes it possible to determine the ‘time-of-flight’ transport coefficients: the bulk drift velocity, the longitudinal diffusion coefficient and the effective ionization coefficient, in a well-defined way. From these data, the effective Townsend ionization coefficient is determined as well. The swarm maps provide, additionally, direct, unambiguous information about the hydrodynamic/non-hydrodynamic regimes of the swarms, aiding the selection of the proper regions applicable for the determination of the transport coefficients.
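
    The 'time-of-flight' coefficients named above reduce to moments of the recorded swarm map. A minimal sketch (not the apparatus software) under that reading:

```python
# Minimal sketch (not the apparatus software): 'time-of-flight' transport
# coefficients from a recorded swarm map n[i, j] = density at time t[i],
# position x[j] -- bulk drift velocity from the slope of the mean position,
# longitudinal diffusion from the growth rate of the spatial variance.
import numpy as np

def tof_coefficients(t, x, n):
    w = n.sum(axis=1)
    mean = (n * x).sum(axis=1) / w
    var = (n * x ** 2).sum(axis=1) / w - mean ** 2
    drift = np.polyfit(t, mean, 1)[0]         # W = d<x>/dt
    d_long = 0.5 * np.polyfit(t, var, 1)[0]   # D_L = (1/2) d var(x)/dt
    return drift, d_long
```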

  16. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with a simple system such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with the broad uniform scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical models, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  17. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.

  18. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  19. Quality parameters of digital aerial survey and airborne laser scanning covering the entire area of the Czech Republic

    Directory of Open Access Journals (Sweden)

    Jiří Šíma

    2013-11-01

    The paper illustrates the development of digital aerial surveying and digital elevation models covering the entire area of the Czech Republic at the beginning of the 21st century. It also presents some results of a systematic investigation of their quality parameters, achieved by the author in cooperation with the Department of Geomatics at the Faculty of Applied Sciences of the University of West Bohemia in Pilsen and the Land Survey Office.

  20. Optimisation of the CT parameters with evaluation of MDCT double-scan images in the planning of the dental implant treatment

    International Nuclear Information System (INIS)

    Background: The aim of the present study was optimisation of the examination parameters and evaluation of the reliability of MDCT double-scan images obtained with computer navigation for dental implant treatment. Material/Methods: Using an MDCT scanner SOMATOM Sensation (Siemens), CT images of a phantom were acquired with varying slice collimation (10 × 0.75 mm, 10 × 1.5 mm), slice thickness (0.75, 1, 2, 3, 5 mm) and pitch (0.5, 1, 1.5). Additionally, an analysis of the various filters from H20f to H60f was performed. A phantom of a human cadaver head was used for the study. Qualitative analysis was done using Nobel Guide (Nobel Biocare, Sweden), assessing possible artefacts on the images, with measurements of the bone structure on all filters in comparison with the real image. Results: The quality of the phantom images was assessed as optimal for slice thicknesses of 0.75 and 1 mm. The use of different pitch values did not make a statistically significant difference to image quality. The application of the various filters did not alter the measured parameters of the bone structure; however, the use of lower filters (H30f and H40f) had a beneficial effect on the quality of the 3D reconstruction. The 'window' settings of the CT seemed to have a greater influence on the measurement and evaluation of the bone structure. Conclusions: Slice collimation and slice thickness are the most important parameters in selecting the optimal scan protocol. It is recommended to use the above parameter selection in postprocessing, with application of the various filters (H30f and H60f) at a stable 'window' setting in the CT examination. (authors)

  1. SPIDER - VII. The Central Dark Matter Content of Bright Early-Type Galaxies: Benchmark Correlations with Mass, Structural Parameters and Environment

    CERN Document Server

    Tortora, C.; Napolitano, N. R.; de Carvalho, R. R.; Romanowsky, A. J.

    2012-01-01

    We analyze the central dark-matter (DM) content of $\sim 4,500$ massive ($M_\star \gtrsim 10^{10}\,M_\odot$), low-redshift ($z<0.1$), early-type galaxies (ETGs), with high-quality $ugrizYJHK$ photometry and optical spectroscopy from SDSS and UKIDSS. We estimate the "central" fraction of DM within the $K$-band effective radius, $R_{\rm e}$. The main results of the present work are the following: (1) DM fractions increase systematically with both structural parameters (i.e. $R_{\rm e}$ and Sérsic index, $n$) and mass proxies (central velocity dispersion, stellar and dynamical mass), as in previous studies, and decrease with central stellar density. (2) All correlations involving DM fractions are caused by two fundamental ones with galaxy effective radius and central velocity dispersion. These correlations are independent of each other, so that ETGs populate a central-DM plane (DMP), i.e. a correlation among fraction of total-to-stellar mass, effective radius, and velocity dispersion, whose scatter along the total-to-stell...

  2. Liver Steatosis Assessed by Controlled Attenuation Parameter (CAP) Measured with the XL Probe of the FibroScan: A Pilot Study Assessing Diagnostic Accuracy.

    Science.gov (United States)

    Sasso, Magali; Audière, Stéphane; Kemgang, Astrid; Gaouar, Farid; Corpechot, Christophe; Chazouillères, Olivier; Fournier, Céline; Golsztejn, Olivier; Prince, Stéphane; Menu, Yves; Sandrin, Laurent; Miette, Véronique

    2016-01-01

    To assess liver steatosis, the controlled attenuation parameter (CAP; giving an estimate of ultrasound attenuation at ∼3.5 MHz) is available with the M probe of the FibroScan. We report on the adaptation of the CAP for the FibroScan XL probe (center frequency 2.5 MHz) without modifying the range of values (100-400 dB/m). CAP validation was successfully performed on Field II simulations and on tissue-mimicking phantoms. In vivo performance was assessed in a cohort of 59 patients spanning the range of steatosis. In vivo reproducibility was good and similar with both probes. The area under the receiver operating characteristic curve was equal to 0.83/0.84 and 0.92/0.91 for the M/XL probes to detect >2% and >16% liver fat, respectively, as assessed by magnetic resonance imaging. Patients can now be assessed simultaneously for steatosis and fibrosis using the FibroScan, regardless of their morphology.
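
    The reported AUROC is equivalent to the Mann-Whitney statistic and simple to recompute; the sketch below uses made-up CAP values purely to show the calculation.

```python
# Minimal sketch: AUROC via the Mann-Whitney rank statistic, the figure of
# merit reported above. The CAP values and labels here are made up.
import numpy as np
from scipy.stats import rankdata

def auroc(scores, labels):
    """Probability that a random positive case scores higher than a random negative."""
    r = rankdata(scores)
    pos = labels.astype(bool)
    n1, n0 = pos.sum(), (~pos).sum()
    return (r[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

cap = np.array([210, 250, 310, 340, 280, 330])   # CAP values [dB/m], made up
fat = np.array([0, 0, 1, 1, 0, 1])               # MRI liver fat >16%, made up
print(f"AUROC = {auroc(cap, fat):.2f}")
```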

  3. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  4. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, J.L.T.

    2008-01-01

    Handleiding benchmark VO (manual for benchmarking in secondary education), 25 November 2008, by J.L.T. Blank, IPSE Studies. A guide to reading the i

  5. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  6. Motion Interplay as a Function of Patient Parameters and Spot Size in Spot Scanning Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Dowdell, Stephen [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Lomax, Antony [Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Sharp, Greg; Shackleford, James; Choi, Noah; Willers, Henning; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States)

    2013-06-01

    Purpose: To quantify the impact of respiratory motion on the treatment of lung tumors with spot scanning proton therapy. Methods and Materials: Four-dimensional Monte Carlo simulations were used to assess the interplay effect, which results from relative motion of the tumor and the proton beam, on the dose distribution in the patient. Ten patients with varying tumor sizes (2.6-82.3 cc) and motion amplitudes (3-30 mm) were included in the study. We investigated the impact of the spot size, which varies between proton facilities, and studied single fractions and conventionally fractionated treatments. The following metrics were used in the analysis: minimum/maximum/mean dose, target dose homogeneity, and 2-year local control rate (2y-LC). Results: Respiratory motion reduces the target dose homogeneity, with the largest effects observed for the highest motion amplitudes. Smaller spot sizes (σ ≈ 3 mm) are inherently more sensitive to motion, decreasing target dose homogeneity on average by a factor 2.8 compared with a larger spot size (σ ≈ 13 mm). Using a smaller spot size to treat a tumor with 30-mm motion amplitude reduces the minimum dose to 44.7% of the prescribed dose, decreasing modeled 2y-LC from 87.0% to 2.7%, assuming a single fraction. Conventional fractionation partly mitigates this reduction, yielding a 2y-LC of 71.6%. For the large spot size, conventional fractionation increases target dose homogeneity and prevents a deterioration of 2y-LC for all patients. No correlation with tumor volume is observed. The effect on the normal lung dose distribution is minimal: observed changes in mean lung dose and lung V{sub 20} are <0.6 Gy(RBE) and <1.7%, respectively. Conclusions: For the patients in this study, 2y-LC could be preserved in the presence of interplay using a large spot size and conventional fractionation. For treatments using smaller spot sizes and/or in the delivery of single fractions, interplay effects can lead to significant deterioration of

  7. Reducing CT radiation dose with iterative reconstruction algorithms: The influence of scan and reconstruction parameters on image quality and CTDI{sub vol}

    Energy Technology Data Exchange (ETDEWEB)

    Klink, Thorsten, E-mail: klink_t1@ukw.de [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland); University of Würzburg, Institute of Diagnostic and Interventional Radiology, Oberdürrbacher Str. 6, 97080 Würzburg (Germany); Obmann, Verena, E-mail: verena.obmann@insel.ch [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland); Heverhagen, Johannes, E-mail: johannes.heverhagen@insel.ch [Inselspital – Bern University Hospital, University Institute of Diagnostic, Interventional, and Pediatric Radiology, Freiburgstrasse 10, 3010 Bern (Switzerland); Stork, Alexander, E-mail: a.stork@roentgeninstitut.de [Roentgeninstitut Duesseldorf, Kaiserswerterstrasse 89, 40476 Duesseldorf (Germany); Adam, Gerhard, E-mail: g.adam@uke.de [University Medical Center Hamburg Eppendorf, Department of Diagnostic and Interventional Radiology, Martinistrasse 52, 20246 Hamburg (Germany); Begemann, Philipp, E-mail: p.begemann@roentgeninstitut.de [Roentgeninstitut Duesseldorf, Kaiserswerterstrasse 89, 40476 Duesseldorf (Germany)

    2014-09-15

    Highlights: • Iterative reconstruction (IR) and filtered back projection (FBP) were compared. • CT image noise was reduced by 12.4%–52.2% using IR in comparison to FBP. • IR did not affect high- and low-contrast resolution. • CTDI{sub vol} was reduced by 26–50% using hybrid IR at comparable image quality levels. • IR produced good to excellent image quality in patients. - Abstract: Objectives: In this phantom CT study, we investigated whether images reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) with reduced tube voltage and current have equivalent quality. We evaluated the effects of different acquisition and reconstruction parameter settings on image quality and radiation doses. Additionally, patient CT studies were evaluated to confirm our phantom results. Methods: Helical and axial 256 multi-slice computed tomography scans of the phantom (Catphan{sup ®}) were performed with varying tube voltages (80–140 kV) and currents (30–200 mAs). 198 phantom data sets were reconstructed applying FBP and IR with increasing iterations, and soft and sharp kernels. Further, 25 chest and abdomen CT scans, performed with high and low exposure in each patient, were reconstructed with IR and FBP. Two independent observers evaluated the image quality and radiation doses of both the phantom and patient scans. Results: In phantom scans, noise reduction was significantly improved using IR with increasing iterations, independent of tissue, scan mode, tube voltage, current, and kernel. IR did not affect high-contrast resolution. Low-contrast resolution was also not negatively affected, but improved in scans with doses <5 mGy, although object detectability generally decreased with the lowering of exposure. At comparable image quality levels, CTDI{sub vol} was reduced by 26–50% using IR. In patients, applying IR vs. FBP resulted in good to excellent image quality, while tube voltage and current settings could be significantly decreased

  8. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  9. A Voxel-Based Method for Automated Identification and Morphological Parameters Estimation of Individual Street Trees from Mobile Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Hongxing Liu

    2013-01-01

    As an important component of urban vegetation, street trees play an important role in maintaining environmental quality, the aesthetic beauty of the urban landscape, and social services for inhabitants. Acquiring accurate and up-to-date inventory information for street trees is required for urban horticultural planning and municipal urban forest management. This paper presents a new Voxel-based Marked Neighborhood Searching (VMNS) method for efficiently identifying street trees and deriving their morphological parameters from Mobile Laser Scanning (MLS) point cloud data. The VMNS method consists of six technical components: voxelization, calculating the values of voxels, searching and marking neighborhoods, extracting potential trees, deriving morphological parameters, and eliminating pole-like objects other than trees. The method is validated and evaluated through two case studies. The evaluation results show that the completeness and correctness of our method for street tree detection are over 98%. The derived morphological parameters, including tree height, crown diameter, diameter at breast height (DBH), and crown base height (CBH), are in good agreement with field measurements. Our method provides an effective tool for extracting the morphological parameters of individual street trees from MLS point cloud data.
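
    As a flavor of the first VMNS component, the sketch below (an illustration only; the paper's neighborhood searching and marking logic is not reproduced) voxelizes a point cloud and stores a per-voxel point count as the voxel value.

```python
# Illustration of the first VMNS component only (voxelization with per-voxel
# values); the neighborhood searching/marking logic of the paper is not
# reproduced here, and the voxel size and file name are assumptions.
import numpy as np

def voxelize(points, voxel=0.2):
    """Map an N x 3 point cloud to voxel indices with per-voxel point counts."""
    idx = np.floor((points - points.min(axis=0)) / voxel).astype(np.int64)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(k): int(c) for k, c in zip(keys, counts)}  # (i, j, k) -> count

# pts = np.loadtxt("street_scan.xyz")    # hypothetical MLS export, one point per row
# occupancy = voxelize(pts)
```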

  10. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
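
    The first-tier screen described above amounts to a simple threshold comparison; a toy sketch with hypothetical names and numbers:

```python
# Toy sketch of the first-tier screen described above: retain as a COPC any
# contaminant whose concentration exceeds its NOAEL-based benchmark. All
# names and numbers are hypothetical, not the report's values.
benchmarks = {"Cd": 0.5, "Pb": 1.0, "Zn": 10.0}    # mg/kg, hypothetical
measured = {"Cd": 0.2, "Pb": 3.4, "Zn": 12.1}      # mg/kg, hypothetical

copcs = [c for c, conc in measured.items()
         if conc > benchmarks.get(c, float("inf"))]
print("Contaminants of potential concern:", copcs)  # -> ['Pb', 'Zn']
```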

  11. Optimization of scanning parameters in children's CT examinations

    Institute of Scientific and Technical Information of China (English)

    李大伟; 周献锋; 杨春勇; 王进; 涂彧; 余宁乐

    2014-01-01

    Objective: To reduce the radiation dose to children from CT scanning through proper adjustment of the tube current-time product (mAs) and scan length, with a view to understanding the relationship between scanning conditions and radiation dose. Methods: The main scanning parameters used for head, chest and abdomen multi-detector CT examinations of paediatric patients (<1 year old, 1-5 years old, 6-10 years old, 11-15 years old) at seven hospitals in Jiangsu province were compared. The CT dose index (CTDI) and dose-length product (DLP) were obtained by using a standard children's dose phantom (diameter 16 cm) under the same scanning conditions. Effective doses (E) for different parts of the body from children's CT scanning were estimated after modification by an empirical weighting factor. Statistical analyses of mAs, scan lengths and DLP were performed with SPSS 16.0 software. The differences in radiation dose due to the choice of scanning conditions were compared between two typical hospitals. Results: The mean effective doses to paediatric patients during head, chest and abdomen CT scanning were 2.46, 5.69 and 11.86 mSv, respectively. DLP was positively correlated with mAs and scan length (head, chest and abdomen examinations: r = 0.81, 0.81 and 0.92, respectively; P < 0.05). Due to the higher mAs used, the effective doses from chest and abdomen CT examinations in all age groups were higher than those in the German study by Galanski. Due to the larger scan lengths used in abdominal examinations in all age groups, the effective doses in hospital were the highest. Conclusions: Reasonably reducing the scan length and mAs during CT scanning could lower children's CT radiation risk without affecting clinical diagnosis.
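
    The dose estimate described above follows the standard conversion E = k × DLP. The sketch below is illustrative only; the k values are placeholders, not the study's empirical weighting factors or published pediatric coefficients.

```python
# Sketch of the dose estimate described above: effective dose from DLP via
# region- and age-specific conversion coefficients, E = k * DLP. The k values
# are placeholders, not the study's empirical weighting factors.
K_FACTOR = {                      # mSv per (mGy * cm), illustrative only
    ("head", "0-1y"): 0.011,
    ("chest", "0-1y"): 0.039,
    ("abdomen", "0-1y"): 0.049,
}

def effective_dose(region, age_group, ctdi_vol_mGy, scan_length_cm):
    dlp = ctdi_vol_mGy * scan_length_cm            # dose-length product [mGy*cm]
    return K_FACTOR[(region, age_group)] * dlp     # effective dose [mSv]

print(f"{effective_dose('chest', '0-1y', 3.0, 15.0):.2f} mSv")
```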

  12. Effect of HBV on controlled attenuation parameter (CAP) measurement using FibroScan(R)

    Institute of Scientific and Technical Information of China (English)

    朱梦飞; 刘静; 王洁; 高岭; 陈公英; 施军平; 娄国强

    2014-01-01

    Objective: To evaluate the effect of HBV infection on controlled attenuation parameter (CAP) measurement of fatty liver using FibroScan(R). Methods: Patients with non-alcoholic fatty liver disease (NAFLD) only, chronic hepatitis B (CHB) only, and CHB combined with NAFLD (CHB + NAFLD) completed CAP measurement with the FibroScan-502. Results: 579 patients with CHB, 624 patients with NAFLD and 124 patients with CHB + NAFLD were recruited. CAP was positively correlated with BMI (r = 0.46, P = 0.004) but was not related to serum HBV DNA load, HBsAg level, or HBeAg status. The CAP value in CHB (218.90 ± 56.40 dB/m) was significantly lower than in NAFLD (290.85 ± 61.46 dB/m, P = 0.00) and in CHB + NAFLD (284.93 ± 64.70 dB/m, P = 0.00); there was no significant difference between the CAP values of CHB + NAFLD and NAFLD (P = 0.55). There were also no significant differences between the high and low HBV DNA load groups, the high and low HBsAg load groups, or the HBeAg-positive and HBeAg-negative groups (P = 0.73, 0.93, 0.55). Conclusion: HBV infection does not affect CAP values measured with FibroScan(R).

  13. Day to day clinically relevant corneal elevation, thickness, and curvature parameters using the Orbscan II scanning slit topographer and the Pentacam Scheimpflug imaging device

    Directory of Open Access Journals (Sweden)

    Hashemi Hassan

    2010-01-01

    The introduction of different techniques and computerized devices into clinical ophthalmology has significantly improved our knowledge of the eye, its optics, and eye conditions. Today, corneal topography is performed with a wide range of devices that implement a variety of techniques. Advanced computerized analysis systems provide us with simple and quick evaluation procedures, yet the sophisticated data and clinical information they generate can only be interpreted with adequate knowledge of the system itself as well as the accepted normal ranges of the various properties assessed with these systems. Two computerized topography systems in common use are the Orbscan (Bausch and Lomb Inc., Rochester, NY, USA) and the Pentacam (Oculus GmbH, Wetzlar, Germany). The Orbscan is a slit-scanning device and the Pentacam is a Scheimpflug imaging device. In this review, we present a brief description of both technologies, the techniques implemented in each device, and the acquisition process with each. This is followed by a list of corneal parameters that need to be assessed in screening patients for refractive surgery. We discuss how these parameters are displayed, how each parameter may serve as a clinical criterion, and how the data should be interpreted. We also try to provide evidence regarding the accuracy of the different measurements and the comparability of the two devices.

  14. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) for the GATE and PHITS codes have not been reported; here they are studied for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle, and the optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  15. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  16. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for the calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group, and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  17. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    Science.gov (United States)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-06-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  18. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  19. Automated tracking of quantitative parameters from single line scanning of vocal folds: a case study of the 'messa di voce' exercise.

    Science.gov (United States)

    Dejonckere, Philippe H; Lebacq, Jean; Bocchi, Leonardo; Orlandi, Silvia; Manfredi, Claudia

    2015-04-01

    This article presents a novel application of 'single line scanning' of the vocal fold vibrations (kymography) in singing pedagogy, in particular for a specific technical voice exercise: the 'messa di voce'. It aims at giving the singer relevant and valid short-term feedback. A user-friendly automatic analysis program makes possible a precise, immediate quantification of the essential physiological parameters characterizing the changes in glottal impedance concomitant with the progressive increase and decrease of the lung pressure. The data provided by the program correlate strongly with manual measurements. Additional measurements, such as subglottic pressure and flow glottography by inverse filtering, can be meaningfully correlated with the data obtained from the kymographic images. PMID:24456119

  1. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  2. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This edition is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this, we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aimed at higher productivity growth.

  3. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  4. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking on radiation protection in seven pharmaceutical companies in Germany and Switzerland was carried out. As a result, relevant parameters describing the performance and costs of radiation protection were acquired, compiled, and subsequently depicted in figures in order to make the data comparable. (orig.)

  5. Benchmarking for plant maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))

    2010-05-15

    The product of the project, e-Famemain, is a new kind of benchmarking tool based on many years of research within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically. When data are entered into the system, it automatically carries out multivariate statistical analysis and many other statistical operations. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. These causalities should be taken into account when using benchmarking or when forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: data input, positioning, locating differences, best practices and planning, and statistical tables. (orig.)

  6. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm..., founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  7. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  8. Spatial and optical parameters of contrails in the vortex and dispersion regime determined by means of a ground-based scanning lidar

    Energy Technology Data Exchange (ETDEWEB)

    Freudenthaler, V.; Homburg, F.; Jaeger, H. [Fraunhofer-Inst. fuer Atmosphaerische Umweltforschung (IFU), Garmisch-Partenkirchen (Germany)

    1997-12-31

    The spatial growth of individual condensation trails (contrails) of commercial aircraft in the time range from 15 s to 60 min behind the aircraft is investigated by means of a ground-based scanning backscatter lidar. The growth in width is mainly governed by wind shear and varies between 18 m/min and 140 m/min. The growth of the cross-section varies between 3500 m²/min and 25000 m²/min. These values are in agreement with results of model calculations and former field measurements. The vertical growth is often limited by the boundaries of the humid layer at flight level, but values up to 18 m/min were observed. Optical parameters such as depolarization, optical depth and lidar ratio, i.e. the extinction-to-backscatter ratio, have been retrieved from the measurements at a wavelength of 532 nm. The linear depolarization rises from values as low as 0.06 for a young contrail (10 s old) to values around 0.5, typical for aged contrails. The latter indicates the transition from non-crystalline to crystalline particles in persistent contrails within a few minutes. The scatter of depolarization values measured in individual contrails is narrow, independent of the contrail's age, and suggests a rather uniform growth of the particles inside a contrail. (author) 18 refs.

  9. Determination of crystallization kinetics parameters of a Li1.5Al0.5Ge1.5(PO4)3 (LAGP) glass by differential scanning calorimetry

    Directory of Open Access Journals (Sweden)

    A. M. Rodrigues

    2013-01-01

    Full Text Available Crystallization kinetics parameters of a stoichiometric glass with the composition Li1.5Al0.5Ge1.5(PO4)3 were investigated by subjecting parallelepipedonal samples (3 × 3 × 1.5 mm) to heat treatment in a differential scanning calorimeter at different heating rates (3, 5, 8 and 10 °C/min). The data were analyzed using Ligero's and Kissinger's methods to determine the activation energy (E) of crystallization, which yielded, respectively, E = 415 ± 37 kJ/mol and 378 ± 19 kJ/mol. Ligero's method was also employed to calculate the Avrami coefficient (n), which was found to be n = 3.0. A second set of samples was heat-treated in a tubular furnace at temperatures above the glass transition temperature, Tg, to induce crystallization. The X-ray diffraction analysis of these samples indicated the presence of LiGe2(PO4)3, which displays a NASICON-type structure. An analysis by optical microscopy revealed the presence of spherical crystals located primarily in the volume, in agreement with the crystallization mechanism predicted by the Avrami coefficient.
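
    As a quick illustration of Kissinger's method, the sketch below fits ln(β/Tp²) against 1/Tp, whose slope equals -E/R. The heating rates are those quoted in the study, but the peak temperatures are purely hypothetical placeholders, so the printed value is not the paper's result.

        # Kissinger analysis sketch: ln(beta / Tp^2) = -E/(R*Tp) + const,
        # so the slope of ln(beta/Tp^2) versus 1/Tp is -E/R.
        import numpy as np

        R = 8.314                                       # gas constant, J/(mol K)
        beta = np.array([3.0, 5.0, 8.0, 10.0])          # heating rates, K/min (from the study)
        Tp = np.array([883.0, 893.0, 901.0, 905.0])     # crystallization peak temps, K -- hypothetical

        slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
        E = -slope * R                                  # activation energy, J/mol
        print(f"E = {E / 1000:.0f} kJ/mol")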

  10. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  11. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding-code benchmarking effort to establish standardization methods for both the few-channel neutron and the many-channel gamma-ray and neutron spectroscopy problems are presented

  12. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  13. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  14. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may be due to product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. Managers must continuously improve their firm's efficiency and effectiveness, and they need to know the success factors and determinants of competitiveness, and consequently which performance measures are most critical in determining their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  15. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    The Benchmarking Public Procurement 2016 report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that affect how private-sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  16. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
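
    The strong density dependence of the interface position noted above can be checked to first order with the classical Ghyben-Herzberg sharp-interface relation, which is our own illustrative addition and not part of the benchmark model itself: the interface sits roughly rho_f/(rho_s - rho_f), i.e. about 40 times the freshwater head, below sea level.

        # First-order Ghyben-Herzberg check of interface depth (sharp-interface
        # assumption; not the full variable-density benchmark model).
        def interface_depth(head_m, rho_f=1000.0, rho_s=1025.0):
            # Depth of the freshwater-saltwater interface below sea level for a
            # given freshwater head (m) above sea level.
            return rho_f / (rho_s - rho_f) * head_m

        print(interface_depth(0.05))  # 0.05 m freshwater head -> ~2 m interface depth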

  17. Analysis of ANS LWR physics benchmark problems.

    Energy Technology Data Exchange (ETDEWEB)

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the k_eff and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the k_eff and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  18. Size-dependent scanning parameters (kVp and mAs) for photon-counting spectral CT system in pediatric imaging: simulation study

    Science.gov (United States)

    Chen, Han; Danielsson, Mats; Xu, Cheng

    2016-06-01

    We are developing a photon-counting spectral CT detector with a small pixel size of 0.4 × 0.5 mm², offering a potential advantage for better visualization of small structures in pediatric patients. The purpose of this study is to determine the patient-size-dependent scanning parameters (kVp and mAs) for pediatric CT in two imaging cases: adipose imaging and iodinated blood imaging. Cylindrical soft-tissue phantoms of diameters between 10-25 cm were used to mimic patients of different ages from 0 to 15 y. For adipose imaging, a 5 mm diameter adipose sphere was assumed as the imaging target, while in the case of iodinated imaging, an iodinated blood sphere of 1 mm in diameter was assumed. By applying the geometry of a commercial CT scanner (GE Lightspeed VCT), simulations were carried out to calculate the detectability index, d′², with tube potentials varying from 40 to 140 kVp. The optimal kVp for each phantom in each imaging case was determined such that the dose-normalized detectability index, d′²/dose, is maximized. With the assumption that the detectability index in pediatric imaging is required to be the same as in typical adult imaging, the value of mAs at the optimal kVp for each phantom was selected to achieve a reference detectability index that was obtained by scanning an adult phantom (30 cm in diameter) in a typical adult CT procedure (120 kVp and 200 mAs) using a modeled energy-integrating system. For adipose imaging, the optimal kVps are 50, 60, 80, and 120 kVp, respectively, for phantoms of 10, 15, 20, and 25 cm in diameter. The corresponding mAs values required to achieve the reference detectability index are only 9%, 23%, 24%, and 54% of the mAs that is used for adult patients at 120 kVp, for 10, 15, 20, and 25 cm diameter phantoms, respectively. In the case of iodinated imaging, a tube potential of 60 kVp was found optimal for all phantoms investigated, and the mAs values required to achieve the reference detectability...
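
    The selection rule described above reduces to a small optimization, sketched below under our own assumption that both d′² and dose scale linearly with mAs (so their ratio is mAs-independent); the function name and all numbers are placeholders, not the paper's simulation output.

        # Sketch: pick the kVp that maximizes d'^2 per unit dose, then scale
        # mAs so d'^2 reaches the adult reference value.
        def optimal_settings(d2_per_mas, dose_per_mas, d2_reference):
            # d2_per_mas, dose_per_mas: dicts mapping kVp -> value at 1 mAs.
            best_kvp = max(d2_per_mas, key=lambda k: d2_per_mas[k] / dose_per_mas[k])
            mas_needed = d2_reference / d2_per_mas[best_kvp]
            return best_kvp, mas_needed

        # Placeholder numbers in arbitrary units:
        d2 = {50: 4.0, 60: 3.6, 80: 3.0, 120: 2.2}
        dose = {50: 0.8, 60: 0.9, 80: 1.2, 120: 1.5}
        print(optimal_settings(d2, dose, d2_reference=400.0))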

  20. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is then treated, after which...

  1. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize... the blade loading and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  2. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  3. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....

  4. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental...

  5. Nuclear Scans

    Science.gov (United States)

    Nuclear scans use radioactive substances to see structures and functions inside your body. They use a special ... images. Most scans take 20 to 45 minutes. Nuclear scans can help doctors diagnose many conditions, including ...

  6. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  7. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite of PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate its use.

  8. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  9. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors be much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge of how the numerical errors depend on discretization parameters (e.g. the size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to the CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received and have been considered in writing this final comparison report

  10. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    Full Text Available In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
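
    A restricted nearest-neighbor matching of this kind can be sketched as below; the distance threshold, function name, and greedy one-to-one assignment are our own simplifications, and the paper's exact matching criteria may differ.

        # Illustrative restricted nearest-neighbor matching of detected trees
        # to inventory trees, with omission/commission counts.
        import numpy as np

        def match_trees(detected, reference, max_dist=2.0):
            # detected, reference: (n, 2) arrays of x/y positions in metres.
            # Each reference tree is matched to at most one detection within
            # max_dist, nearest first.
            used = set()
            matches = []
            for i, ref in enumerate(reference):
                d = np.linalg.norm(detected - ref, axis=1)
                for j in np.argsort(d):
                    if d[j] > max_dist:
                        break
                    if j not in used:
                        used.add(j)
                        matches.append((i, j))
                        break
            omission = len(reference) - len(matches)    # reference trees missed
            commission = len(detected) - len(matches)   # detections without partner
            return matches, omission, commission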

  11. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has been pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can thus be calculated.

  12. Quantitative consistency testing of thermal benchmark lattice experiments

    International Nuclear Information System (INIS)

    The paper sets forth a general method to demonstrate the quantitative consistency (or inconsistency) of results of thermal reactor lattice experiments. The method is of particular importance in selecting standard "benchmark" experiments for comparison testing of lattice analysis codes and neutron cross sections. "Benchmark" thermal lattice experiments are currently selected by consensus, which usually means the experiment is geometrically simple, well-documented, reasonably complete, and qualitatively consistent. A literature search has not revealed any general quantitative test that has been applied to experimental results to demonstrate consistency, although some experiments must have been subjected to some form or other of quantitative test. The consistency method is based on a two-group neutron balance condition that is capable of revealing the quantitative consistency (or inconsistency) of reported thermal benchmark lattice integral parameters. This equation is used in conjunction with a second equation in the following discussion to assess the consistency (or inconsistency) of: (1) several Cross Section Evaluation Working Group (CSEWG) defined thermal benchmark lattices, (2) SRL experiments on the Mark 5R and Mark 15 lattices, and (3) several D2O lattices encountered as proposed thermal benchmark lattices. Nineteen thermal benchmark lattice experiments were subjected to a quantitative test of consistency between the reported experimental integral parameters. Results of this testing showed only two lattice experiments to be generally useful as "benchmarks," three lattice experiments to be of limited usefulness, three lattice experiments to be potentially useful, and 11 lattice experiments to be not useful. These results are tabulated with the lattices identified

  13. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  14. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc. (IAI) and the University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  15. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  16. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  17. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve...
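
    The first of these metrics, the centered root mean square error, is a standard quantity; a minimal sketch is given below, with the function name our own. Both series are mean-removed before differencing, so a constant offset between a homogenized series and the truth does not count as an error.

        # Centered RMSE between a homogenized series and the true series.
        import numpy as np

        def centered_rmse(homogenized, truth):
            h = np.asarray(homogenized) - np.mean(homogenized)
            t = np.asarray(truth) - np.mean(truth)
            return float(np.sqrt(np.mean((h - t) ** 2)))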

  18. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  19. CT Scans

    Science.gov (United States)

    ... cross-sectional pictures of your body. Doctors use CT scans to look for broken bones, cancers, blood clots, signs of heart disease, and internal bleeding. During a CT scan, you lie still on a table. The table ...

  20. Thyroid scan

    Science.gov (United States)

  1. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of completely reconstructing the experiment and by large amounts of input data (dependent and independent) of very different reliability. One should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes of geometry parameters, temperature, and the isotopic composition of individual materials. The calculations of critical-facility experiments are produced for benchmark models generated by specific reconstructing codes, each with its own features when adjusting model parameters, and using the nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using a few calculated models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between the calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  2. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  3. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  4. Proposal of an innovative benchmark for accuracy evaluation of dental crown manufacturing.

    Science.gov (United States)

    Atzeni, Eleonora; Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro

    2012-05-01

    An innovative benchmark representing a dental arch with classic features corresponding to different kinds of prepared teeth is proposed. Dental anatomy and general rules for tooth preparation are taken into account. This benchmark includes tooth orientation and provides oblique surfaces similar to those of real prepared teeth. The benchmark is produced by additive manufacturing (AM) and subjected to digitization by a dental three-dimensional scanner. The evaluation procedure proves that the scan data can be used as a reference model for crown restoration design. This benchmark therefore forms the basis for comparative studies of different CAD/CAM and AM techniques for dental crowns. PMID:22364825

  5. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  6. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts... to support sustainable European transport policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  7. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient consumption.

  8. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the...
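
    In its simplest form, whole-building benchmarking of this kind places one building's energy use intensity (EUI) within the distribution of a peer group; the sketch below illustrates that idea with our own function name and made-up values, and is not the Cal-Arch implementation.

        # Minimal whole-building benchmarking sketch: percentile ranking of a
        # building's energy use intensity within a peer group.
        import numpy as np

        def benchmark_eui(building_eui, peer_euis):
            # Fraction of peer buildings that use more energy than the
            # candidate (higher is better).
            peers = np.asarray(peer_euis, dtype=float)
            return float(np.mean(peers > building_eui))

        peer_group = [45.0, 60.0, 72.0, 80.0, 95.0, 110.0]  # kBtu/ft^2-yr, illustrative
        print(benchmark_eui(70.0, peer_group))              # 0.5 -> roughly a median performer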

  9. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  10. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  11. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, 'sustainable transport'... evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons, attempting to benchmark 'sustainable transport policies' against one another would be a highly complex task, which...

  12. Gaia FGK benchmark stars: new candidates at low metallicities

    Science.gov (United States)

    Hawkins, K.; Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Casagrande, L.; Gilmore, G.; Lind, K.; Magrini, L.; Masseron, T.; Pancino, E.; Randich, S.; Worley, C. C.

    2016-07-01

    Context. We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recently proposed set of Gaia FGK benchmark stars has up to five metal-poor stars but no recommended stars within -2.0 ... The catalogue is available at the CDS via http://cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A70

  13. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  14. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  15. Scan parameter optimization of the FIESTA sequence in an application study of fetal sheep

    Institute of Scientific and Technical Information of China (English)

    曹剑锋; 张玉珍; 朱铭

    2011-01-01

    Objective To enhance image quality, scan parameters of the FIESTA sequence were optimized in fetal sheep for better use of fetal MRI in humans. Materials and Methods Two fetal sheep were scanned with different FIESTA parameter settings; the specific absorption rate (SAR) and imaging definition were compared, and the better parameters were summarized after the study. Results The better FIESTA parameters were TR 3.8 ms, TE 1.4 ms, slice thickness 7 mm, gap 1 mm, matrix 224×224, NEX 2. Conclusion The optimized FIESTA parameters transfer well to the human fetus and are important for diagnosing fetal abnormalities.

  16. Determination of crystallization kinetics parameters of a Li1.5Al0.5Ge1.5(PO4)3 (LAGP) glass by differential scanning calorimetry

    OpenAIRE

    A.M. Rodrigues; J. L. Narváez-Semanate; A. A. Cabral; A. C. M. Rodrigues

    2013-01-01

    Crystallization kinetics parameters of a stoichiometric glass with the composition Li1.5Al0.5Ge1.5(PO4)3 were investigated by subjecting parallelepipedonal samples (3 × 3 × 1.5 mm) to heat treatment in a differential scanning calorimeter at different heating rates (3, 5, 8 and 10 °C/min). The data were analyzed using Ligero's and Kissinger's methods to determine the activation energy (E) of crystallization, which yielded, respectively, E = 415 ± 37 kJ/mol and 378 ± 19 kJ/mol. Ligero's method ...

  17. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  18. MRI Scans

    Science.gov (United States)

    Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Health care professionals use MRI scans to diagnose a variety of conditions, from ...

  19. CT scan

    Science.gov (United States)

    ... come from a CT scan. Some people have allergies to contrast dye. Let your doctor know if you have ... vein contains iodine. If you have an iodine allergy, a type of contrast may cause nausea or vomiting , sneezing , itching , or ...

  20. Prognostic role of metabolic parameters of ¹⁸F-FDG PET-CT scan performed during radiation therapy in locally advanced head and neck squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Min, Myo; Forstner, Dion [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Ingham Institute of Applied Medical Research, Liverpool, NSW (Australia); Lin, Peter; Shon, Ivan Ho; Lin, Michael [University of New South Wales, Sydney, NSW (Australia); Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); University of Western Sydney, Sydney, NSW (Australia); Lee, Mark T. [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Bray, Victoria; Fowler, Allan [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); Chicco, Andrew [Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); Tieu, Minh Thi [Calvary Mater Newcastle, Department of Radiation Oncology, Newcastle, NSW (Australia); University of Newcastle, Newcastle, NSW (Australia)

    2015-12-15

    To evaluate the prognostic value of ¹⁸F-FDG PET-CT performed in the third week (iPET) of definitive radiation therapy (RT) in patients with newly diagnosed locally advanced mucosal primary head and neck squamous-cell-carcinoma (MPHNSCC). Seventy-two patients with MPHNSCC treated with radical RT underwent staging PET-CT and iPET. The maximum standardised uptake value (SUVmax), metabolic tumour volume (MTV) and total lesional glycolysis (TLG) of primary tumour (PT) and index node (IN) [defined as lymph node(s) with highest TLG] were analysed, and results were correlated with loco-regional recurrence-free survival (LRFS), disease-free survival (DFS), metastatic failure-free survival (MFFS) and overall survival (OS), using Kaplan-Meier analysis. Optimal cutoffs (OC) were derived from receiver operating characteristic curves: SUVmax-PT = 4.25 g/mL, MTV-PT = 3.3 cm³, TLG-PT = 9.4 g for PT, and SUVmax-IN = 4.05 g/mL, MTV-IN = 1.85 cm³ and TLG-IN = 7.95 g for IN. Low metabolic values in iPET for PT below OC were associated with statistically significantly better LRFS and DFS. TLG was the best predictor of outcome, with 2-year LRFS of 92.7% vs. 71.1% [p = 0.005, compared with SUVmax (p = 0.03) and MTV (p = 0.022)], DFS of 85.9% vs. 60.8% [p = 0.005, compared with SUVmax (p = 0.025) and MTV (p = 0.018)], MFFS of 85.9% vs. 83.7% [p = 0.488, compared with SUVmax (p = 0.52) and MTV (p = 0.436)], and OS of 81.1% vs. 75.0% [p = 0.279, compared with SUVmax (p = 0.345) and MTV (p = 0.512)]. There were no significant associations between the percentage reduction of primary tumour metabolic parameters and outcomes. In patients with nodal disease, metabolic parameters below OC (for both PT and IN) were significantly associated with all oncological outcomes, while TLG was again the best predictor: LRFS of 84.0 % vs. 55.3 % (p = 0.017), DFS of 79.4 % vs. 38.6 % (p = 0.001), MFFS 86.4 % vs. 68.2 % (p = 0

  1. General scan in flavor parameter space in models with vector quark doublets and an enhancement in the B → Xsγ process

    Science.gov (United States)

    Wang, Wenyu; Xiong, Zhao-Hua; Zhao, Xin-Yan

    2016-09-01

    In models with vector-like quark doublets, the mass matrices of up and down type quarks are related. Precise diagonalization of the mass matrices has become an obstacle in numerical studies. In this work we first propose a diagonalization method. As its application, in the Standard Model with one vector-like quark doublet we present the quark mass spectrum and Feynman rules for the calculation of B → Xsγ. We find that i) under the constraints of the CKM matrix measurements, the mass parameters in the bilinear term are constrained to a small value by the small deviation from unitarity; ii) compared with the fourth generation extension of the Standard Model, there is an enhancement to the B → Xsγ process in the contribution of vector-like quarks, resulting in a non-decoupling effect in such models. Supported by the Natural Science Foundation of China (11375001) and the Talents Foundation of the Education Department of Beijing.
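
    The diagonalization involved is a bi-unitary transformation of non-Hermitian mass matrices. The abstract does not specify the authors' method; one standard numerical route is a singular value decomposition, sketched below with placeholder (randomly generated) 4×4 mass matrices standing in for a three-generation-plus-one-vector-like-doublet spectrum:

    import numpy as np

    # Placeholder 4x4 mass matrices (random numbers, not fitted values) for a
    # spectrum with three chiral generations plus one vector-like quark doublet.
    rng = np.random.default_rng(0)
    M_u = rng.normal(size=(4, 4))
    M_d = rng.normal(size=(4, 4))

    def biunitary_diagonalize(M):
        """Return (U_L, masses, U_R) such that U_L^dag @ M @ U_R = diag(masses) >= 0."""
        U_L, masses, Vh = np.linalg.svd(M)   # M = U_L @ diag(masses) @ Vh
        return U_L, masses, Vh.conj().T      # U_R is the conjugate transpose of Vh

    UuL, m_up, UuR = biunitary_diagonalize(M_u)
    UdL, m_down, UdR = biunitary_diagonalize(M_d)

    # 4x4 generalized mixing matrix; its upper-left 3x3 block plays the role of
    # the measured CKM matrix, whose small deviation from unitarity constrains
    # the bilinear mass terms discussed in the abstract.
    V = UuL.conj().T @ UdL
    block = V[:3, :3]
    print("3x3 block unitarity violation:", np.linalg.norm(block.conj().T @ block - np.eye(3)))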

  2. General scan in flavor parameter space in the models with vector quark doublets and an enhancement in $B\\to X_s\\gamma$ process

    CERN Document Server

    Wang, Wenyu; Zhao, Xin-Yan

    2016-01-01

    In models with vector-like quark doublets, the mass matrices of up and down type quarks are related, and precise diagonalization of the mass matrices has become an obstacle in numerical studies. In this work we first propose a diagonalization method. As its application, in the standard model with one vector-like quark doublet we present the quark mass spectrum and Feynman rules for the calculation of $B\to X_s\gamma$. We find that i) under the constraints of the CKM matrix measurements, the mass parameters in the bilinear term are constrained to a small value by the small deviation from unitarity; ii) compared with the fourth generation extension of the standard model, there is an enhancement to the $B\to X_s\gamma$ process in the contribution of vector-like quarks, resulting in a non-decoupling effect in such models.

  3. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  4. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  5. (Invited) Microreactors for Characterization and Benchmarking of Photocatalysts

    DEFF Research Database (Denmark)

    Vesborg, Peter Christian Kjærgaard; Dionigi, Fabio; Trimarco, Daniel Bøndergaard;

    2015-01-01

    In the field of photocatalysis the batch-nature of the typical benchmarking experiment makes it very laborious to obtain good kinetic data as a function of parameters such as illumination wavelength, irradiance, catalyst temperature, reactant composition, etc. Microreactors with on-line mass...

  6. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the ‘inside’ costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that easily can dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention...

  7. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  8. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...

  9. Higgs pair production: choosing benchmarks with cluster analysis

    Science.gov (United States)

    Carvalho, Alexandra; Dall'Osso, Martino; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-04-01

    New physics theories often depend on a large number of free parameters. The phenomenology they predict for fundamental physics processes is in some cases drastically affected by the precise value of those free parameters, while in other cases is left basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics predicted by different models; a clustering algorithm using that metric may allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmarks are then guaranteed to be sensitive to a large area of the parameter space.
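
    As a rough illustration of the clustering step, the sketch below partitions a set of parameter-space points and picks one representative benchmark per cluster. It assumes binned kinematic distributions as feature vectors and uses plain Euclidean distance with k-means in place of the paper's multi-dimensional test statistic; all inputs are hypothetical:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical input: each row is one parameter-space point, represented by
    # a normalized, binned kinematic distribution of the final state.
    rng = np.random.default_rng(1)
    features = rng.random((500, 20))
    features /= features.sum(axis=1, keepdims=True)

    # Partition the parameter space into homogeneous regions.
    n_benchmarks = 12
    labels = KMeans(n_clusters=n_benchmarks, n_init=10, random_state=0).fit_predict(features)

    # Represent each region by the point nearest its cluster mean: the benchmark point.
    benchmark_indices = []
    for k in range(n_benchmarks):
        members = np.flatnonzero(labels == k)
        centroid = features[members].mean(axis=0)
        benchmark_indices.append(members[np.argmin(((features[members] - centroid) ** 2).sum(axis=1))])
    print("benchmark point indices:", benchmark_indices)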

  10. In-core fuel management benchmarks for PHWRs

    International Nuclear Information System (INIS)

    Under its in-core fuel management activities, the IAEA set up two co-ordinated research programmes (CRPs) on complete in-core fuel management code packages. At a consultants meeting in November 1988, the outline of the CRP on in-core fuel management benchmarks for PHWRs was prepared, three benchmarks were specified and the corresponding parameters were defined. At the first research co-ordination meeting in December 1990, seven more benchmarks were specified. The objective of this TECDOC is to provide reference cases for the verification of code packages used for reactor physics and fuel management of PHWRs. 91 refs, figs, tabs

  11. Isospin-Violating Dark Matter Benchmarks for Snowmass 2013

    CERN Document Server

    Feng, Jonathan L; Marfatia, Danny; Sanford, David

    2013-01-01

    Isospin-violating dark matter (IVDM) generalizes the standard spin-independent scattering parameter space by introducing one additional parameter, the neutron-to-proton coupling ratio f_n/f_p. In IVDM the implications of direct detection experiments can be altered significantly. We review the motivations for considering IVDM and present benchmark models that illustrate some of the qualitatively different possibilities. IVDM strongly motivates the use of a variety of target nuclei in direct detection experiments.

  12. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  13. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran programming. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam to hit the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  14. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  15. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  16. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  17. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report combines two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part are the analysis of other

  18. Benchmark 2 - Springback of a draw / re-draw panel: Part A: Benchmark description

    Science.gov (United States)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing; Chen, Zhong

    2013-12-01

    Numerical methods have been effectively implemented to predict the springback behavior of complex stampings, reducing die tryout through compensation and producing dimensionally accurate products after forming and trimming. However, accurate prediction of the sprung shape of a panel formed with an initial draw followed by a restrike forming step remains a difficult challenge. The objective of this benchmark was to predict the sprung shape after stamping, restriking and trimming a sheet metal panel. A simple, rectangular draw die was used to draw sheet metal to a set depth with a "larger" tooling radius, followed by additional drawing to a greater depth with a "smaller" tooling radius. Panels were sectioned along a centerline and released to allow measurement of thickness strain and position of the trim line in the sprung condition. Smaller radii were used in the restrike step in order to significantly alter the deformation and the sprung shape. These measurements were used to evaluate numerical analysis predictions submitted by benchmark participants. Additional panels were drawn to "failure" during both the first draw and the re-draw in order to set the parameters for the springback trials and to demonstrate that sheet metal going through a restrike operation can exceed the conventional forming limits of a simple draw operation. Two sheet metals were used for this benchmark study: DP600 steel sheet and aluminum alloy 5182-O.

  19. Structured Light 3D Scanning System Based on Dynamic Parameter Control

    Institute of Scientific and Technical Information of China (English)

    沈杭锦; 吴以凡; 张桦; 吴燕萍

    2013-01-01

    Based on dynamic parameter control, this paper designs a structured-light 3D scanning system using a webcam and a projector. First, the system uses Zhang Zhengyou's calibration algorithm to calculate the intrinsic parameters and distortion coefficients of the webcam and projector, then uses triangulation and Gray-code encoding and decoding to calculate the three-dimensional coordinates of the object surface. A 3D scanning system should generally operate in a sealed, opaque dark room, but in practice external illumination may be present, making the object surface too bright and the captured photos overexposed, which degrades the scanning accuracy. To eliminate the influence of external illumination, this paper dynamically adjusts two parameters, the webcam gain and the projector's projection brightness, before scanning the object, which achieves a better result. Finally, the experimental results are compared.
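
    The Gray-code step mentioned above can be made concrete with a short sketch. The defining property is that consecutive projector column indices differ in exactly one bit of the projected pattern sequence, which limits decoding errors at stripe boundaries; the 10-bit width below is an illustrative choice, not a parameter from the paper:

    def to_gray(n: int) -> int:
        """Binary index -> Gray code: adjacent indices differ in one bit."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Decode a Gray code back to the projector column index."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    # With 10 projected patterns, each pixel observes a 10-bit Gray code that
    # identifies which of 1024 projector columns illuminated it.
    for column in (0, 1, 511, 1023):
        assert from_gray(to_gray(column)) == column
    print([format(to_gray(c), "010b") for c in range(4)])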

  20. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  1. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  2. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making: it determines which aspects are the most important to improve and establishes a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative evaluation among natural gas transportation companies is becoming an essential management instrument to help with decision-making. (author)

  3. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  4. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 (2010-10-01): General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  5. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of an underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte Carlo methods, the successive orders of scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters of both reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al., among others. However, the case of elongated phase functions, such as for a cloud, at high angular resolution is presented here for the first time. Also, in contrast to other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  6. Accuracy of three-dimensional CT scan parameters for guiding total hip arthroplasty

    Institute of Scientific and Technical Information of China (English)

    马纪坤; 朱凤臣; 张铭华

    2015-01-01

    Objective To investigate the clinical significance of preoperative three-dimensional CT scan parameters for restoring postoperative limb length and reducing postoperative dislocation in patients undergoing total hip arthroplasty (THA). Methods Clinical data of 86 patients undergoing primary unilateral THA were divided into two groups. In the study group, 45 patients were operated on with measurement of the contralateral acetabular anteversion, acetabular abduction angle, distance from the femoral neck osteotomy to the center of rotation of the femoral head, and distance from the tip of the greater trochanter to the center of rotation, based on three-dimensional CT reconstruction. Another 41 patients who underwent conventional surgery without these parameters served as controls. Surgical efficacy was compared to verify the role of CT scan parameters in restoring postoperative limb length and reducing postoperative dislocation. Results All 86 patients were followed up, for (11.2 ± 6.2) months in the study group and (11.6 ± 6.2) months in the control group. At 3 months after surgery, the Harris score was (87.2 ± 5.4) in the study group versus (80.9 ± 7.9) in the control group (P < 0.05); postoperative limb length discrepancy was (0.4 ± 0.2) cm versus (1.1 ± 0.4) cm (P < 0.05). One dislocation occurred in each group within 3 months after surgery. Conclusion Preoperative three-dimensional CT reconstruction parameter measurement has guiding significance in THA.

  7. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries, due to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable benchmarking partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, validating the selected criteria in an international context. In this way it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  8. Perineal scanning.

    Science.gov (United States)

    Jeanty, P; d'Alton, M; Romero, R; Hobbins, J C

    1986-10-01

    Although various techniques have been described to aid in the ultrasound diagnosis of placenta previa and incompetent cervix, these maneuvers depend on the precise identification of the internal cervical os, a feat which is notoriously difficult to accomplish consistently. In an attempt to get a closer view of the cervix we tried another approach. This simple technique of perineal scanning has the potential to help considerably with these problems. PMID:3530265

  9. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  10. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
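
    The cyclic-growth part of such benchmarks is commonly built around a power-law relationship between the maximum energy release rate and the growth rate per cycle. The sketch below is only illustrative: it uses a Paris-type law with placeholder constants C, m and GIIc, and a hypothetical G(a) curve standing in for values extracted from the finite element model; none of these numbers come from the paper:

    import numpy as np

    # Placeholder constants for a Paris-type growth law da/dN = C * (Gmax/GIIc)**m;
    # real values would come from the fracture toughness data used in the benchmark.
    C, m = 1.0e-3, 6.0     # growth-law constants (da/dN in mm/cycle)
    GIIc = 1.0             # mode II fracture toughness (kJ/m^2)

    def cycles_for_growth(G_of_a, a0, a1, da=0.1):
        """Integrate dN = da / (da/dN) over the growth increment [a0, a1]."""
        a_grid = np.arange(a0, a1, da)
        rates = C * (G_of_a(a_grid) / GIIc) ** m   # da/dN at each front position
        return np.sum(da / rates)

    # Hypothetical energy release rate vs. delamination length, standing in for
    # the values extracted from the finite element model.
    G_of_a = lambda a: 0.6 * np.exp(-0.02 * (a - 30.0))
    print(f"cycles to grow from a = 30 mm to 40 mm: {cycles_for_growth(G_of_a, 30.0, 40.0):.0f}")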

  11. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  12. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  13. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  14. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  15. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  16. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

  17. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    Science.gov (United States)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-3 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup, which illuminates sensitivity to model uncertainty.
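
    The two posterior metrics named above are the positive and negative predictive values of the forecast. A minimal sketch computing them from boolean inundation grids, where A denotes actual inundation and B the forecast inundation (the tiny arrays are hypothetical, not benchmark data):

    import numpy as np

    def predictive_values(simulated, observed):
        """P(A|B): fraction of cells forecast inundated that truly were (PPV);
        P(notA|notB): fraction of cells forecast dry that truly stayed dry (NPV)."""
        sim, obs = np.asarray(simulated, bool), np.asarray(observed, bool)
        tp = np.sum(sim & obs)      # forecast inundated, actually inundated
        fp = np.sum(sim & ~obs)     # forecast inundated, actually dry
        tn = np.sum(~sim & ~obs)    # forecast dry, actually dry
        fn = np.sum(~sim & obs)     # forecast dry, actually inundated
        return tp / (tp + fp), tn / (tn + fn)

    sim = np.array([[1, 1, 0], [0, 1, 0]])   # hypothetical simulated inundation grid
    obs = np.array([[1, 0, 0], [0, 1, 1]])   # hypothetical observed flow outline
    ppv, npv = predictive_values(sim, obs)
    print(f"P(A|B) = {ppv:.2f}, P(notA|notB) = {npv:.2f}")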

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  19. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  20. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989, Robert C. Camp's "Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance", the improvement technique of benchmarking has been established as an important tool in process-focused manufacturing and production environments. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  1. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  2. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  4. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  5. Influencing factors and reproducibility of controlled attenuation parameters in the evaluation of fatty liver disease using FibroScan®

    Institute of Scientific and Technical Information of China (English)

    沈峰; 徐正婕; 潘勤; 陈光榆; 曹毅; 黄家懿; 范建高

    2013-01-01

    Objective: To evaluate the influencing factors and reproducibility of controlled attenuation parameter (CAP) measurement of fatty liver using FibroScan®. Methods: Patients with non-alcoholic fatty liver disease (NAFLD) and normal controls were recruited for CAP measurement with the new FibroScan-502 and the M probe. In the NAFLD group, some subjects were examined repeatedly by the same or by a different operator, and the intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of the operation. Results: Of 228 subjects recruited (169 NAFLD patients and 59 controls), 200 (87.7%) yielded valid measurements. The success rate was significantly lower in obese than in normal-weight subjects [75.0% (33/44) vs. 93.9% (77/82), χ2 = 9.548, P = 0.02], and female sex, older age, and central obesity also reduced the examination success rate. CAP values in the NAFLD group were 291.1 ± 54.0 dB/m, significantly higher than in controls (216.4 ± 43.3 dB/m, P < 0.01). In 63 NAFLD patients with repeated measurements, the ICC was 0.848 (95% CI 0.761-0.905, P < 0.01) for the same operator and 0.718 (95% CI 0.607-0.896, P < 0.01) for different operators. Conclusion: CAP can be used for the non-invasive diagnosis of fatty liver, with good reproducibility.
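
    The reproducibility analysis above rests on the intraclass correlation coefficient. As the abstract does not specify which ICC variant was computed, the following minimal sketch assumes the common one-way random-effects ICC(1,1); the function name and the example readings are ours.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_repeats) array."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)               # between-subject mean square
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Two repeated CAP readings (dB/m) on five subjects, invented for illustration.
cap = [[240, 245], [300, 310], [210, 214], [275, 260], [330, 321]]
print(round(icc_oneway(cap), 3))
```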

  6. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
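
    Since the PageRank kernel is defined by straightforward linear algebra, a compact power-iteration sketch conveys the computation at the heart of the benchmark. This is a generic illustration, not the benchmark's reference implementation.

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-9, max_iter=200):
    """Power iteration for PageRank on an adjacency matrix A,
    where A[i, j] = 1 if page j links to page i."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    # Column-normalize; dangling nodes (no out-links) spread uniformly.
    P = np.where(col_sums > 0, A / np.where(col_sums == 0, 1, col_sums), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * P @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# Tiny 3-page cycle 0 -> 1 -> 2 -> 0 gives equal ranks of 1/3.
A = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], float)
print(pagerank(A))
```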

  7. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    Full Text Available The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises, to highlight the main advantages and drawbacks of using benchmarking, and to present the authors' view on the expediency of the highlighted forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis: Under modern conditions of developing economic relations and business globalization, large companies, enterprises and organizations realize the necessity of thorough and profound research into the best achievements of market participants and their subsequent use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by studying, learning from and adapting the best methods of realizing business processes, with the purpose of increasing their operating effectiveness and better satisfying societal needs. The main goals of benchmarking in Consumer Cooperatives are the following: increasing the level of needs satisfaction by improving product quality, shortening goods transportation terms and improving service quality; strengthening enterprise potential, strengthening competitiveness and improving image; and generating and implementing new ideas and innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually identifying and removing inadequacies that obstruct enterprise development; borrowing the best methods for further enterprise development; gaining competitive advantages; technological innovation; and employee motivation. The authors' classification of benchmarking is represented by the following components: by cycle duration: strategic, operative

  8. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high-resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms' ability for automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  9. Benchmarking of human resources management

    OpenAIRE

    David M. Akinnusi

    2008-01-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...

  10. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and provide feedback on procedures, and (2) a welcomed opportunity to provide feedback on working with NASA.

  11. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  12. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  13. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    Energy Technology Data Exchange (ETDEWEB)

    Leal, Luiz C [ORNL]; Ivanov, E. [Institut de Radioprotection et de Surete Nucleaire]

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduces the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.
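
    For orientation, the generalized least-squares (Bayes) update used by fitting codes of this kind can be stated compactly. With P the prior parameters, M their covariance, D the data with covariance V, T(P) the calculated values and G = ∂T/∂P the sensitivity matrix, the update is (notation ours, not quoted from the evaluation):

```latex
P' = P + M G^{\mathsf{T}} \left( V + G M G^{\mathsf{T}} \right)^{-1} \left( D - T(P) \right),
\qquad
M' = M - M G^{\mathsf{T}} \left( V + G M G^{\mathsf{T}} \right)^{-1} G M
```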

  14. Residual Generation for the Ship Benchmark Using Structural Approach

    DEFF Research Database (Denmark)

    Cocquempot, V.; Izadi-Zamanabadi, Roozbeh; Staroswiecki, M;

    1998-01-01

    The prime objective of Fault-tolerant Control (FTC) systems is to handle faults and discrepancies using appropriate accommodation policies. The issue of obtaining information about various parameters and signals, which have to be monitored for fault detection purposes, becomes a rigorous task with the growing number of subsystems. The structural approach, presented in this paper, constitutes a general framework for providing information when the system becomes complex. The methodology of this approach is illustrated on the ship propulsion benchmark.

  15. Portfolio selection and asset pricing under a benchmark approach

    Science.gov (United States)

    Platen, Eckhard

    2006-10-01

    The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as parameter process, one obtains a realistic theoretical market dynamics.
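
    For orientation: in a continuous-time market with risk-free rate r, drift vector a and volatility matrix σ, the growth optimal portfolio is the log-utility maximizer, whose risky-asset fractions take the standard Merton form below. This is a well-known result stated for context, not a formula quoted from the paper:

```latex
\pi_{\mathrm{GOP}}(t) = \left( \sigma(t)\,\sigma(t)^{\mathsf{T}} \right)^{-1} \left( a(t) - r(t)\,\mathbf{1} \right)
```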

  16. Comparative analysis of controlled attenuation parameters in the evaluation of liver steatosis using FibroTouch and FibroScan®

    Institute of Scientific and Technical Information of China (English)

    朱梦飞; 刘静; 王洁; 陈公英; 娄国强; 施军平

    2014-01-01

    Objective: To compare controlled attenuation parameter (CAP) measurements of liver steatosis obtained with the transient elastography devices FibroTouch and FibroScan® against liver biopsy pathology, and to analyze their correlation. Methods: Patients with biopsy-confirmed non-alcoholic fatty liver disease (NAFLD) or chronic hepatitis B (CHB) treated at the Affiliated Hospital of Hangzhou Normal University between July and October 2013 underwent liver fat content (CAP) measurement with both FibroTouch and the FibroScan-502. Results: 41 CHB patients and 20 NAFLD patients were examined with both devices. The measured CAP correlated positively with body mass index (BMI; r = 0.42 and 0.61, P = 0.02 and P < 0.001, respectively) and with hepatic steatosis (r = 0.54 and 0.56, both P < 0.001). In the 33 patients with pathologically diagnosed steatosis > 5%, the CAP values measured by FibroTouch and FibroScan were both significantly higher than in the 28 patients without steatosis [FibroTouch: (252.18 ± 41.23) dB/m vs. (220.68 ± 54.75) dB/m, P = 0.04; FibroScan: (291.61 ± 56.80) dB/m vs. (215.75 ± 45.11) dB/m, P < 0.001]. FibroScan CAP values differed among patients with steatosis < 30%, 30%-60%, and > 60% (F = 6.82, P = 0.004), the < 30% group being significantly lower than the > 60% group [(258.73 ± 52.54) dB/m vs. (327.42 ± 49.08) dB/m, P = 0.04], whereas the FibroTouch CAP values of these three groups showed no statistically significant differences (F = 2.30, P = 0.12). Conclusion: CAP values measured by both FibroTouch and FibroScan can evaluate hepatic steatosis, and higher CAP values correlate with higher BMI and more severe steatosis on liver pathology. FibroScan CAP distinguished mild from severe steatosis whereas FibroTouch CAP did not, possibly owing to the small number of cases in this study; further research is needed.

  17. Study on the correlation between FibroScan values and metabolic parameters in patients with nonalcoholic fatty liver

    Institute of Scientific and Technical Information of China (English)

    李岩; 杨涛; 田海燕; 王亚珍

    2014-01-01

    Objective: To measure and analyze liver stiffness (E) and the controlled attenuation parameter (CAP) in patients with nonalcoholic fatty liver (NAFL) using transient elastography (FibroScan), and to explore the correlation between FibroScan values and metabolic parameters. Methods: 729 NAFL patients and 300 healthy controls all underwent abdominal ultrasound examination, laboratory tests, and FibroScan measurement of E and CAP. Results: All clinical parameters in the NAFL group were significantly higher than in the control group (P < 0.01). E correlated positively with alanine aminotransferase (ALT), aspartate aminotransferase (AST), and CAP (P < 0.05), and negatively with total cholesterol (TC) and high-density lipoprotein cholesterol (HDL-C) (P < 0.01); CAP correlated positively with triglycerides (TG), fasting plasma glucose (FPG), HDL-C, and E (P < 0.05). Measurement failure rates varied with subject characteristics: the failure rate rose with increasing BMI, was higher in the elderly than in the young, higher in women than in men, and higher in subjects with narrow intercostal spaces (all P < 0.01). Conclusion: FibroScan is currently the only fully non-invasive, painless quantitative tool for evaluating and monitoring fatty liver without injury to the organ; it quantifies liver fat with high sensitivity and accuracy and is of considerable value for the diagnosis and regular follow-up of fatty liver disease.

  18. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks, especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  19. Benchmarking: A tool for conducting self-assessment

    International Nuclear Information System (INIS)

    There is more information on nuclear plant performance available than can reasonably be assimilated and used effectively by plant management or personnel responsible for self-assessment. Also, it is becoming increasingly more important that an effective self-assessment program uses internal parameters not only to evaluate performance, but to incorporate lessons learned from other plants. Because of the quantity of information available, it is important to focus efforts and resources in areas where safety or performance is a concern and where the most improvement can be realized. One of the techniques that is being used to effectively accomplish this is benchmarking. Benchmarking involves the use of various sources of information to self-identify a plant's strengths and weaknesses, identify which plants are strong performers in specific areas, evaluate what makes a top performer, and incorporate the success factors into existing programs. The formality with which benchmarking is being implemented varies widely depending on the objective. It can be as simple as looking at a single indicator, such as systematic assessment of licensee performance (SALP) in engineering and technical support, then surveying the top performers with specific questions. However, a more comprehensive approach may include the performance of a detailed benchmarking study. Both operational and economic indicators may be used in this type of evaluation. Some of the indicators that may be considered and the limitations of each are discussed

  20. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using codes of the participating countries. This benchmark is made on the basis of assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark works. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system which was prepared for the HTGR development of its own country. The values of the most important parameter, k{sub eff}, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.).
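
    For readers reproducing such results, the isothermal temperature coefficient of reactivity is conventionally derived from two multiplication factors k1 and k2 obtained at temperatures T1 and T2. This is the standard definition, stated here for orientation rather than quoted from the report:

```latex
\alpha_T = \frac{\rho_2 - \rho_1}{T_2 - T_1}
         = \frac{k_2 - k_1}{k_1\,k_2\,(T_2 - T_1)},
\qquad \rho = \frac{k - 1}{k}
```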

  1. Gaia FGK Benchmark Stars: New Candidates At Low-Metallicities

    CERN Document Server

    Hawkins, Keith; Heiter, Ulrike; Soubiran, Caroline; Blanco-Cuaresma, Sergi; Casagrande, Luca; Gilmore, Gerry; Lind, Karin; Magrini, Laura; Masseron, Thomas; Pancino, Elena; Randich, Sofia; Worley, Clare C

    2016-01-01

    We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recently proposed set of Gaia FGK benchmark stars of Heiter et al. (2015) has no recommended stars within the critical metallicity range of -2.0 < [Fe/H] < -1.0 dex. In this paper, we aim to add candidate Gaia benchmark stars inside of this metal-poor gap. We began with a sample of 21 metal-poor stars which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. The procedure used to determine the stellar parameters was similar to Heiter et al. (2015) and Jofre et al. (2014) for consistency. The effective temperature (T_eff) of all candidate stars was determined using...

  2. Transient elastography (FibroScan®) with controlled attenuation parameter in the assessment of liver steatosis and fibrosis in patients with nonalcoholic fatty liver disease - Where do we stand?

    Science.gov (United States)

    Mikolasevic, Ivana; Orlic, Lidija; Franjic, Neven; Hauser, Goran; Stimac, Davor; Milic, Sandra

    2016-08-28

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide. Currently, the routinely used modalities are unable to adequately determine the levels of steatosis and fibrosis (laboratory tests and ultrasonography) or cannot be applied as a screening procedure (liver biopsy). Among the non-invasive tests, transient elastography (FibroScan®, TE) with controlled attenuation parameter (CAP) has demonstrated good accuracy in quantifying the levels of liver steatosis and fibrosis in patients with NAFLD, the factors associated with the diagnosis and NAFLD progression. The method is fast, reliable and reproducible, with good intra- and interobserver levels of agreement, thus allowing for population-wide screening and disease follow-up. The initial inability of the procedure to accurately determine fibrosis and steatosis in obese patients has been addressed with the development of the obese-specific XL probe. TE with CAP is a viable alternative to ultrasonography, both as an initial assessment and during follow-up of patients with NAFLD. Its ability to exclude patients with advanced fibrosis may be used to identify low-risk NAFLD patients in whom liver biopsy is not needed, therefore reducing the risk of complications and the financial costs.

  3. Transient elastography (FibroScan®) with controlled attenuation parameter in the assessment of liver steatosis and fibrosis in patients with nonalcoholic fatty liver disease - Where do we stand?

    Science.gov (United States)

    Mikolasevic, Ivana; Orlic, Lidija; Franjic, Neven; Hauser, Goran; Stimac, Davor; Milic, Sandra

    2016-01-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide. Currently, the routinely used modalities are unable to adequately determine the levels of steatosis and fibrosis (laboratory tests and ultrasonography) or cannot be applied as a screening procedure (liver biopsy). Among the non-invasive tests, transient elastography (FibroScan®, TE) with controlled attenuation parameter (CAP) has demonstrated good accuracy in quantifying the levels of liver steatosis and fibrosis in patients with NAFLD, the factors associated with the diagnosis and NAFLD progression. The method is fast, reliable and reproducible, with good intra- and interobserver levels of agreement, thus allowing for population-wide screening and disease follow-up. The initial inability of the procedure to accurately determine fibrosis and steatosis in obese patients has been addressed with the development of the obese-specific XL probe. TE with CAP is a viable alternative to ultrasonography, both as an initial assessment and during follow-up of patients with NAFLD. Its ability to exclude patients with advanced fibrosis may be used to identify low-risk NAFLD patients in whom liver biopsy is not needed, therefore reducing the risk of complications and the financial costs. PMID:27621571

  4. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  5. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  6. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  7. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  8. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
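
    The patent's central idea, fixing the time budget and scaling the problem until the interval expires, can be sketched in a few lines of Python. This is our illustrative reading of the claim, not the patented implementation:

```python
import time

def fixed_time_benchmark(task, interval_s=10.0):
    """Run ever-larger instances of a scalable task until a fixed time
    budget is exhausted; the rating is the largest size completed in time."""
    deadline = time.perf_counter() + interval_s
    n, rating = 1, 0
    while time.perf_counter() < deadline:
        task(n)                       # perform the scalable set of tasks at size n
        if time.perf_counter() <= deadline:
            rating = n                # size n finished within the interval
        n += 1
    return rating

# Example: a toy O(n) workload; faster machines complete larger sizes.
print(fixed_time_benchmark(lambda n: sum(i * i for i in range(n * 100000)), 1.0))
```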

  9. Benchmarking for controllers: Methods, techniques and possibilities

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article focuses on the concept of benchmarking by presenting and discussing its various facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is then treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  10. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  11. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  12. Helix Scan: A Scan Design for Diagnosis

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; HU Yu; LI Xiaowei

    2007-01-01

    Scan design is a widely used design-for-testability technique to improve test quality and efficiency. For the scan-designed circuit, test and diagnosis of the scan chain and the circuit is an important process for silicon debug and yield learning. However, conventional scan designs and diagnosis methods abort the subsequent diagnosis process after diagnosing the scan chain if the scan chain is faulty. In this work, we propose a design-for-diagnosis scan strategy called helix scan and a diagnosis algorithm to address this issue. Unlike previous proposed methods, helix scan has the capability to carry on the diagnosis process without losing information when the scan chain is faulty. What is more, it simplifies scan chain diagnosis and achieves high diagnostic resolution as well as accuracy. Experimental results demonstrate the effectiveness of our design.

  13. Data Assimilation of Benchmark Experiments for Homogenous Thermal / Epithermal Uranium Systems

    International Nuclear Information System (INIS)

    This presentation reports on the data assimilation of benchmark experiments for homogeneous thermal and epithermal uranium systems. The assimilation method is based on Kalman filters using integral parameters and sensitivity coefficients calculated with MONK9 and ENDF/B-VII data. The assimilation process results in an overall improvement of the calculation-benchmark agreement, and may help in the selection of nuclear data after analysis of adjustment trends
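
    A toy numerical version of one assimilation step, with invented numbers purely for illustration, shows how sensitivity coefficients and benchmark measurements adjust the parameters and shrink their covariance:

```python
import numpy as np

p = np.array([1.00, 2.50])              # prior nuclear-data parameters (invented)
M = np.diag([0.04, 0.09])               # prior parameter covariance
G = np.array([[0.8, 0.3],               # sensitivities d(k_eff)/d(parameter)
              [0.2, 0.9]])              # for two benchmark experiments
d = np.array([1.002, 0.998])            # measured benchmark k_eff values
t = np.array([1.010, 0.990])            # k_eff calculated with the prior data
V = np.diag([1e-3**2, 1e-3**2])         # benchmark measurement covariance

K = M @ G.T @ np.linalg.inv(V + G @ M @ G.T)   # Kalman gain
p_post = p + K @ (d - t)                        # adjusted parameters
M_post = M - K @ G @ M                          # reduced posterior covariance
print(p_post, np.diag(M_post))
```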

  14. Instrumental fundamental parameters and selected applications of the microfocus X-ray fluorescence analysis at a scanning electron microscope; Instrumentelle Fundamentalparameter und ausgewaehlte Anwendungen der Mikrofokus-Roentgenfluoreszenzanalyse am Rasterelektronenmikroskop

    Energy Technology Data Exchange (ETDEWEB)

    Rackwitz, Vanessa

    2012-05-30

    For a decade, X-ray sources have been commercially available for microfocus X-ray fluorescence analysis ({mu}-XRF), offering the possibility of extending the analytics of a scanning electron microscope (SEM) with an attached energy-dispersive X-ray spectrometer (EDS). Using {mu}-XRF, it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free quantification with XRF relies on the Sherman equation, which relates the detected X-ray intensity of a fluorescence peak to the content of the element in the sample by means of fundamental parameters. The instrumental fundamental parameters of {mu}-XRF at a SEM/EDS system are the excitation spectrum, consisting of the X-ray tube spectrum and the transmission of the X-ray optics, the geometry, and the spectrometer efficiency. Based on calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties. The algorithms known from the literature for calculating the X-ray tube spectrum are evaluated with regard to deviations in the spectral distribution. Within this work, a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is derived from the detected one, which is measured on a setup developed especially for the direct measurement of X-ray tube spectra, and is compared with the spectrum calculated from the model of this work. A procedure for determining the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular
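
    For orientation, the Sherman equation in its simplest form (monochromatic excitation at energy E0, thick homogeneous sample, primary fluorescence only; notation ours) relates the measured intensity I_i of a fluorescence line of element i to its mass fraction c_i, via the photoionization cross section τ_i, fluorescence yield ω_i, transition probability p_i, the sample's mass attenuation coefficients μ_s at the excitation and fluorescence energies, the incidence and take-off angles ψ1 and ψ2, the detector solid angle Ω and the spectrometer efficiency ε:

```latex
I_i = I_0 \, \frac{\Omega}{4\pi} \, \varepsilon(E_i) \;
      c_i \, \frac{\tau_i(E_0)\,\omega_i\,p_i}
                 {\mu_s(E_0)/\sin\psi_1 + \mu_s(E_i)/\sin\psi_2}
```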

  15. A Scanning Quantum Cryogenic Atom Microscope

    CERN Document Server

    Yang, Fan; Taylor, Stephen F; Turner, Richard W; Lev, Benjamin L

    2016-01-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a noise floor of 300 pT and provides a 100x improvement in magnetic flux sensitivity over previous atomic scanning probe magnetometers. These capabilities are carefully benchmarked by imaging magnet...

  16. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors develop the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic level of understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  17. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
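
    Two of the closed-form results alluded to are simple enough to quote for orientation (stated here as generally reported in this line of work, with p the erasure or phase-flip probability and H2 the binary entropy): the secret-key capacity of the qubit erasure channel and the two-way capacity of the qubit dephasing channel,

```latex
K_{\mathrm{erase}}(p) = 1 - p,
\qquad
C_{\mathrm{deph}}(p) = 1 - H_2(p),
\qquad
H_2(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)
```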

  18. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  19. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  20. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  1. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  2. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    DEFF Research Database (Denmark)

    Menze, Bjoern H.; Jakab, Andras; Bauer, Stefan;

    2015-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low......- and high-grade glioma patients – manually annotated by up to four raters – and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74...... a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing...
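
    The Dice score used throughout BRATS has a one-line definition, 2|A∩B|/(|A|+|B|); a minimal NumPy version (ours, for illustration) is:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks."""
    a = np.asarray(seg_a, bool)
    b = np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 1-D toy "segmentations" overlapping on two of three voxels each.
print(dice([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]))  # -> 2*2/(3+3) = 0.667
```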

  3. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce them.

  4. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  5. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems,and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.
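
    To make the identification model concrete: an n-story shear building is fully specified by its floor masses and interstory stiffnesses, and its modes follow from a generalized eigenproblem. The sketch below, with invented values and helper names of our own, assembles the matrices and extracts the natural frequencies that a calibration procedure would tune to match the first three modes of the full-order model.

```python
import numpy as np
from scipy.linalg import eigh

def shear_building(m, k):
    """Mass and stiffness matrices of an n-story shear building.
    m[i]: floor mass; k[i]: interstory stiffness below floor i."""
    n = len(m)
    M = np.diag(np.asarray(m, float))
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return M, K

# 12 identical stories (invented values); calibration would rescale k
# until the three lowest frequencies match the full-order model.
M, K = shear_building(np.full(12, 1.0e4), np.full(12, 2.0e7))
w2, _ = eigh(K, M)                      # generalized eigenproblem K*phi = w^2*M*phi
print(np.sqrt(w2[:3]) / (2 * np.pi))    # first three natural frequencies (Hz)
```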

  6. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    HU; Sau-Lon; James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses improperness, including the parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Sequentially, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model, if one intends to detect local member damages correctly.

  7. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    Full Text Available The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergy is developed. The sources of synergy in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  8. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  9. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practition

  10. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review of benchmarking definitions, barriers to and advantages from its implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming’s PDCA and Six Sigma DMAIC theory; it provides a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gives a company an idea of how to initiate benchmarking and guides it toward achieving the goal set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  11. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  12. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing, which would allow one to quantify and compare the performance of

  13. Benchmarking implementations of lazy functional languages

    NARCIS (Netherlands)

    Hartel, P.H.; Langendoen, K.G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of the quality of compilers for different lazy functional languages.

  14. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  15. Optimization of scanning parameters for low-dose orbital helical CT

    Institute of Scientific and Technical Information of China (English)

    江时淦; 洪春凤; 王豪; 白雪冰; 张谭俊雄

    2013-01-01

    Objective: To explore the optimization of scanning parameters for low-dose orbital helical CT. Methods: 320 subjects were divided into 16 groups (20 per group) according to combinations of tube current (140, 110, 80 and 60 mA), slice thickness (2 and 3 mm) and pitch (0.75 and 1.5). The mean volume CT dose index (CTDIvol) and dose-length product (DLP) were recorded for each group, and the relationships between tube current, slice thickness, pitch and radiation dose were analysed. Image quality was evaluated comprehensively in terms of image gradation, background noise, anatomical structure and diagnostic adequacy, and the image-quality grades were compared with the rank-sum test. Results: Radiation dose correlated positively with tube current: when the tube current was reduced from 140 mA to 80 mA, CTDIvol and DLP decreased by 42.84% and 42.86% respectively, while image quality remained diagnostically acceptable with no statistically significant difference (P>0.05); at 60 mA the image-quality difference was statistically significant (P<0.05). Radiation dose depended little on slice thickness: reducing it from 3 mm to 2 mm lowered CTDIvol and DLP by only 9.90% and 12.23%, with no significant image-quality difference (P>0.05), although the 2 mm images were noisier than the 3 mm ones. Conclusion: Lowering the tube current and increasing the pitch are effective ways to reduce radiation dose; with the orbital helical CT parameters set to a tube current of 80 mA and a pitch of 1.5, both image quality and radiation dose can be accommodated, and the slice thickness can be chosen according to examination requirements.
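
    The reported reductions are consistent with the near-linear scaling of CT dose indices with tube current at fixed tube voltage. A minimal sketch of that arithmetic (the linear-scaling assumption and the Python code are ours, not the study's):

    # CT dose indices scale roughly linearly with tube current (mA) at fixed kVp,
    # so the expected fractional reduction is (mA_from - mA_to) / mA_from.
    def relative_dose_reduction(ma_from: float, ma_to: float) -> float:
        return (ma_from - ma_to) / ma_from

    print(f"140 mA -> 80 mA: {relative_dose_reduction(140, 80):.2%}")  # ~42.86%, close to the reported drop
    print(f"140 mA -> 60 mA: {relative_dose_reduction(140, 60):.2%}")  # ~57.14%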

  16. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  17. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  18. Synthetic benchmarks for machine olfaction: Classification, segmentation and sensor damage

    Science.gov (United States)

    Ziyatdinov, Andrey; Perera, Alexandre

    2015-01-01

    The design of signal and data processing algorithms requires a validation stage and data relevant for the validation procedure. While sharing public data sets and making use of them is a recent and still ongoing activity in the community, the synthetic benchmarks presented here are an option for researchers who need data for testing and comparing algorithms under development. The collection of synthetic benchmark data sets was generated for classification, segmentation and sensor-damage scenarios, each defined at 5 difficulty levels. The published data are related to the data simulation tool, which was used to create a virtual array of 1020 sensors with a default set of parameters [1]. PMID:26217732

  19. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    Science.gov (United States)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions of this benchmark, due to the nonlinearity of the strain paths involved. The benchmark tool is designed to enable a two-stage draw/reverse-draw continuous forming process. Three typical sheet materials, AA5182-O aluminum and DP600 and TRIP780 steels, are selected for this benchmark study.

  20. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    Benchmarking has become a fundamental part of modern health care systems, but unfortunately, no benchmarking framework is unanimously accepted for assessing both quality and performance. The aim of this paper is to present a benchmarking model that is able to take different stakeholder perspectives...... into account. By presenting performance as a function of a patient perspective, an operations management perspective, and an employee perspective a more holistic approach to benchmarking is proposed. By collecting statistical information from several national and regional agencies and internal databases......, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the model...

  1. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    Science.gov (United States)

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
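
    The Dice score used above to quantify rater and algorithm agreement is simple to compute. The sketch below (our own illustration in Python, not the BRATS evaluation code) applies it to two binary segmentation masks:

    import numpy as np

    def dice_score(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
        # Dice overlap 2|A n B| / (|A| + |B|) between two binary masks.
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Two hypothetical raters' tumor masks with partial overlap.
    rater1 = np.zeros((64, 64), dtype=bool); rater1[20:40, 20:40] = True
    rater2 = np.zeros((64, 64), dtype=bool); rater2[24:44, 22:42] = True
    print(f"Dice = {dice_score(rater1, rater2):.3f}")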

  2. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

    Full Text Available Integrated circuit design has entered the era of IP-based SoC (System on Chip) design, which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design the wrapper scan chains. The design of the test wrapper scan chain is achieved through the population's mutation, crossover and selection operations. Experimental verification is carried out on the international standard benchmark ITC'02. The results show that the algorithm obtains shorter longest wrapper scan chains compared with other algorithms.
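
    The abstract gives no pseudocode, but the general shape of a Differential Evolution loop applied to a wrapper-chain balancing objective can be sketched. Everything below (the encoding, the toy objective of minimizing the longest chain, and the control parameters F and CR) is a hypothetical Python illustration, not the paper's algorithm:

    import random

    def de_minimize(objective, dim, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
        # Generic DE/rand/1/bin minimizer (simplified: omits the usual
        # guaranteed-crossover index).
        lo, hi = bounds
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
        scores = [objective(x) for x in pop]
        for _ in range(iters):
            for i in range(pop_size):
                a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
                trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                         if random.random() < CR else pop[i][d]
                         for d in range(dim)]
                trial = [min(max(v, lo), hi) for v in trial]  # clamp to bounds
                s = objective(trial)
                if s <= scores[i]:  # greedy selection
                    pop[i], scores[i] = trial, s
        best = min(range(pop_size), key=scores.__getitem__)
        return pop[best], scores[best]

    # Toy objective: assign 12 scan cells of given lengths to 4 wrapper chains
    # and minimize the longest chain; continuous genes decode to chain indices.
    cell_lengths = [8, 5, 9, 3, 7, 6, 4, 8, 5, 7, 6, 9]

    def longest_chain(genes):
        chains = [0, 0, 0, 0]
        for g, length in zip(genes, cell_lengths):
            chains[int(g) % 4] += length
        return max(chains)

    best, score = de_minimize(longest_chain, dim=len(cell_lengths), bounds=(0, 4))
    print("best longest-chain length found:", score)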

  3. Theory of second optimization for scan experiment

    CERN Document Server

    Mo, X H

    2015-01-01

    The optimal design of a scan experiment is of great significance both for scientific research and from an economic viewpoint. Two approaches, one having recourse to sampling techniques and the other resorting to analytical proof, are adopted to work out the optimized scan scheme for the relevant parameters. The final results indicate that for an $n$-parameter scan experiment, $n$ energy points are necessary and sufficient for the optimal determination of these $n$ parameters; each optimal position can be acquired by a single-parameter scan (sampling method) or by analysis of an auxiliary function (analytic method); and the luminosity allocation among the points can be determined analytically according to the relative importance of the parameters. By virtue of the second optimization theory established in this paper, a fully optimal scheme can be worked out for any scan experiment.

  4. KENO-IV code benchmark calculation, (4)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multi-group constants library MGCL. The present paper describes the results of a test using criticality experiments on slab-cylinder systems of uranium nitrate solution. In all, 128 experimental cases have been calculated for the slab-cylinder configuration with and without a plexiglass reflector, covering various critical parameters such as the number of cylinders and the height of the uranium nitrate solution. Among several important results, it is shown that the code and library give a fairly good multiplication factor, i.e., k_eff ≈ 1.0, for heavily reflected cases, whereas k_eff ≈ 0.91 for the unreflected ones. This suggests the necessity of a more advanced treatment of the criticality calculation for systems where neutrons can easily leak out during the slowing-down process. (author)

  5. Effect of noise correlations on randomized benchmarking

    Science.gov (United States)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
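
    As a toy illustration of the central claim, that noise-averaged RB outcomes are Γ-distributed with shape and scale set by the correlation structure, one can draw from two Γ distributions with equal means but different shape parameters and observe the larger spread and skew of the "correlated" case. The parameter values below are arbitrary choices for illustration, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    mean_infidelity = 1e-3

    # Equal mean k * theta; a smaller shape k mimics the broader, more skewed
    # distribution the paper associates with long temporal noise correlations.
    for label, k in [("Markovian-like (large shape)", 50.0),
                     ("correlated-like (small shape)", 2.0)]:
        theta = mean_infidelity / k
        s = rng.gamma(k, theta, size=100_000)
        skew = ((s - s.mean()) ** 3).mean() / s.std() ** 3
        print(f"{label:30s} mean={s.mean():.2e} std={s.std():.2e} skew={skew:+.2f}")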

  6. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    NARCIS (Netherlands)

    Appel MJ; Bouman HGM; Pieters MN; Slob W; CSR

    2001-01-01

    Five substances in the working environment for which risk evaluations were available were selected for analysis with the benchmark approach. The critical studies were analysed for each of these substances. The toxicological parameters examined comprised both continuous and ordinal data.

  7. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed

  8. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent in the different approaches.

  9. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
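
    The TKO score described above can be prototyped in a few lines. The sketch below uses a plain discrete-time SI process (a simplification of the paper's SIR/SIS dynamics) on a toy temporal edge list; the network, probabilities and trial count are invented for illustration:

    import random

    def si_final_size(edges_by_step, seed, removed=None, p=0.5, trials=500):
        # Mean final infected count of a discrete-time SI process on a
        # temporal network (a list of per-step edge lists), averaged over trials.
        total = 0
        for t in range(trials):
            rng = random.Random(t)
            infected = {seed} - {removed}
            for edges in edges_by_step:
                new = set()
                for u, v in edges:
                    if removed in (u, v):
                        continue
                    if (u in infected) != (v in infected) and rng.random() < p:
                        new.update((u, v))
                infected |= new
            total += len(infected)
        return total / trials

    # TKO-style score: drop in expected spread caused by removing each node.
    edges_by_step = [[(0, 1), (2, 3)], [(1, 2), (4, 5)], [(3, 4), (0, 5)]]
    baseline = si_final_size(edges_by_step, seed=0)
    for node in range(1, 6):
        tko = baseline - si_final_size(edges_by_step, seed=0, removed=node)
        print(f"node {node}: TKO-style score = {tko:.2f}")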

  10. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control identifying for exposure and outcome in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories . A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  11. Earnings Benchmarks in International hotel firms

    Directory of Open Access Journals (Sweden)

    Laura Parte Esteban

    2011-11-01

    Full Text Available This paper focuses on earnings management around earnings benchmarks (the avoiding-losses and avoiding-earnings-decreases hypotheses) in international and non-international firms belonging to the Spanish hotel industry. First, frequency histograms are used to determine the existence of a discontinuity in earnings in both segments. Second, the use of discretionary accruals as a tool to meet earnings benchmarks is analysed in international and non-international firms. Empirical evidence shows that both international and non-international firms meet earnings benchmarks. Different behaviour between international and non-international firms is also noted.
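
    A frequency-histogram discontinuity check of the kind used in this literature can be sketched as follows; the synthetic earnings data and the simple Poisson-style expected count are our own illustration, not the paper's test:

    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic scaled earnings in which small losses are "managed" above zero.
    earnings = rng.normal(0.02, 0.05, 5000)
    managed = (earnings > -0.01) & (earnings < 0)
    earnings[managed] += 0.012  # push small losses into small profits

    bins = np.linspace(-0.2, 0.2, 41)  # bin width 0.01, zero at the middle edge
    counts, _ = np.histogram(earnings, bins=bins)
    just_below = counts[19]                   # bin [-0.01, 0)
    expected = (counts[18] + counts[20]) / 2  # average of its two neighbours
    z = (just_below - expected) / np.sqrt(max(expected, 1.0))
    # A strongly negative z for the small-loss bin is the discontinuity signature.
    print(f"[-0.01, 0) bin: {just_below} observed vs ~{expected:.0f} expected (z = {z:+.1f})")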

  12. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. It contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillations observed at Boiling Water Reactor plants. Stability is part of Northern States Power's Reload Safety Evaluation of the Monticello Plant.

  13. Research on the Effect of Spectral Scanning Parameters on the Quantitative Analysis Model of Total Acids and Amino Acid Nitrogen in Soy Sauce

    Institute of Scientific and Technical Information of China (English)

    胡亚云; 崔璐

    2015-01-01

    Near-infrared spectral scanning parameters suitable for the quantitative analysis of total acids and amino acid nitrogen in soy sauce were studied. By setting different resolutions and numbers of scans, transmission spectra of soy sauce were acquired in a cuvette with a 1 mm optical path under each combination of parameters, and quantitative calibration models were established by PLS with cross-validation. The results show that the optimal model is obtained over the spectral range 12000~4000 cm-1 at a resolution of 8 cm-1 with 64 scans.
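
    As a sketch of the PLS-with-cross-validation workflow described above, using scikit-learn on synthetic stand-in spectra (the data, component counts and scoring choices below are placeholders, not the study's):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_samples, n_wavenumbers = 80, 500  # stand-in for a 12000~4000 cm-1 grid
    X = rng.normal(size=(n_samples, n_wavenumbers))
    true_coef = rng.normal(size=n_wavenumbers) * (rng.random(n_wavenumbers) < 0.05)
    y = X @ true_coef + 0.1 * rng.normal(size=n_samples)  # e.g. total acid content

    for n_comp in (2, 5, 10):
        pls = PLSRegression(n_components=n_comp)
        r2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
        print(f"{n_comp:2d} components: mean cross-validated R^2 = {r2:.3f}")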

  14. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
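
    The correlation part of such a study is easy to outline. The sketch below computes Pearson and Spearman coefficients with SciPy on synthetic samples; the input and response names are invented and do not come from the benchmark specification:

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(7)
    n = 300  # mirrors the 300-sample study design
    inputs = {
        "coolant_T": rng.normal(580.0, 5.0, n),        # hypothetical boundary condition
        "gap_thickness": rng.normal(80e-6, 5e-6, n),   # hypothetical tolerance
        "fuel_conductivity": rng.normal(3.5, 0.2, n),  # hypothetical property
    }
    # Hypothetical response, e.g. a centerline temperature with nonlinear dependence.
    response = (inputs["coolant_T"] + 900.0 / inputs["fuel_conductivity"]
                + 2e5 * inputs["gap_thickness"] + rng.normal(0.0, 5.0, n))

    for name, x in inputs.items():
        print(f"{name:18s} Pearson = {pearsonr(x, response)[0]:+.2f}  "
              f"Spearman = {spearmanr(x, response)[0]:+.2f}")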

  15. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  16. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  17. Clinical study on the combined diagnosis of liver fibrosis and cirrhosis by FibroScan and multi-parameter models

    Institute of Scientific and Technical Information of China (English)

    蔺咏梅; 韩洁; 陆长春; 皇甫彤; 段作斌; 李秀花

    2015-01-01

    Objective: To explore the value of FibroScan combined with multi-parameter models in the diagnosis of liver fibrosis and cirrhosis. Methods: 52 patients with liver fibrosis and 27 patients with liver cirrhosis were enrolled. AST, ALT, G and PLT levels were measured to calculate the FibroIndex, FIB-4 and APRI model values, and FibroScan examinations were performed. The diagnostic efficiency of the model parameters combined with FibroScan was compared with that of each approach alone. Results: APRI, FIB-4, FibroIndex and FibroScan values were significantly higher in the cirrhosis group than in the fibrosis group, with a rising trend within groups, and FibroScan correlated significantly with all three models (P<0.05); combining FibroScan with the multi-parameter models improved diagnostic accuracy and AUC. Conclusion: FibroScan combined with multi-parameter models can improve the diagnostic accuracy for hepatic fibrosis and cirrhosis.
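
    The serum-marker indices named above have standard published definitions (APRI per Wai et al.; FIB-4 per Sterling et al.). The abstract does not restate them, so the sketch below quotes those standard formulas from the wider literature rather than from this study; the laboratory values are illustrative only:

    import math

    def apri(ast_iu_l: float, ast_uln_iu_l: float, platelets_1e9_l: float) -> float:
        # APRI = (AST / upper limit of normal) / platelets(10^9/L) * 100.
        return (ast_iu_l / ast_uln_iu_l) / platelets_1e9_l * 100.0

    def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float,
             platelets_1e9_l: float) -> float:
        # FIB-4 = age * AST / (platelets(10^9/L) * sqrt(ALT)).
        return age_years * ast_iu_l / (platelets_1e9_l * math.sqrt(alt_iu_l))

    print(f"APRI  = {apri(80, 40, 120):.2f}")
    print(f"FIB-4 = {fib4(55, 80, 60, 120):.2f}")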

  18. Heart PET scan

    Science.gov (United States)

    ... nuclear medicine scan; Heart positron emission tomography; Myocardial PET scan ... A PET scan requires a small amount of radioactive material (tracer). This tracer is given through a vein (IV), ...

  19. Coronary Calcium Scan

    Science.gov (United States)

    What Is a Coronary Calcium Scan? A coronary calcium scan is a test ... you have calcifications in your coronary arteries. (Figure A shows the position of the ...)

  20. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  1. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  2. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...

  3. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  4. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  5. Benchmarking implementations of lazy functional languages

    OpenAIRE

    Hartel, P.H.; Langendoen, K. G.

    1993-01-01

    Five implementations of different lazy functional languages are compared using a common benchmark of a dozen medium size programs. The benchmarking procedure has been designed such that one set of programs can be translated automatically into different languages, thus allowing a fair comparison of the quality of compilers for different lazy functional languages. Aspects studied include compile time, execution time, and ease of programming, determined by the availability of certain key features

  6. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  7. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
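
    One minimal way to realize the scoring system called for in (2) is to normalize each data-model mismatch and aggregate with weights. In the sketch below, the metric (1 minus normalized RMSE), the variables and the weights are our own illustrative choices, not part of the proposed framework:

    import math

    def skill_score(model, obs):
        # 1 - NRMSE: 1 is a perfect match, 0 or below is a poor one.
        n = len(obs)
        rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
        spread = max(obs) - min(obs)
        return 1.0 - rmse / spread

    # Aggregate benchmark score across processes with user-chosen weights.
    benchmarks = {
        "GPP": ([2.1, 2.7, 3.0], [2.0, 2.5, 3.2], 0.5),   # (model, obs, weight)
        "latent_heat": ([80, 95, 110], [85, 90, 120], 0.3),
        "soil_C": ([1.4e3, 1.6e3, 1.5e3], [1.5e3, 1.7e3, 1.4e3], 0.2),
    }
    total = sum(w * skill_score(m, o) for m, o, w in benchmarks.values())
    print(f"aggregate benchmark score = {total:.2f}")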

  8. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family.

  9. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  10. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmark ranges of performance (<6, 6–7.99, and ≥8 seconds). PMID:24174581
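
    Applying the validated cut points is a one-line bucketing exercise; a minimal sketch assuming the <6, 6-7.99 and >=8 second ranges used above:

    def t25fw_category(seconds: float) -> str:
        # Bucket a Timed 25-Foot Walk time by the benchmark ranges above.
        if seconds < 6.0:
            return "< 6 s"
        if seconds < 8.0:
            return "6 - 7.99 s"
        return ">= 8 s"

    for t in (4.2, 6.8, 9.5):
        print(t, "->", t25fw_category(t))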

  11. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  12. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Science.gov (United States)

    Tang, T. F.; Xu, X. Q.; Ma, C. H.; Bass, E. M.; Holland, C.; Candy, J.

    2016-03-01

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has recently been implemented in the BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform the beta scan based on the “Cyclone base case parameter set.” We find that the growth rate is almost the same for the two codes, and the KBM mode is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code, including a small electron-electron collision frequency to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for the high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  13. Scan BIST with biased scan test signals

    Institute of Scientific and Technical Information of China (English)

    XIANG Dong; CHEN MingJing; SUN JiaGuang

    2008-01-01

    The conventional test-per-scan built-in self-test (BIST) scheme needs a number of shift cycles followed by one capture cycle. Fault effects received by the scan flip-flops are shifted out while shifting in the next test vector, as in scan testing. Unlike deterministic testing, it is unnecessary to apply a complete test vector to the scan chains. A new scan-based BIST scheme is proposed by properly controlling the test signals of the scan chains. Different biased random values are assigned to the test signals of scan flip-flops in separate scan chains. Capture cycles can be inserted at any clock cycle if necessary. A new testability estimation procedure for the proposed testing scheme is presented, along with a greedy procedure to select a weight for each scan chain. Experimental results show that the proposed method greatly improves the test effectiveness of scan-based BIST, and that most circuits obtain complete fault coverage or come very close to it.
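
    The core idea, per-chain biased (weighted) random scan-in values, can be sketched directly; the weights below are invented stand-ins for those the paper's greedy procedure would select:

    import random

    def biased_scan_vectors(chain_weights, shift_cycles, rng=None):
        # Generate one pseudo-random scan-in vector per scan chain, where each
        # chain's bits are 1 with that chain's own bias (weight).
        rng = rng or random.Random(2024)
        return [[1 if rng.random() < w else 0 for _ in range(shift_cycles)]
                for w in chain_weights]

    # Hypothetical per-chain weights; 0.5 would be the conventional unbiased case.
    weights = [0.5, 0.8, 0.2, 0.9]
    for chain, bits in enumerate(biased_scan_vectors(weights, shift_cycles=16)):
        print(f"chain {chain} (w={weights[chain]}):", "".join(map(str, bits)))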

  14. Benchmarking of hospital information systems – a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established in the German-speaking countries. They support hospitals in comparing and positioning their information systems' and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate one, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  15. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
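
    The evolution equation summarized here is, for pure input states and a one-sided channel, the factorization law of Konrad et al. (Nature Physics 4, 99 (2008)). Rendered in LaTeX for concreteness (the formula is quoted from that published work, since the abstract states it only in words; C denotes the concurrence and $ the channel):

        C\bigl[(\mathbb{1}\otimes\$)\,|\psi\rangle\langle\psi|\bigr]
          \;=\; C\bigl[(\mathbb{1}\otimes\$)\,|\phi^{+}\rangle\langle\phi^{+}|\bigr]
          \cdot C\bigl(|\psi\rangle\langle\psi|\bigr)

    where |\phi^{+}\rangle is a maximally entangled state, so the first factor on the right is exactly the benchmark provided by the maximally entangled state.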

  16. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  17. Influence of some experimental parameters on the results of differential scanning calorimetry - DSC

    Directory of Open Access Journals (Sweden)

    Cláudia Bernal

    2002-09-01

    Full Text Available A series of experiments was performed in order to demonstrate to undergraduate students and users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe, and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  18. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a government initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking...... perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight which effects, possibilities and challenges follow in the wake of using this kind

  19. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  20. Analysis of PSBT benchmark exercises for void distribution and DNB using a subchannel code MATRA

    International Nuclear Information System (INIS)

    In the framework of the OECD/NRC PSBT benchmark, the subchannel-grade void distribution data and DNB data were evaluated with the subchannel code MATRA. The zone-averaged void fraction at the central region of the 5x5 test bundle was compared with the benchmark data. Optimum values of the turbulent mixing parameter, an input parameter of the MATRA code, were evaluated by employing the subchannel fluid temperature data. The influence of mixing vanes on the subchannel flow distribution was examined through a CFD analysis. The steady-state DNB benchmark data with uniform and non-uniform axial power shapes were evaluated with several DNB prediction models, including an empirical correlation, a CHF lookup table, and representative mechanistic DNB models based on subchannel cross-sectional averaged local properties. (author)

  1. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  2. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  3. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when adopting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.

  4. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives, and advantages of internal benchmarking. A sequence of stages for internal benchmarking technology is formed, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standardized research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article. The sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation toward the best achievement by comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods applied to reference objects to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify the types of benchmarking, identify their characteristics, and study appropriate areas of use and methodologies of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency for processes and activities available at the enterprise; identification of current problems and areas that need improvement without involvement of foreign experience

  5. Organizational and economic aspects of benchmarking innovative products at the automobile industry enterprises

    Directory of Open Access Journals (Sweden)

    L.M. Taraniuk

    2016-06-01

    Full Text Available The aim of the article. The aim of the article is to determine the nature and characteristics of the use of benchmarking in the activity of domestic automobile industry enterprises under current economic conditions. The results of the analysis. The article defines the concept of benchmarking, examines the stages of benchmarking, and assesses the efficiency of benchmarking in the work of automakers. The historical aspects of the emergence of the benchmarking method in world economics are considered, and the economic aspects of benchmarking in the work of automobile industry enterprises are determined. The stages of benchmarking of innovative products are analysed against the modern development of the productive forces and the impact of market factors on the economic activities of companies, including automobile industry enterprises. Attention is focused on the specifics of implementing benchmarking at automobile industry companies. Statistics on the number of owners of electric vehicles worldwide are considered, and the authors research the market for electric vehicles in Ukraine. The need to use benchmarking to improve the competitiveness of the national automobile industry, especially CJSC “Zaporizhia Automobile Building Plant”, is considered, and reasonable steps for its improvement are suggested. The authors improve the methodical approach to selecting vehicles with the best technical parameters based on benchmarking; unlike existing approaches, it rests on the calculation of an integral factor of the technical specifications of vehicles in order to identify the more competitive products of automobile industry companies among those evaluated. The main indicators of the national production of electric vehicles are shown. Attention is paid to the development of important ways forward for CJSC “Zaporizhia Automobile Building Plant”, and the authors establish the aspects that need attention in the management of the

  6. Conventional cerebrospinal fluid scanning

    Energy Technology Data Exchange (ETDEWEB)

    Schicha, H.

    1985-06-01

    Conventional cerebrospinal fluid scanning (CSF scanning) today is mainly carried out in addition to computerized tomography to obtain information about liquor flow kinetics. Especially in patients with communicating obstructive hydrocephalus, CSF scanning is clinically useful for the decision for shunt surgery. In patients with intracranial cysts, CSF scanning can provide information about liquor circulation. Further indications for CSF scanning include the assessment of shunt patency especially in children, as well as the detection and localization of cerebrospinal fluid leaks.

  7. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  8. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  9. Uncertainty Analysis for OECD-NEA-UAM Benchmark Problem of TMI-1 PWR Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K.W.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    A quantification of code uncertainty is one of the main questions continually asked by regulatory bodies such as KINS. Utilities and code developers address the issue case by case because a general answer to this question is still open. Under these circumstances, OECD-NEA has built global consensus on uncertainty quantification through the UAM benchmark program. OECD-NEA benchmark II-2 is a problem on the uncertainty quantification of subchannel codes: the uncertainties of the fuel temperature and the ONB location in the TMI-1 fuel assembly are estimated under transient and steady-state conditions. In this study, the uncertainty quantification of the MATRA code is performed on this problem. A workbench platform was developed to produce the large set of inputs needed to estimate the uncertainty on the benchmark problem, and direct Monte Carlo sampling was used to draw random samples from the input PDFs; 2000 random parameter sets were extracted. The workbench program generates the input files and post-processes the calculation results. The uncertainty induced by the input parameters was estimated for the DNBR and for the cladding and coolant temperatures.
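
    As a minimal sketch of the direct Monte Carlo step described above, the fragment below propagates assumed input distributions through a toy response function standing in for an actual MATRA run; the distributions and the response are illustrative assumptions, not the study's values.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 2000  # number of random samples, matching the study above

        # Assumed input PDFs (placeholders, not the benchmark's actual ones):
        power = rng.normal(1.00, 0.02, N)        # relative rod power
        inlet_temp = rng.normal(290.0, 1.5, N)   # inlet coolant temperature [C]
        flow = rng.normal(1.00, 0.03, N)         # relative mass flow rate

        def toy_response(p, t_in, w):
            """Stand-in for a subchannel-code run returning one output of interest."""
            return t_in + 60.0 * p / w           # crude coolant-temperature response

        out = toy_response(power, inlet_temp, flow)
        print(f"mean = {out.mean():.2f}, std = {out.std(ddof=1):.2f}")
        print("95% interval:", np.percentile(out, [2.5, 97.5]))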

  10. Construction of Benchmark Problems for Solution of Ordinary Differential Equations

    Directory of Open Access Journals (Sweden)

    Sangchul Lee

    1994-01-01

    Full Text Available An inverse method is introduced to construct benchmark problems for the numerical solution of initial value problems. Benchmark problems constructed in this fashion have a known exact solution, even though analytical solutions are generally not obtainable. The process leading to the exact solution makes use of an initially available approximate numerical solution. A smooth interpolation of the approximate solution is forced to exactly satisfy the differential equation by analytically deriving a small forcing function to absorb all of the errors in the interpolated approximate solution. Using this special case exact solution, it is possible to directly investigate the relationship between global errors of a candidate numerical solution process and the associated tuning parameters for a given code and a given problem. Under the assumption that the original differential equation is well-posed with respect to the small perturbations, we thereby obtain valuable information about the optimal choice of the tuning parameters and the achievable accuracy of the numerical solution. Five illustrative examples are presented.
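
    A minimal sketch of the inverse construction just described, under the assumption of a simple test equation: a spline interpolant of an approximate numerical solution becomes the exact solution of a perturbed problem once the forcing function absorbing the interpolation error is derived analytically.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.interpolate import CubicSpline

        def f(t, y):                       # original right-hand side: y' = -y + sin(t)
            return -y + np.sin(t)

        # Step 1: an initially available approximate numerical solution.
        sol = solve_ivp(f, (0.0, 10.0), [1.0], max_step=0.1)
        p = CubicSpline(sol.t, sol.y[0])   # smooth interpolation of that approximation

        # Step 2: forcing that makes p exact, g(t) = p'(t) - f(t, p(t)).
        def g(t):
            return p(t, 1) - f(t, p(t))    # p(t, 1) evaluates the spline derivative

        # Step 3: the benchmark problem y' = f(t, y) + g(t) has p as its exact
        # solution, so global errors of a candidate solver are directly measurable.
        def benchmark_rhs(t, y):
            return f(t, y) + g(t)

        test = solve_ivp(benchmark_rhs, (0.0, 10.0), [p(0.0)], rtol=1e-8, atol=1e-10)
        print("max global error:", np.max(np.abs(test.y[0] - p(test.t))))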

  11. Power loss benchmark of nine-switch converters in three-phase online-UPS application

    DEFF Research Database (Denmark)

    Qin, Zian; Loh, Poh Chiang; Blaabjerg, Frede

    2014-01-01

    as parameters like modulation index and phase angle of the load are taken into account. The power loss benchmark thus becomes a guide for users to make the best use of the advantages, and bypass the disadvantages, of nine-switch converters. The results are finally verified on a 1.5 kW prototype....

  12. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT(D2O) and TRX benchmark lattices have been re-examined and revised. The revisions correct several systematic errors that have been previously ignored or considered insignificant. These systematic errors are discussed in detail. The final corrected values are presented

  13. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP2Bench, a publicly available, language-specific SPARQL performance benchmark. SP2Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP2Bench to existing engines and discuss ...

  14. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  15. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  16. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  17. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...

  18. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are noted throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
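
    A minimal sketch of a band-rating lookup of the kind proposed above; the litres-per-person-per-day thresholds are placeholder assumptions, not the paper's calibrated bands.

        def water_band(litres_per_person_per_day):
            """Return a band label from 'A' (best) to 'F' for daily per-capita use."""
            bands = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
            for limit, band in bands:  # assumed illustrative thresholds
                if litres_per_person_per_day <= limit:
                    return band
            return "F"

        for use in (75, 118, 165):  # example per-capita consumption figures
            print(f"{use} L/person/day -> band {water_band(use)}")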

  19. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  20. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost efficiency estimates for each TSO, as well as dynamic results in terms of technological improvement rate and efficiency catch-up speed. In this paper, we provide the methodology for the benchmarking, using non-parametric DEA under weight restrictions, as well as an analysis of the static cost efficiency...

  1. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  2. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  3. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analysis on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable a decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption as such but the energy consumption per pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
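
    A minimal sketch of the estimation idea described above: daily energy readings are divided by a pollutant load that is sampled in the laboratory only about every 14 days, so intermediate days use an interpolated load carried with an assumed uncertainty band (all numbers below are illustrative).

        import numpy as np

        days = np.arange(0, 29)                        # four weeks of daily operation
        energy_kwh = 4000 + 300 * np.sin(days / 4.0)   # daily energy from on-line sensors

        lab_days = np.array([0, 14, 28])               # ~14-day laboratory samples
        lab_load = np.array([51000.0, 55000.0, 50000.0])  # pollutant load, e.g. in PE

        load_est = np.interp(days, lab_days, lab_load) # point estimate between samples
        rel_unc = 0.10                                 # assumed +/-10% estimation band

        kpi_low = energy_kwh / (load_est * (1 + rel_unc))   # kWh per unit load, interval
        kpi_high = energy_kwh / (load_est * (1 - rel_unc))

        benchmark = 0.07  # assumed target; flag days whose whole interval exceeds it
        print("days flagged for follow-up:", days[kpi_low > benchmark])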

  5. Rapid Frequency Scan EPR

    OpenAIRE

    Tseitlin, Mark; Rinard, George A.; Quine, Richard W.; Eaton, Sandra S.; Eaton, Gareth R.

    2011-01-01

    In rapid frequency scan EPR with triangular scans, sufficient time must be allowed to ensure that the magnetization in the x,y plane decays to baseline at the end of the scan, which typically is about 5 T2 after the spins are excited. To permit relaxation of signals excited toward the extremes of the scan, the total scan time required may be much longer than 5 T2. However, with periodic, saw-tooth excitation, the slow-scan EPR spectrum can be recovered by Fourier deconvolution of data recorded...
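
    A minimal sketch of the Fourier deconvolution referred to above, under toy assumptions: the recorded rapid-scan signal is modelled as the slow-scan spectrum convolved with a scan kernel, and the spectrum is recovered by regularized division in the frequency domain (the kernel, lineshape and regularization constant are all placeholders).

        import numpy as np

        n = 4096
        t = np.arange(n)
        kernel = np.exp(-t / 200.0)                   # assumed effective scan kernel
        spectrum = np.exp(-((t - 2048) / 40.0) ** 2)  # toy slow-scan lineshape
        recorded = np.fft.ifft(np.fft.fft(spectrum) * np.fft.fft(kernel)).real

        K = np.fft.fft(kernel)
        R = np.fft.fft(recorded)
        eps = 1e-3 * np.max(np.abs(K))                # regularization avoids blow-up
        recovered = np.fft.ifft(R * np.conj(K) / (np.abs(K) ** 2 + eps ** 2)).real

        print("max reconstruction error:", np.max(np.abs(recovered - spectrum)))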

  6. Extraction of electron beam dose parameters from EBT2 film data scored in a mini phantom.

    Science.gov (United States)

    O'Reilly, Dedri; Smit, Cobus J L; du Plessis, Freek C P

    2013-09-01

    Quality assurance of medical linear accelerators includes the measurement of dosimetric parameters of therapeutic electron beams, e.g. the depth of the 80% relative dose (R₈₀). This parameter must be within a tolerance of 0.2 cm of the declared value. Cumbersome water tank measurements can be regarded as a benchmark for measuring electron depth dose curves. A mini-phantom was designed and built, in which a strip of GAFCHROMIC® EBT2 film could be encased tightly for electron beam depth dose measurement. Depth dose data were measured for ELEKTA Sl25 MLC, ELEKTA Precise, and ELEKTA Synergy (Elekta Oncology Systems, Crawley, UK) machines. The electron beam energies ranged from 4 to 22 MeV across the machines. A 10 × 10 cm² electron applicator with 95 cm source-surface-distance was used on all the machines. Twenty-four hours after irradiation, the EBT2 film strips were scanned on a Canon CanoScan N670U scanner. Afterwards, the data were analysed with in-house developed software that performed optical density to dose conversion and optimal fitting of the PDD data to de-noise the raw data. From the PDD data, R₈₀ values were solved for and compared with acceptance values. A series of tests was also carried out to validate the use of the scanner for film dosimetry; these tests are presented in this study. It was found that this method of R₈₀ evaluation was reliable, with good agreement with benchmark water tank measurements using a commercial parallel plate ionization chamber as the radiation detector. The EBT2 film data yielded R₈₀ values that were on average 0.06 cm different from benchmark water-tank-measured R₈₀ values. PMID:23794059
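
    A minimal sketch of solving for R₈₀ from a noisy depth-dose curve, analogous in spirit to the film workflow above; the dose profile, noise level and polynomial fit are assumptions, and the actual in-house software is not reproduced here.

        import numpy as np
        from numpy.polynomial import polynomial as P

        depth = np.linspace(0.0, 5.0, 101)                      # depth in cm
        true_pdd = 100.0 * np.exp(-((depth - 1.2) / 1.6) ** 2)  # toy electron PDD
        noisy = true_pdd + np.random.default_rng(1).normal(0.0, 1.0, depth.size)

        coeffs = P.polyfit(depth, noisy, 8)     # smooth fit to de-noise the data
        fitted = P.polyval(depth, coeffs)

        i_max = np.argmax(fitted)
        target = 0.8 * fitted[i_max]
        # First depth beyond the dose maximum where the fit falls to 80%:
        r80 = depth[i_max:][np.argmin(np.abs(fitted[i_max:] - target))]
        print(f"R80 ~ {r80:.2f} cm")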

  7. Higgs-Boson Benchmarks in Agreement with CDM, EWPO and BPO

    CERN Document Server

    Heinemeyer, S

    2007-01-01

    We explore `benchmark planes' in the Minimal Supersymmetric Standard Model (MSSM) that are in agreement with the measured cold dark matter (CDM) density, electroweak precision observables (EWPO) and B physics observables (BPO). The M_A-tan_beta planes are specified assuming that gaugino masses m_{1/2}, soft trilinear supersymmetry-breaking parameters A_0 and the soft supersymmetry-breaking contributions m_0 to the squark and slepton masses are universal, but not those associated with the Higgs multiplets (the NUHM framework). We discuss the prospects for probing experimentally these benchmark surfaces at the Tevatron collider, the LHC and the ILC.

  8. Gaia benchmark stars and their twins in the Gaia-ESO Survey

    Science.gov (United States)

    Jofré, P.

    2016-09-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent activities on these stars. Additionally, by applying our new method for finding stellar twins to the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  9. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  10. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...... organizational relations, behaviors and actions. In closing it is briefly considered how to study the calculative practices of benchmarking....

  11. RBC nuclear scan

    Science.gov (United States)

    An RBC nuclear scan uses small amounts of radioactive material to ...

  12. Evaluation of scanning parameters in CT simulation on radiotherapy planning

    Institute of Scientific and Technical Information of China (English)

    李定杰; 刘如; 毛荣虎; 吴慧; 雷宏昌; 王建华

    2012-01-01

    OBJECTIVE: To study the influence of different CT scanning conditions on CT values and monitor units (MU) using heterogeneous equivalent phantoms. METHODS: Two CT bore sizes (80 cm and 70 cm), two geometrical arrangements of the phantom, and two scanning voltages (120 kV and 140 kV) were used. The CT values of the different combinations were measured and compared, and the corresponding CT-electron density (CT-ED) conversion curves were established. CT images of thirty patients (ten pelvic, ten thoracic, and ten head-and-neck cases) were used to create conformal (CRT) and intensity-modulated (IMRT) treatment plans, and the MU deviations of the plans were analyzed. RESULTS: For the small-bore CT, the MU deviation was within 0.1% for any change of scanning voltage or phantom arrangement. For the large-bore CT, a change of scanning voltage had no influence on the MU value; a change in the geometrical arrangement of the phantom did have an influence, but the deviation was less than 0.3%. The deviations were the same for the CRT and IMRT plans. CONCLUSION: For a precise radiotherapy planning system, changes in CT scanning conditions and in the position of the measured phantom can both introduce errors in the delivered MU. If a large-bore CT is used for simulation, an appropriate electron density curve should be established according to its characteristics and applied in treatment planning.
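
    A minimal sketch of applying a CT-to-electron-density (CT-ED) conversion curve of the kind whose scanner dependence is studied above; the calibration points below are placeholders, not the measured values from the paper.

        import numpy as np

        # Calibration points: (Hounsfield units, relative electron density)
        hu_points = np.array([-1000.0, -500.0, 0.0, 300.0, 1200.0, 3000.0])
        ed_points = np.array([0.00, 0.50, 1.00, 1.15, 1.70, 2.50])

        def hu_to_ed(hu):
            """Piecewise-linear lookup on the CT-ED calibration curve."""
            return np.interp(hu, hu_points, ed_points)

        print(hu_to_ed(np.array([-800.0, 40.0, 900.0])))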

  13. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    solutions to the problem have been proposed so far including, for instance, evolutionary techniques, swarm intelligence or ad hoc solutions. However, the large diversity of the solutions and the lack of a common benchmark, made any comparative analysis of the different solutions extremely difficult...

  14. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  15. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  16. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter shows considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  17. FinPar: A Parallel Financial Benchmark

    DEFF Research Database (Denmark)

    Andreetta, Christian; Begot, Vivien; Berthold, Jost;

    2016-01-01

    sensitive to the input dataset and therefore requires multiple code versions that are optimized differently, which also raises maintainability problems. This article presents three array-based applications from the financial domain that are suitable for gpgpu execution. Common benchmark-design practice has...

  18. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
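
    A minimal sketch of the benchmark idea, under assumed trial sizes and an assumed dose-toxicity curve: each simulated patient carries a latent tolerance that fixes the toxicity outcome at every dose simultaneously, and dose selection from this complete data bounds the accuracy any practical design can reach (see O'Quigley et al., 2002, for the formal construction).

        import numpy as np

        rng = np.random.default_rng(0)
        true_tox = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # assumed toxicity curve
        target = 0.25
        true_mtd = np.argmin(np.abs(true_tox - target))

        n_patients, n_sims = 30, 5000
        correct = 0
        for _ in range(n_sims):
            u = rng.uniform(size=n_patients)        # latent patient tolerances
            tox = u[:, None] < true_tox[None, :]    # complete outcome matrix
            phat = tox.mean(axis=0)                 # empirical rate at each dose
            correct += np.argmin(np.abs(phat - target)) == true_mtd
        print(f"benchmark accuracy upper bound ~ {correct / n_sims:.2f}")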

  19. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  20. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  1. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  2. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  3. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require a great deal of operator time (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench

  4. Pulmonary ventilation/perfusion scan

    Science.gov (United States)

    V/Q scan; Ventilation/perfusion scan; Lung ventilation/perfusion scan ... A pulmonary ventilation/perfusion scan is actually two tests. They may be done separately or together. During the perfusion scan, a health care ...

  6. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed to facilitate schools' access to benchmarks and to support the evaluation of energy consumption. The overall purpose is to draw increased attention to the electricity consumption of each individual school by publishing benchmarks which take the school's age and number of pupils, as well as after-school activities, into account. Benchmarks can be used to produce green accounts and serve as markers in e.g. energy conservation campaigns, energy management and educational activities. The internet tool can be found on www.energiguiden.dk. (BA)
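
    A minimal sketch of the normalization behind such benchmarks: consumption is compared per pupil with a crude correction for after-school use (the correction model and the hour figures are assumptions of this sketch).

        def school_benchmark(kwh_per_year, pupils, after_school_hours_per_week=0.0):
            """kWh per pupil per year, crudely corrected for after-school activities."""
            teaching_hours = 35.0  # assumed weekly teaching hours
            usage_factor = 1.0 + after_school_hours_per_week / teaching_hours
            return kwh_per_year / (pupils * usage_factor)

        print(school_benchmark(120000, 450, after_school_hours_per_week=10.0))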

  7. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files

  8. Preliminary study of the optimization of abdominal CT scanning parameters on 64-slice spiral CT

    Institute of Scientific and Technical Information of China (English)

    胡敏霞; 赵心明; 宋俊峰; 周纯武; 赵红枫

    2011-01-01

    Objective: To investigate the appropriate low tube current for abdominal CT on a 64-slice spiral CT scanner. Methods: (1) Phantom study: The Catphan500R phantom was scanned with a fixed 120 kVp and 450, 400, 380, 360, 340, 320, 300 and 280 mA, respectively. Low-contrast objects of 15, 9, 8, 7 and 6 mm diameter with 1% contrast were scanned to evaluate image quality. CT images were graded in terms of low-contrast conspicuity using a five-point scale. Statistical analyses were performed to determine the appropriate tube current and the interval leading to a qualitative change. (2) Clinical study: 3 groups of 45 patients who had two examinations of non-enhanced abdominal CT within 3 months were enrolled. All patients were scanned with 450 mA at the first scanning. For the second scanning, group 1 was scanned with the optimal tube current, group 2 with the optimal tube current plus the interval, and group 3 with the optimal tube current minus the interval. CT images were graded in terms of diagnostic acceptability at three anatomic levels (porta hepatis, pancreas and the upper pole of the kidney), and the image noise of eight organs (abdominal aorta, portal vein, liver, spleen, gallbladder, pancreas, renal cortex, renal medulla) was graded using a five-point scale. Image quality was compared with the non-parametric rank sum test, and the individual factors of the patients were compared with ANOVA. Results: (1) The optimal tube current and the interval leading to a qualitative change were 340 mA and 40 mA, respectively. (2) There were no significant differences in image quality between 340 mA and 450 mA in group 1, or between 380 mA and 450 mA in group 2 (P > 0.05). There was a significant difference in image quality between 300 mA and 450 mA in group 3 (the mean scores for 300 mA were 2.92 ± 0.62, 2.92 ± 0.62, 2.64 ± 0.84, 2.72 ± 0.82, 2.63 ± 0.71, 2.51 ± 0.84, 3.04 ± 0.72, 3.04 ± 0.72, 2.63 ± 0.71, 2.52 ± 0.73 and 2.93 ± 0.81, respectively; for 450 mA they were 3.93 ± 0.72, 3.94 ± 0.72

  9. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  10. Benchmark exercise on expert judgment techniques in PSA Level 2

    International Nuclear Information System (INIS)

    This article summarizes objectives and aims of the concerted action 'Benchmark Exercise on Expert Judgment Techniques in PSA Level 2' and the results obtained within the project. The project was organized in three phases, namely a survey phase (pre-phase), a first phase devoted to parameter estimation assessment and a second phase devoted to benchmarking expert judgment methods on a scenario development case. The paper is focused on the first phase and on the results obtained by the application of five structured Expert Judgment (EJ) methodologies to the problem at hand. The results of the comparison of EJ methodologies are also provided; they are based on the use of some metrics suitably designed during the project. The context of Phase 2 and the issue to be tackled in this phase are briefly described; since this phase has been carried out only at a preliminary level (mainly after the end of the project), the results obtained are not reported here in detail but are only briefly commented on

  11. Benchmarking computational fluid dynamics models for lava flow simulation

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  12. PWR experimental benchmark analysis using WIMSD and PRIDE codes

    International Nuclear Information System (INIS)

    Highlights: • PWR experimental benchmark calculations were performed using WIMSD and PRIDE codes. • Various models for lattice cell homogenization were used. • Multiplication factors, power distribution and reaction rates were studied. • The effect of cross section libraries on these parameters was analyzed. • The results were compared with experimental and reported results. - Abstract: The PWR experimental benchmark problem defined by ANS was analyzed using the WIMSD and PRIDE codes. Different modeling methodologies were used to calculate the infinite and effective multiplication factors. Relative pin power distributions were calculated for infinite lattice and critical core configurations, while reaction ratios were calculated for the infinite lattice only. The discrete ordinates method (DSN) and the collision probability method (PERSEUS) were used in each calculation. Different WIMSD cross-section libraries based on the ENDF/B-VI.8, ENDF/B-VII.0, IAEA, JEF-2.2, JEFF-3.1 and JENDL-3.2 nuclear data files were also employed in the analyses. Comparisons were made with experimental data and other reported results in order to find a suitable strategy for PWR analysis.

  13. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses a data mining technique and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI.
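
    The two-stage methodology described above can be sketched with scikit-learn: a random forest ranks candidate predictors of EUI, and a shallow regression tree then partitions buildings into peer groups whose mean EUI serves as a customized benchmark. Everything below is a hedged illustration; the column names and synthetic data are hypothetical stand-ins for CBECS fields, not the actual schema or the thesis code.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        n = 500
        # Hypothetical stand-in for CBECS office-building records;
        # categorical fields are label-encoded for the tree models.
        X = pd.DataFrame({
            "floor_area":   rng.uniform(1e3, 5e5, n),
            "num_workers":  rng.integers(5, 2000, n),
            "num_pcs":      rng.integers(5, 2500, n),
            "cooling_type": rng.integers(0, 4, n),
            "climate_zone": rng.integers(1, 6, n),
        })
        eui = 40 + 0.02 * X["num_pcs"] + 5 * X["climate_zone"] \
              + rng.normal(0, 8, n)

        # Stage 1: rank predictors by random-forest importance.
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X, eui)
        print(dict(zip(X.columns, rf.feature_importances_.round(3))))

        # Stage 2: a shallow tree defines peer groups; each leaf's mean
        # EUI is the customized benchmark for buildings in that leaf.
        tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30,
                                     random_state=0).fit(X, eui)
        groups = pd.Series(eui).groupby(tree.apply(X))
        # Within-leaf coefficient of variation gauges benchmark quality.
        print((groups.std() / groups.mean()).round(3))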

  14. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  15. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose.

  16. Rapid frequency scan EPR.

    Science.gov (United States)

    Tseitlin, Mark; Rinard, George A; Quine, Richard W; Eaton, Sandra S; Eaton, Gareth R

    2011-08-01

    In rapid frequency scan EPR with triangular scans, sufficient time must be allowed to ensure that the magnetization in the x, y plane decays to baseline at the end of the scan, which typically is about 5T2 after the spins are excited. To permit relaxation of signals excited toward the extremes of the scan, the total scan time required may be much longer than 5T2. However, with periodic, saw-tooth excitation, the slow-scan EPR spectrum can be recovered by Fourier deconvolution of data recorded with a total scan period of 5T2, even if some spins are excited later in the scan. This scan time is similar to that of polyphase excitation methods. The peak power required for either polyphase excitation or rapid frequency scans is substantially smaller than for pulsed EPR. The use of an arbitrary waveform generator (AWG) and a cross loop resonator facilitated implementation of the rapid frequency scan experiments reported here. The use of a constant continuous low B1, periodic excitation waveform, and constant external magnetic field is similar to polyphase excitation, but could be implemented without the AWG that is required for polyphase excitation. PMID:21664848
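
    The Fourier-deconvolution step at the heart of this approach can be sketched generically in numpy: the recorded periodic signal is divided, in the frequency domain, by the spectrum of the known excitation (driving) function. This is an assumed, simplified illustration with a toy kernel; the actual rapid-scan processing involves instrument details (resonator response, cycle averaging, the specific drive function) that are omitted here.

        import numpy as np

        def deconvolve_periodic(signal, drive, eps=1e-9):
            """Recover the slow-scan response from one period of data:
            S = F(signal) * conj(F(drive)) / (|F(drive)|^2 + eps),
            a Wiener-style regularized Fourier deconvolution."""
            S, D = np.fft.fft(signal), np.fft.fft(drive)
            return np.fft.ifft(S * np.conj(D) / (np.abs(D)**2 + eps))

        # Toy demonstration: a Lorentzian line blurred (circularly
        # convolved) with a known kernel, then recovered.
        x = np.arange(1024)
        line = 1.0 / (1.0 + ((x - 512) / 15.0)**2)
        drive = np.exp(-0.5 * ((x - 80) / 10.0)**2)
        observed = np.fft.ifft(np.fft.fft(line) * np.fft.fft(drive)).real
        recovered = deconvolve_periodic(observed, drive).real
        print(f"max reconstruction error: {np.abs(recovered - line).max():.2e}")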

  17. Line-scanning, stage scanning confocal microscope

    Science.gov (United States)

    Carucci, John A.; Stevenson, Mary; Gareau, Daniel

    2016-03-01

    We created a line-scanning, stage-scanning confocal microscope as part of a new procedure: video assisted micrographic surgery (VAMS). The need for rapid pathological assessment of the tissue on the surface of skin excisions is very large, since there are 3.5 million new skin cancers diagnosed annually in the United States. The new design presented here is a confocal microscope without any scanning optics. Instead, a line is focused in space and the sample, which is flattened, is physically translated such that the line scans across its face in a direction perpendicular to the line itself. The line is 6 mm long and the stage is capable of scanning 50 mm, hence the field of view is quite large. The theoretical diffraction-limited resolution is 0.7 µm lateral and 3.7 µm axial. However, in this preliminary report, we present initial results that are a factor of 5-7 poorer in resolution. The results are encouraging because they demonstrate that the linear array detector measures sufficient signal from fluorescently labeled tissue, and they also demonstrate the large field of view achievable with VAMS.

  18. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    The Visible Human CT scans from the National Library of Medicine are essential for producing realistic images. Sets of test cases with systematic and random errors in selected setup parameters and anatomic volumes are suitable for use as standard benchmarks by the radiotherapy community. In addition to serving as an aid to research and development, benchmark images may also be useful for evaluation of commercial systems and as part of a quality assurance program for clinical systems. Test cases and software are available upon request.

  19. Pre Managed Earnings Benchmarks and Earnings Management of Australian Firms

    Directory of Open Access Journals (Sweden)

    Subhrendu Rath

    2012-03-01

    This study investigates benchmark beating behaviour and the circumstances under which managers inflate earnings to beat earnings benchmarks. We show that two benchmarks, positive earnings and positive earnings change, are associated with earnings manipulation. Using a sample of Australian firms from 2000 to 2006, we find that when underlying earnings are negative or below the prior year's earnings, firms are more likely to use discretionary accruals to inflate earnings to beat benchmarks.

  20. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  1. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the of...

  2. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool-independent benchmark suite based on Python scripting are given, along with models for testing the performance of Modelica compilers on large systems of equations. The automation of running the benchmark suite is demonstrated, followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.

  3. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  4. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.203 Section 1952.203... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  10. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  15. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. Provenance and depositional environment of epi-shelf lake sediment from Schirmacher Oasis, East Antarctica, vis-à-vis scanning electron microscopy of quartz grain, size distribution and chemical parameters

    Science.gov (United States)

    Shrivastava, Prakash K.; Asthana, Rajesh; Roy, Sandip K.; Swain, Ashit K.; Dharwadkar, Amit

    2012-07-01

    The scientific study of quartz grains is a powerful tool in deciphering the depositional environment and mode of transportation of sediments, and ultimately the origin and classification of sediments. Surface microfeatures, angularity, chemical features, and grain-size analysis of quartz grains, collectively reveal the sedimentary and physicochemical processes that acted on the grains during different stages of their geological history. Here, we apply scanning electron microscopic (SEM) analysis to evaluating the sedimentary provenance, modes of transport, weathering characteristics, alteration, and sedimentary environment of selected detrital quartz grains from the peripheral part of two epi-shelf lakes (ESL-1 and ESL-2) of the Schirmacher Oasis of East Antarctica. Our study reveals that different styles of physical weathering, erosive signatures, and chemical precipitation variably affected these quartz grains before final deposition as lake sediments. Statistical analysis (central tendencies, sorting, skewness, and kurtosis) indicates that these quartz-bearing sediments are poorly sorted glaciofluvial sediments. Saltation and suspension seem to have been the two dominant modes of transportation, and chemical analysis of these sediments indicates a gneissic provenance.
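
    The graphic grain-size statistics named above (central tendency, sorting, skewness, kurtosis) are conventionally computed from percentiles of the cumulative grain-size curve in phi units. The sketch below uses the standard Folk and Ward (1957) formulas with hypothetical sieve data; the paper does not state which variant of the measures it used.

        import numpy as np

        def folk_ward(phi, weight_pct):
            """Folk & Ward (1957) graphic measures from phi classes
            and weight percentages of a grain-size distribution."""
            order = np.argsort(phi)
            phi, w = phi[order], weight_pct[order]
            cum = 100.0 * np.cumsum(w) / w.sum()
            p = lambda q: np.interp(q, cum, phi)  # phi at cumulative %
            p5, p16, p25, p50, p75, p84, p95 = map(p, (5, 16, 25, 50, 75, 84, 95))
            mean = (p16 + p50 + p84) / 3.0
            sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6
            skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                    + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))
            kurt = (p95 - p5) / (2.44 * (p75 - p25))
            return mean, sorting, skew, kurt

        # Hypothetical sieve fractions (phi midpoints, weight %).
        phi = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
        wt = np.array([8.0, 15.0, 22.0, 25.0, 18.0, 12.0])
        mz, si, sk, kg = folk_ward(phi, wt)
        print(f"mean={mz:.2f} phi  sorting={si:.2f}  skew={sk:+.2f}  kurt={kg:.2f}")
        # On the Folk & Ward scale, sorting values above 2 phi would
        # class a sediment as very poorly sorted.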

  17. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.

  18. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  19. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are unsuitable for the nonlinear benchmark buildings because their vibration equations constitute a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach has good practicability, as the built model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  20. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  1. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative method of validation is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  2. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
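
    The core coupling described here is compact enough to sketch: smooth each simulator's time-series metric with Gaussian process regression, then compare the smoothed series with dynamic time warping so that the simulators need not share a common time grid. The sketch below uses a plain O(nm) DTW and scikit-learn's GP with an assumed RBF kernel; the paper's figures-of-merit additionally propagate the GP uncertainty, which is omitted here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def dtw_distance(a, b):
            """Classic dynamic-time-warping distance of two 1-D series."""
            D = np.full((len(a) + 1, len(b) + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                         D[i - 1, j - 1])
            return D[-1, -1]

        def gp_mean(t, y, grid):
            """GP-smoothed metric evaluated on a common grid."""
            gp = GaussianProcessRegressor(RBF(length_scale=10.0),
                                          normalize_y=True)
            return gp.fit(t[:, None], y).predict(grid[:, None])

        # Two hypothetical simulators reporting the same fuel-cycle
        # metric on different irregular time grids.
        r = np.random.default_rng(0)
        t1, t2 = np.sort(r.uniform(0, 100, 40)), np.sort(r.uniform(0, 100, 25))
        y1 = np.sin(t1 / 15) + 0.05 * r.normal(size=t1.size)
        y2 = np.sin((t2 - 3) / 15) + 0.05 * r.normal(size=t2.size)

        grid = np.linspace(0, 100, 200)
        d = dtw_distance(gp_mean(t1, y1, grid), gp_mean(t2, y2, grid))
        print(f"DTW distance: {d:.2f}")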

  3. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  4. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 due to the wide range of system variables. (author)

  5. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  6. Parameter Extraction of Highway Route Based on 3-d Laser Scanning Technology%基于地面三维激光扫描技术的公路路线设计参数提取

    Institute of Scientific and Technical Information of China (English)

    王鑫森; 孔立; 郑德华

    2013-01-01

    A 3-D laser scanner was applied to highway surveying. The point cloud density was optimized by controlling the point cloud acquisition parameters, the accuracy of registration was improved using an ICP algorithm based on quaternions, and noise points were removed with a local outlier algorithm. This yields a high-quality highway pavement point cloud, which provides a reliable data source for the subsequent extraction of design parameters. The road boundary lines and center lines are then extracted with an edge detection algorithm, and the design parameters are finally computed. Experiments show that the results fully meet the needs of the subsequent construction design.
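
    The quaternion step inside such an ICP registration has a closed form (Horn's absolute orientation): given tentatively corresponded point pairs, the optimal rotation is the eigenvector, for the largest eigenvalue, of a 4x4 matrix built from the cross-covariance of the two point sets. The numpy sketch below shows only this alignment step under assumed exact correspondences; a full ICP iterates it with nearest-neighbour matching and the outlier rejection described in the abstract.

        import numpy as np

        def horn_align(P, Q):
            """Best-fit rotation R and translation t with Q ~ P @ R.T + t,
            via Horn's (1987) quaternion method; rows are paired points."""
            pc, qc = P.mean(0), Q.mean(0)
            S = (P - pc).T @ (Q - qc)          # 3x3 cross-covariance
            N = np.array([
                [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1], S[2,0]-S[0,2], S[0,1]-S[1,0]],
                [S[1,2]-S[2,1], S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0], S[2,0]+S[0,2]],
                [S[2,0]-S[0,2], S[0,1]+S[1,0], -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
                [S[0,1]-S[1,0], S[2,0]+S[0,2], S[1,2]+S[2,1], -S[0,0]-S[1,1]+S[2,2]]])
            w, x, y, z = np.linalg.eigh(N)[1][:, -1]   # top eigenvector
            R = np.array([
                [1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])
            return R, qc - R @ pc

        # Toy check: recover a known rigid motion from noiseless pairs.
        rng = np.random.default_rng(0)
        P = rng.uniform(-1, 1, (100, 3))
        c, s = np.cos(0.3), np.sin(0.3)
        R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        t_true = np.array([0.5, -0.2, 1.0])
        R, t = horn_align(P, P @ R_true.T + t_true)
        print(np.allclose(R, R_true), np.allclose(t, t_true))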

  7. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    ... and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much ... state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.

  8. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  9. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  10. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of the progress has been made possible by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  11. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism has become a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce the park's competitiveness. The topic of this thesis is both to find, via benchmarking, good marketing strategies used by Zhangjiajie national park and to provide some suggestions to Repovesi national park. The method used in t...

  12. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in the financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  13. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  14. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    International audience A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  15. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and / or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D1 as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  16. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries in the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for auditing and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to benchmark their energy situation anonymously against others in the same sector. SMEs can participate in a unique web-based benchmarking system to benchmark interactively in a way which fully guarantees confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs from various sectors is currently being developed.

  17. Radionucleotide scanning in osteomyelitis

    International Nuclear Information System (INIS)

    Radionucleotide bone scanning can be an excellent adjunct to the standard radiograph and clinical findings in the diagnosis of osteomyelitis. Bone scans have the ability to detect osteomyelitis far in advance of the standard radiograph. The sequential use of technetium and gallium has been useful in differentiating cellulitis and osteomyelitis. Serial scanning with technetium and gallium may be used to monitor the response of osteomyelitis to antibiotic therapy

  18. Low Cost Scan Test by Test Correlation Utilization

    Institute of Scientific and Technical Information of China (English)

    Ozgur Sinanoglu

    2007-01-01

    Scan-based testing methodologies remedy the testability problem of sequential circuits; yet they suffer from prolonged test time and excessive test power due to numerous shift operations. The correlation among test data, along with the high density of unspecified bits in test data, enables the utilization of the existing test data in the scan chain for the generation of the subsequent test stimulus, thus reducing both test time and test data volume. We propose a pair of scan approaches in this paper; in the first approach, a test stimulus partially consists of the preceding stimulus, while in the second approach, a test stimulus partially consists of the preceding test response bits. Both proposed scan-based test schemes access only a subset of scan cells for loading the subsequent test stimulus while freezing the remaining scan cells with the preceding test data, thus decreasing scan chain transitions during shift operations. The proposed scan architecture is coupled with test data manipulation techniques which include test stimuli ordering and partitioning algorithms, boosting test time reductions. The experimental results confirm that test time reductions exceeding 97%, and test power reductions exceeding 99%, can be achieved by the proposed scan-based testing methodologies on larger ISCAS89 benchmark circuits.
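
    The saving that drives both schemes is easy to quantify with a toy script: given the scan chain's current contents and the next test cube (with don't-care bits), only the cells whose specified bits disagree need to be loaded; the rest stay frozen. This generic sketch just illustrates that correlation-driven reuse; the paper's two architectures and its ordering/partitioning algorithms are not reproduced.

        def cells_to_update(chain, next_cube):
            """Indices of scan cells that must be loaded for the next
            stimulus; 'X' don't-cares are satisfied for free by the
            bits already sitting in the chain."""
            return [i for i, (cur, want) in enumerate(zip(chain, next_cube))
                    if want != 'X' and want != cur]

        chain     = "1011010010"   # left over from the previous test/response
        next_cube = "0X110X01XX"   # next stimulus, with unspecified bits
        upd = cells_to_update(chain, next_cube)
        print(f"load {len(upd)}/{len(chain)} cells: {upd}")
        # Dense don't-cares let most cells stay frozen, cutting both
        # shift time and switching activity (test power).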

  19. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  20. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of a specific business. WQA benchmarking against services-related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  1. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  2. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
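
    The locality space the suite is built to span is easy to picture with two toy kernels at its extremes: a STREAM-style triad (unit-stride streaming, high spatial locality) and a RandomAccess-style scattered update (essentially none). This is an illustrative numpy sketch, not the official HPC Challenge code, and its timings are only indicative.

        import numpy as np, time

        n = 10_000_000
        a = np.zeros(n); b = np.random.rand(n); c = np.random.rand(n)

        t0 = time.perf_counter()
        a[:] = b + 3.0 * c          # triad: three streams, unit stride
        t_triad = time.perf_counter() - t0

        idx = np.random.randint(0, n, n // 10)
        t0 = time.perf_counter()
        a[idx] += 1.0               # scattered updates, poor locality
        # (duplicate indices collapse in numpy fancy indexing; that is
        # irrelevant for this timing illustration)
        t_rand = time.perf_counter() - t0

        print(f"triad : {t_triad:.3f} s, ~{3 * 8 * n / t_triad / 1e9:.1f} GB/s")
        print(f"random: {t_rand:.3f} s for {idx.size:,} updates")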

  3. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm, with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.

  4. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Benchmarking microprocessors provides a way for consumers to evaluate the performance of processors. This is done by using either synthetic or real-world applications. A number of benchmarks exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. In this paper an investigation of the various benchmarks available for evaluating microprocessors for embedded automotive applications is performed. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench, and MiBench. A comparison of existing benchmarks is given based on relevant characteristics of automotive applications, which will give the proper recommendation when benchmarking processors for automotive applications.

  5. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  6. Selected papers on laser scanning and recording

    Science.gov (United States)

    Beiser, L.

    Previously published papers concerned with laser scanning and recording techniques are presented. Subjects treated include laser beam information scanning and recording, laser scanning techniques, system design considerations for laser scanning, laser noise, reliability, resolution, and dynamic range, and optical data storage systems. Consideration is given to the components and media for developing laser scanning and recording systems, in particular the laser, the optics, the scanner, and the storage media. Topics discussed include basic and operational multichannel acoustooptic operation; galvanometer and analog compensations; motor and control systems; angle measurement of scanner by interferometry; preheat-aided laser recording; creating multidimensional scan using a single rotating component; digital techniques in high resolution analog scanning and recording; laser scanning parameters and latitudes in laser xerography; optical video disc technology; focus error detection in optical data storage systems; holographic laser scanners for nonimpact printing; and techniques in optical strobe recording. Laser beam recording techniques, film recorder systems, laser scanner applications, the optimization of printing speed and printout quality of laser beam printers, an internal drum laser scanning plate exposure system, and an ultra-high resolution graphic data terminal are described.

  7. SDSSJ14584479+3720215: A Benchmark JHK Blazar Light Curve from the 2MASS Calibration Scans

    CERN Document Server

    Davenport, James R A; Becker, Andrew C; Macleod, Chelsea L; Cutri, Roc M

    2015-01-01

    Active galactic nuclei (AGNs) are well-known to exhibit flux variability across a wide range of wavelength regimes, but the precise origin of the variability at different wavelengths remains unclear. To investigate the relatively unexplored near-IR variability of the most luminous AGNs, we conduct a search for variability using well sampled JHKs-band light curves from the 2MASS survey calibration fields. Our sample includes 27 known quasars with an average of 924 epochs of observation over three years, as well as one spectroscopically confirmed blazar (SDSSJ14584479+3720215) with 1972 epochs of data. This is the best-sampled NIR photometric blazar light curve to date, and it exhibits correlated, stochastic variability that we characterize with continuous auto-regressive moving average (CARMA) models. None of the other 26 known quasars had detectable variability in the 2MASS bands above the photometric uncertainty. A blind search of the 2MASS calibration field light curves for AGN candidates based on fitting C...
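
    The simplest member of the CARMA family used here, CARMA(1,0) (the damped random walk), can be simulated exactly on an irregular time grid from its analytic conditional distribution, which is how such models handle unevenly sampled light curves. All parameters below are hypothetical, chosen only to mimic the cadence scale of the 2MASS calibration scans.

        import numpy as np

        def simulate_car1(t, tau, sigma, mean, seed=0):
            """Exact CARMA(1,0) sample path at arbitrary times t:
            x(t_{i+1}) | x(t_i) is Gaussian, relaxing toward `mean`
            on timescale tau with asymptotic standard deviation sigma."""
            rng = np.random.default_rng(seed)
            x = np.empty(t.size)
            x[0] = mean + sigma * rng.standard_normal()
            for i in range(1, t.size):
                a = np.exp(-(t[i] - t[i - 1]) / tau)
                x[i] = (mean + a * (x[i - 1] - mean)
                        + sigma * np.sqrt(1 - a * a) * rng.standard_normal())
            return x

        # ~2000 irregular epochs over three years (hypothetical cadence).
        t = np.sort(np.random.default_rng(1).uniform(0, 3 * 365.0, 1972))
        mag = simulate_car1(t, tau=100.0, sigma=0.15, mean=16.0)
        print(mag[:5].round(3))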

  8. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, where the boundary conditions describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed

  9. Getting a CAT Scan

    Medline Plus

    CAT stands for "computerized axial tomography." Translated, that ...

  10. Getting a CAT Scan

    Medline Plus

    Full Text Available This KidsHealth video for kids explains getting a CAT scan. CAT stands for "computerized axial tomography." Translated, that means ...

  11. Frequency scanning microstrip antennas

    DEFF Research Database (Denmark)

    Danielsen, Magnus; Jørgensen, Rolf

    1979-01-01

    The principles of using radiating microstrip resonators as elements in a frequency-scanning antenna array are described. The resonators are cascade-coupled, which scans the main lobe through the phase shift in each resonator in addition to that created by the transmission line phase...
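
    The scan is frequency dependent because the feed-line phase between elements grows with frequency. A minimal Python sketch of the textbook series-feed relation follows; it assumes a TEM feed line at free-space phase velocity and made-up dimensions (a real microstrip line would use its own, slower phase velocity).

      import numpy as np

      C = 3e8  # speed of light, m/s

      def scan_angle_deg(f, d, l, m):
          """Main-lobe direction of a series-fed frequency-scanned array.
          Phase per section: phi = 2*pi*f*l/C; beam condition:
          k*d*sin(theta) = phi - 2*pi*m for an integer m."""
          sin_theta = l / d - m * C / (f * d)
          return np.degrees(np.arcsin(sin_theta))

      d, l = 0.015, 0.060                   # element spacing and line length, m
      f = np.linspace(9.5e9, 10.5e9, 5)     # sweep the band around 10 GHz
      print(scan_angle_deg(f, d, l, m=2))   # m chosen so |sin(theta)| <= 1

    Sweeping 9.5-10.5 GHz in this toy example steers the main lobe through roughly -12 to +11 degrees around broadside.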

  12. Abdominal CT scan

    Science.gov (United States)

    ... than regular x-rays. Many x-rays or CT scans over time may increase your risk for cancer. However, the risk from any one scan is small. Talk to your doctor about this risk and the benefit of the test for getting a correct diagnosis ...

  13. Environmental Scanning Report.

    Science.gov (United States)

    Truckee Meadows Community Coll., Sparks, NV.

    This report describes Truckee Meadows Community College's (Nevada) environmental scanning process and results. The college decided that environmental scanning and forecasting techniques should be used to plan for both short-term and long-term external factors that impact programs, enrollment, and budgets. Strategic goals include: (1) keeping pace…

  14. Optical Scanning Applications.

    Science.gov (United States)

    Wagner, Hans

    The successful use of optical scanning at the University of the Pacific (UOP) indicates that such techniques can simplify a number of administrative data processing tasks. Optical scanning is regularly used at UOP to assist with data processing in the areas of admissions, registration and grade reporting and also has applications for other tasks…

  15. Knee CT scan

    Science.gov (United States)

    Risks of CT scans include exposure to radiation and allergy to contrast dye. CT scans give off more radiation than ... injected contrast dye. The most common type of contrast contains ... Let your provider know if you have an iodine allergy. If you need to have this kind of ...

  16. Getting a CAT Scan

    Medline Plus

    Full Text Available This KidsHealth video for kids explains getting a CAT scan.

  17. Getting a CAT Scan

    Medline Plus

    Full Text Available This KidsHealth video for kids explains getting a CAT scan.

  18. Getting a CAT Scan

    Medline Plus

    Full Text Available This KidsHealth video for kids explains getting a CAT scan. CAT stands for "computerized axial tomography." Translated, that means ...

  19. MODEL BENCHMARK WITH EXPERIMENT AT THE SNS LINAC

    Energy Technology Data Exchange (ETDEWEB)

    Shishlo, Andrei P [ORNL; Aleksandrov, Alexander V [ORNL; Liu, Yun [ORNL; Plum, Michael A [ORNL

    2016-01-01

    The history of attempts to perform transverse matching in the Spallation Neutron Source (SNS) superconducting linac (SCL) is discussed. The SCL has 9 laser wire (LW) stations to perform non-destructive measurements of the transverse beam profiles. Any matching starts with the measurement of the initial Twiss parameters, which in the SNS case was done by using the first four LW stations at the beginning of the superconducting linac. For years, consistency between the data from all LW stations could not be achieved. This problem was resolved only after significant improvements in the accuracy of the phase scans of the SCL cavities, more precise analysis of all available scan data, better optics planning, and measurement of the initial longitudinal Twiss parameters. This paper discusses these procedures in detail.
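
    For uncoupled linear transport, the profile-based Twiss measurement reduces to a linear least-squares problem: each station's squared rms size is linear in the three sigma-matrix elements at the reconstruction point. The Python sketch below uses hypothetical drift-space transfer matrices, not the SNS optics.

      import numpy as np

      def fit_twiss(transfer_matrices, rms_sizes):
          """Fit the beam sigma matrix at a reconstruction point from rms sizes
          at several stations: size_i^2 = M11^2*s11 + 2*M11*M12*s12 + M12^2*s22."""
          A = np.array([[M[0, 0]**2, 2.0 * M[0, 0] * M[0, 1], M[0, 1]**2]
                        for M in transfer_matrices])
          s11, s12, s22 = np.linalg.lstsq(A, np.asarray(rms_sizes)**2, rcond=None)[0]
          emit = np.sqrt(s11 * s22 - s12**2)              # rms emittance
          return emit, s11 / emit, -s12 / emit            # emittance, beta, alpha

      # Four hypothetical wire stations behind drifts of 1..4 m
      Ms = [np.array([[1.0, z], [0.0, 1.0]]) for z in (1.0, 2.0, 3.0, 4.0)]
      s11, s12, s22 = 4e-6, -1e-6, 1e-6                   # assumed "true" sigma matrix
      sizes = [np.sqrt(s11 + 2 * M[0, 1] * s12 + M[0, 1]**2 * s22) for M in Ms]
      print(fit_twiss(Ms, sizes))                         # recovers emittance, beta, alpha

    At least three stations are needed for the three unknowns; a fourth adds the redundancy that exposes the kind of inter-station inconsistency described above.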

  20. Statistical Analysis for Criticality Uncertainty in Manufacturing Random Parameters

    International Nuclear Information System (INIS)

    The Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) was organized to quantify these uncertainties. The Phase II work of UACSA focused on the uncertainty from manufacturing tolerances. The Phase II benchmark problem assumes that all manufacturing parameters change systematically, i.e., that every fuel rod in a fuel assembly has the same parameters. This assumption is somewhat unrealistic: each fuel rod can have random manufacturing parameters. The benchmark problem of UACSA Phase IV assumes randomness of the manufacturing parameters. Criticality uncertainty from manufacturing parameters is assessed with a statistical method under the assumption of parameter randomness, via a benchmark problem based on a criticality experiment. The resulting uncertainty decreases as the randomness increases

  1. MIMIC: An Innovative Methodology for Determining Mobile Laser Scanning System Point Density

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2014-08-01

    Full Text Available Understanding how various Mobile Mapping System (MMS) laser hardware configurations and operating parameters influence point density is important for assessing system performance, which in turn facilitates system design and MMS benchmarking. Point density also influences data processing, as objects that can be recognised using automated algorithms generally require a minimum point density. Although obtaining the necessary point density impacts on hardware costs, survey time and data storage requirements, a method for accurately and rapidly assessing MMS performance is lacking for generic MMSs. We have developed a method for quantifying point clouds collected by an MMS with respect to known objects at specified distances using 3D surface normals, 2D geometric formulae and line drawing algorithms. These algorithms were combined in a system called the Mobile Mapping Point Density Calculator (MIMIC) and were validated using point clouds captured by both a single scanner and a dual scanner MMS. Results from MIMIC were promising: when considering the number of scan profiles striking the target, the average error equated to less than 1 point per scan profile. These tests highlight that MIMIC is capable of accurately calculating point density for both single and dual scanner MMSs.
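
    MIMIC itself combines 3D surface normals, 2D geometric formulae and line drawing algorithms; as a much-simplified illustration of how hardware and operating parameters drive point density, the following back-of-the-envelope Python model for a single rotating scanner (all numbers hypothetical) multiplies the two spacings that set the density.

      import math

      def point_density(v, scan_rate, pulse_rate, rng_m, incidence_deg=0.0):
          """Rough point density (points/m^2) on a planar target.
          v: vehicle speed (m/s); scan_rate: profiles/s; pulse_rate: pulses/s;
          rng_m: range to target (m)."""
          profile_spacing = v / scan_rate                     # between scan lines
          pts_per_rev = pulse_rate / scan_rate                # pulses per 360 deg profile
          along_profile = 2.0 * math.pi * rng_m / pts_per_rev # spacing on the target
          density = 1.0 / (profile_spacing * along_profile)
          return density * math.cos(math.radians(incidence_deg))

      # Hypothetical survey: 50 km/h, 200 profiles/s, 550 kHz laser, target at 10 m
      print(f"{point_density(50 / 3.6, 200, 550_000, 10.0):.0f} points/m^2")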

  2. Benchmark calculations on the phase II problem of uncertainty analyses for criticality safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G. S.; Lee, J.; Kim, G. Y.; Woo, S. W. [Korea Inst. of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon, 305-338 (Korea, Republic of)

    2012-07-01

    The Phase II benchmark problem of the expert group UACSA includes a configuration of a PWR fuel storage rack and focuses on the uncertainty in criticality arising from manufacturing tolerances of design parameters such as fuel enrichment, density, diameter, and the thickness of the neutron absorber and structural material. It provides probability density functions for each design parameter. In this paper, 95%/95% tolerance upper limits of k_eff are calculated with two methods by sampling the design parameters from the given probability distributions, and the results are compared with that of the traditional approach. (authors)
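
    One widely used recipe for sampling-based 95%/95% limits is Wilks' nonparametric tolerance-limit method; whether it is one of the two methods applied in this paper is not stated, so the Python sketch below is only an illustration, with a stand-in function in place of a real criticality code and assumed parameter distributions.

      import numpy as np

      # Wilks, first order, one-sided: with n = 59 runs, the largest observed
      # k_eff is a 95% coverage / 95% confidence upper tolerance limit,
      # because 1 - 0.95**59 = 0.9515 >= 0.95.
      N_WILKS = 59
      rng = np.random.default_rng(42)

      def keff_model(enrichment, density, gap):
          """Stand-in for a criticality calculation; a real study would run a
          Monte Carlo transport code for each sampled parameter set."""
          return (0.92 + 0.010 * (enrichment - 4.5)
                       + 0.005 * (density - 10.4) - 0.02 * gap)

      samples = [keff_model(rng.normal(4.5, 0.05),   # enrichment, wt% (assumed pdf)
                            rng.normal(10.4, 0.07),  # pellet density, g/cm^3
                            rng.uniform(0.0, 0.02))  # absorber gap (arbitrary)
                 for _ in range(N_WILKS)]
      print("95%/95% upper tolerance limit on k_eff:", max(samples))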

  3. CALIBRATION METHODS OF A CONSTITUTIVE MODEL FOR PARTIALLY SATURATED SOILS: A BENCHMARKING EXERCISE WITHIN THE MUSE NETWORK

    OpenAIRE

    D'Onza, Francesca

    2008-01-01

    The paper presents a benchmarking exercise comparing different procedures, adopted by seven different teams of constitutive modellers, for the determination of parameter values in the Barcelona Basic Model, an elasto-plastic model for unsaturated soils. Each team is asked to determine a set of parameter values based on the same laboratory test data. The different sets of parameters are then employed to simulate soil behaviour along a variety of stress paths. The results are finally co...

  4. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    Science.gov (United States)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  5. Scanning with Iodine-125

    International Nuclear Information System (INIS)

    The use of I-125 as I-125 iodide for scanning of the thyroid and of thyroid carcinoma metastases, and as I-125-labelled Rose Bengal for scanning of the liver, has been investigated in our laboratories. In some patients, the thyroid scans and the scans of thyroid carcinoma metastases were repeated with I-131. The liver scans were repeated with colloidal Au-198. Cold nodules in the thyroid which could not be detected with I-131 were clearly seen with I-125. Anterior metastases in the lungs could be differentiated from posterior metastases with I-125, while with I-131 this was not possible. Using the same doses of radioactivity, the background with I-131 was much higher and could not be eliminated, as this would have reduced the counting rate over the lesions to levels which could not be detected. Some of the photoscans and mechanical scans of livers carried out with colloidal Au-198 could not be interpreted. However, "cold" lesions were clearly seen on the scans of the same livers with I-125-labelled Rose Bengal. (author)

  6. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
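
    The Monod-type rate laws mentioned above tie each terminal electron accepting process to substrate availability. A minimal Python sketch of a dual-Monod law for acetate-dependent U(VI) bioreduction follows; the constants are hypothetical, since the benchmark specification defines its own.

      import numpy as np

      def monod_rate(k_max, biomass, acetate, Ks_acetate, uvi, Ks_uvi):
          """Dual-Monod rate for microbially mediated U(VI) reduction, limited by
          both the electron donor (acetate) and the acceptor (U(VI))."""
          return (k_max * biomass
                  * acetate / (Ks_acetate + acetate)
                  * uvi / (Ks_uvi + uvi))

      acetate = np.linspace(0.0, 3e-3, 7)   # mol/L, spanning an amendment pulse
      rate = monod_rate(k_max=1e-5, biomass=1e-4,
                        acetate=acetate, Ks_acetate=1e-4,
                        uvi=1e-6, Ks_uvi=5e-7)
      print(rate)   # the rate saturates once acetate >> Ks_acetate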

  7. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (the CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  8. Scan path entropy and Arrow plots: Capturing scanning behavior of multiple observers

    Directory of Open Access Journals (Sweden)

    Ignace T C Hooge

    2013-12-01

    Full Text Available Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to AOI first hit are often used as evaluation parameters. Here we present two additional measures: (1) scan path entropy, to quantify gaze guidance, and (2) the arrow plot, to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and the time required for 50% of the observers to first hit AOIs (T50). The new measures were tested in an eye tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure for gaze guidance, and the new visualization method reveals aspects of the average scan path and gives a better indication of the order in which global scanning takes place.

  9. Scan path entropy and arrow plots: capturing scanning behavior of multiple observers.

    Science.gov (United States)

    Hooge, Ignace; Camps, Guido

    2013-01-01

    Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to AOI first hit are often used as evaluation parameters. Here we present two additional measures (1) "scan path entropy" to quantify gaze guidance and (2) the "arrow plot" to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and time required for 50% of the observers to first hit AOIs (T50). The new measures were tested in an eye tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure for gaze guidance and the new visualization method reveals aspects of the average scan path and gives a better indication in what order global scanning takes place. PMID:24399993
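
    Both abstracts describe scan path entropy as built on string representations of scan paths and transition matrices. One plausible reading, shown here only as an illustration since the paper's exact definition may differ, is the conditional entropy of the first-order AOI transition matrix pooled over observers.

      import numpy as np
      from collections import Counter

      def scanpath_entropy(scanpaths):
          """Entropy (bits) of first-order AOI transitions pooled over observers;
          low entropy suggests strong gaze guidance, high entropy diffuse scanning."""
          transitions = Counter()
          for path in scanpaths:                    # "ABBC" -> AB, BB, BC
              transitions.update(zip(path, path[1:]))
          total = sum(transitions.values())
          aois = sorted({a for pair in transitions for a in pair})
          H = 0.0
          for a in aois:
              row = {b: transitions[(a, b)] for b in aois if transitions[(a, b)] > 0}
              if not row:
                  continue
              p_a = sum(row.values()) / total       # probability of leaving AOI a
              for n in row.values():
                  p_ba = n / sum(row.values())      # P(next AOI = b | current = a)
                  H -= p_a * p_ba * np.log2(p_ba)
          return H

      print(scanpath_entropy(["ABCD", "ABCD", "ABDC"]))   # well guided: low entropy
      print(scanpath_entropy(["ACBD", "DBCA", "BDAC"]))   # diffuse: higher entropy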

  10. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Full Text Available Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. We analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach’s alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis, and finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive; the vast range

  11. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine making it possible to calculate neutron sources, and the replacement of the fission yields and branching ratios in the data library with those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared with those obtained from the more sophisticated code CASMO. (author)

  12. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  13. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich;

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species, leading to a process called “electrochemical migration” or “electromigration.” Although not commonly accounted for in reactive transport simulations, benchmark problems for electrochemical migration have not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...
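
    The physics being benchmarked is the Nernst-Planck flux law, with the diffusion potential fixed by the null-current condition. A minimal single-point Python sketch (unit-consistent but hypothetical values) shows how the fast ion is slowed and the slow one dragged along until the charge flux vanishes.

      import numpy as np

      F_RT = 96485.0 / (8.314 * 298.15)   # F/(R*T) at 25 degC, 1/V

      def nernst_planck_flux(D, z, c, dcdx):
          """J_i = -D_i*(dc_i/dx + z_i*c_i*(F/RT)*dphi/dx), with dphi/dx chosen
          so that the net charge flux sum_i z_i*J_i is zero (null current)."""
          D, z, c, dcdx = map(np.asarray, (D, z, c, dcdx))
          grad_phi = -np.sum(z * D * dcdx) / (F_RT * np.sum(z**2 * D * c))
          return -D * (dcdx + z * c * F_RT * grad_phi)

      # HCl-like pair: fast H+ and slow Cl-, both with the same gradient
      D = [9.3e-9, 2.0e-9]      # m^2/s
      z = [1, -1]
      c = [1.0, 1.0]            # mol/m^3
      dcdx = [-10.0, -10.0]     # mol/m^4
      print(nernst_planck_flux(D, z, c, dcdx))   # equal fluxes: no net current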

  14. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    This paper presents a case analysis of the research-oriented participatory education curriculum developed at Copenhagen Business School because it appears uniquely suited, by a curious mix of Danish education tradition and deliberate innovation, to offer an educational experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill … of an undergraduate business school education.

  15. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  16. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to NNLO accuracy.

  17. Bulk and surface energetics of crystalline lithium hydride: Benchmarks from quantum Monte Carlo and quantum chemistry

    OpenAIRE

    Binnie, S. J.; Nolan, S. J.; Drummond, Neil; Alfè, D.; Allan, N. L.; Manby, F. R.; Gillan, M. J.

    2010-01-01

    We show how accurate benchmark values of the surface formation energy of crystalline lithium hydride can be computed by the complementary techniques of quantum Monte Carlo (QMC) and wavefunction-based molecular quantum chemistry. To demonstrate the high accuracy of the QMC techniques, we present a detailed study of the energetics of the bulk LiH crystal, using both pseudopotential and all-electron approaches. We show that the equilibrium lattice parameter agrees with experiment to within 0.03...

  18. Platform Independent Dynamic Java Virtual Machine Analysis: the Java Grande Forum Benchmark Suite

    OpenAIRE

    Daly, Charles; Horgan, Jane; Power, James; Waldron, John

    2001-01-01

    In this paper we present a platform independent analysis of the dynamic profiles of Java programs when executing on the Java Virtual Machine. The Java programs selected are taken from the Java Grande Forum benchmark suite, and five different Java-to-bytecode compilers are analysed. The results presented describe the dynamic instruction usage frequencies, as well as the sizes of the local variable, parameter and operand stacks during execution on the JVM. These results,...

  19. Gaia FGK benchmark stars : abundances of α and iron-peak elements

    OpenAIRE

    González Hernández, J. I.; Montes Gutiérrez, David; Tabernero Guzmán, Hugo Martín; et al.

    2015-01-01

    Context. In the current era of large spectroscopic surveys of the Milky Way, reference stars for calibrating astrophysical parameters and chemical abundances are of paramount importance. Aims. We determine elemental abundances of Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Co, and Ni for our predefined set of Gaia FGK benchmark stars. Methods. By analysing high-resolution spectra with a high signal-to-noise ratio taken from several archive datasets, we combined results of eight different methods to determ...

  20. Sensitivity Studies for the Exercise I-1 of the OECD/UAM Benchmark

    OpenAIRE

    Canuti, E.; Petruzzi, A.; F. D'Auria; Kozlowski, T.

    2012-01-01

    OECD/NEA has initiated an international Uncertainty Analysis in Modeling (UAM) benchmark focused on uncertainties in the modeling of Light Water Reactors (LWRs). The first step of uncertainty propagation is to perform a sensitivity analysis with respect to the input data affected by numerical errors and physical models. The objective of the present paper is to study the effect of the numerical discretization error and the manufacturing tolerances on fuel pin lattice integral parameters (multiplication factor and macro...

  1. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary-condition problem with the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling of, and the comparisons performed for, the control rod movement and the reactor power history. (Authors)

  2. The New Weather Radar for America's Space Program in Florida: A Temperature Profile Adaptive Scan Strategy

    Science.gov (United States)

    Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.

    2009-01-01

    A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and evaluation of Lightning Launch Commit Criteria (LCC), which are the weather rules to avoid lightning strikes to in-flight rockets. The 0 °C to -20 °C layer is vital since most generation of electric charge occurs within it, so it is critical in evaluating Lightning LCC and in forecasting lightning. These are two of the most important duties of 45 WS. While the fixed scan strategy that covers most of the climatological variation of the 0 °C to -20 °C levels with high resolution ensures that these critical temperatures are well covered most of the time, it also means that on any particular day the radar is spending precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly, Interactive Data Language (IDL) computer program that will automatically generate optimized radar scan strategies that adapt to user input of the temperature profile and other important parameters. By using only the required scan angles output by the temperature profile adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality is possible, while maintaining high resolution at the critical temperature levels. The temperature profile adaptive technique will also take into account earth curvature and refraction
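
    The geometric core of such an adaptive scan generator is the standard 4/3-effective-earth-radius beam-height relation, inverted to find the elevation angle that reaches a given temperature height at a given range. The Python sketch below uses hypothetical sounding heights, not an actual Florida profile.

      import math

      R_EFF = (4.0 / 3.0) * 6371.0   # effective earth radius (km), std refraction

      def elevation_for_height(range_km, height_km):
          """Elevation (deg) placing the beam at height_km (above the radar)
          at range_km, from (h + R')^2 = r^2 + R'^2 + 2*r*R'*sin(elev)."""
          h = height_km + R_EFF
          return math.degrees(math.asin((h**2 - range_km**2 - R_EFF**2)
                                        / (2.0 * range_km * R_EFF)))

      # Hypothetical freezing levels: 0 C near 4.5 km, -20 C near 7.5 km
      for rng in (30.0, 60.0, 90.0):
          lo = elevation_for_height(rng, 4.5)
          hi = elevation_for_height(rng, 7.5)
          print(f"{rng:4.0f} km: scan {lo:5.2f} to {hi:5.2f} deg for 0 to -20 C")

    Only the angles between these bounds need dense coverage at each range, which is exactly the time the adaptive strategy recovers for faster updates or more samples per gate.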

  3. Live ultrasound volume reconstruction using scout scanning

    Science.gov (United States)

    Meyer, Amelie; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2015-03-01

    Ultrasound-guided interventions often necessitate scanning of deep-seated anatomical structures that may be hard to visualize. Visualization can be improved using reconstructed 3D ultrasound volumes. High-resolution 3D reconstruction of a large area during clinical interventions is challenging if the region of interest is unknown. We propose a two-stage scanning method allowing the user to perform quick low-resolution scouting followed by high-resolution live volume reconstruction. Scout scanning is accomplished by stacking 2D tracked ultrasound images into a low-resolution volume. Then, within a region of interest defined in the scout scan, live volume reconstruction can be performed by continuous scanning until sufficient image density is achieved. We implemented the workflow as a module of the open-source 3D Slicer application, within the SlicerIGT extension and building on the PLUS toolkit. Scout scanning is performed in a few seconds using 3 mm spacing to allow region of interest definition. Live reconstruction parameters are set to provide good image quality (0.5 mm spacing, hole filling enabled) and feedback is given during live scanning by regularly updated display of the reconstructed volume. Use of scout scanning may allow the physician to identify anatomical structures. Subsequent live volume reconstruction in a region of interest may assist in procedures such as targeting needle interventions or estimating brain shift during surgery.
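
    At their core, both the scout and the live stage stack tracked 2D frames into a voxel grid; only the grid spacing and the covered region differ. The Python sketch below is a heavily simplified nearest-voxel insertion (the PLUS toolkit's reconstructor also performs compounding and hole filling); the pose, spacings and frame are made up, and the pixel size is assumed to be folded into the pose matrix.

      import numpy as np

      def insert_frame(volume, origin, spacing, frame, image_to_ref):
          """Paste one tracked 2D frame into a 3D grid by nearest-voxel insertion.
          image_to_ref: 4x4 pose (tracker) mapping image pixels to reference mm."""
          rows, cols = frame.shape
          jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
          pix = np.stack([jj.ravel(), ii.ravel(),
                          np.zeros(ii.size), np.ones(ii.size)])  # homogeneous
          ref = (image_to_ref @ pix)[:3]                         # mm coordinates
          idx = np.round((ref - origin[:, None]) / spacing[:, None]).astype(int)
          ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
          volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]

      # Scout pass might use 3.0 mm spacing; live reconstruction 0.5 mm in the ROI
      vol = np.zeros((80, 80, 80), dtype=np.uint8)
      origin, spacing = np.zeros(3), np.full(3, 3.0)
      pose = np.eye(4); pose[:3, 3] = [60.0, 60.0, 120.0]        # fake tracker pose
      insert_frame(vol, origin, spacing, np.full((40, 50), 128, np.uint8), pose)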

  4. Analysis of factors influencing 18F-FDG metabolic parameters in PET/CT scan for lung cancer

    Institute of Scientific and Technical Information of China (English)

    马文超; 徐文贵; 董有文; 宋秀宇; 朱研佳

    2015-01-01

    Objective: To explore the effects of body mass index (BMI) and gender on the 18F-FDG metabolic parameters standardized uptake value (SUV) and standard uptake value of lean body mass (SUL) in patients with lung cancer. Methods: 18F-FDG PET/CT images of 50 patients with primary lung cancer were retrospectively analyzed. The SUVmean and SUVmax of the lesions were measured on an AW4.6 post-processing workstation, and PETVCAR (PET Volume Computed Assisted Reading, GE Healthcare) software was used to automatically measure the lesion SULmean, SULmax, and SULpeak. A 1 cm3 region of interest was also placed in the parenchyma of the right posterior lobe of the liver to measure its SUVmean, SUVmax, SULmean, and SULmax, and tumor-to-normal-tissue (T/N) ratios were computed (lesion SUVmax/liver SUVmean, lesion SULmax/liver SULmean, lesion SULpeak/liver SULmean). Correlations of the normal-liver values, the lung lesion values, and the T/N ratios with BMI and gender were analyzed. Results: Liver SUVmean and SUVmax were positively correlated with BMI (r = 0.38 and 0.36), whereas the lesion SUVmean, SUVmax, SULmean, SULmax, and SULpeak showed no significant correlation with BMI (P > 0.05), with the same trend in both gender groups. Among the T/N ratios, only the SUVmax-based T/N ratio was negatively correlated with BMI (r = -0.28). Conclusion: SUV, and SUVmax in particular, depends on BMI and gender, whereas SUL and the SUL-based T/N ratios are unaffected by the patient's BMI and gender, making them more objective for quantitative metabolic analysis of malignant tumors and for treatment response assessment. These results support the clinical application and promotion of the PERCIST criteria, which use SUL as the primary evaluation metric.

  5. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos, and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.
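
    The transient half of such a benchmark rests on point kinetics with an energy-dependent reactivity feedback and adiabatic heating. A one-delayed-group Python sketch in that spirit follows; the kinetics parameters and feedback coefficient are hypothetical, not the MCNP-derived TREAT values.

      import numpy as np

      def transient(rho_rod, feedback, t_end, dt=1e-4,
                    beta=0.007, lam=0.08, Lambda=9e-4):
          """Point kinetics, one delayed group, adiabatic energy feedback.
          rho_rod(t): inserted reactivity; feedback(E): reactivity vs core energy."""
          n, c, E, t = 1.0, beta / (lam * Lambda), 0.0, 0.0   # equilibrium start
          hist = []
          while t < t_end:
              rho = rho_rod(t) + feedback(E)
              dn = ((rho - beta) / Lambda * n + lam * c) * dt
              dc = (beta / Lambda * n - lam * c) * dt
              n, c, E, t = n + dn, c + dc, E + n * dt, t + dt  # adiabatic: dE/dt = P
              hist.append((t, n, E))
          return np.array(hist)

      h = transient(rho_rod=lambda t: 0.005 if t > 0.1 else 0.0,  # step insertion
                    feedback=lambda E: -2e-4 * E,                 # negative feedback
                    t_end=3.0)
      print("relative peak power:", h[:, 1].max())

    The negative energy feedback is what self-limits a temperature-limited transient: power rises until the accumulated energy cancels the inserted reactivity.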

  6. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    Directory of Open Access Journals (Sweden)

    João Rodrigues

    2015-05-01

    Full Text Available Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be contemplated and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred, instead of more cumbersome and marginally precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and, how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations and populations with thousands of spiking units. Forward models for EEG consist in the simple three-shell head model while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation for Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with
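
    The testing loop described here — simulate a network with known coupling, then check that a causality metric recovers it — can be illustrated with a two-node AR process and a plain F-test version of Granger causality. The paper uses the original spectral formulation; this time-domain Python sketch with made-up coefficients only demonstrates the principle.

      import numpy as np

      def granger_f(x, y, p=2):
          """F statistic for 'x Granger-causes y' with p lags."""
          T = len(y)
          Y = y[p:]
          lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
          lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
          ones = np.ones((T - p, 1))
          rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0])**2)
          rss_r = rss(np.hstack([ones, lags_y]))          # y's own past only
          rss_u = rss(np.hstack([ones, lags_y, lags_x]))  # plus x's past
          df2 = T - p - 2 * p - 1
          return ((rss_r - rss_u) / p) / (rss_u / df2)

      # Ground truth: x drives y with a one-sample delay, not vice versa
      rng = np.random.default_rng(0)
      x, y = np.zeros(2000), np.zeros(2000)
      for t in range(1, 2000):
          x[t] = 0.5 * x[t - 1] + rng.normal()
          y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()
      print("x->y F =", granger_f(x, y))   # large
      print("y->x F =", granger_f(y, x))   # near 1: no causality detected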

  7. Accuracy and Uncertainty Analysis of PSBT Benchmark Exercises Using a Subchannel Code MATRA

    Directory of Open Access Journals (Sweden)

    Dae-Hyun Hwang

    2012-01-01

    Full Text Available In the framework of the OECD/NRC PSBT benchmark, the subchannel grade void distribution data and DNB data were assessed by a subchannel code, MATRA. The prediction accuracy and uncertainty of the zone-averaged void fraction at the central region of the 5 × 5 test bundle were evaluated for the steady-state and transient benchmark data. Optimum values of the turbulent mixing parameter were evaluated for the subchannel exit temperature distribution benchmark. The influence of the mixing vanes on the subchannel flow distribution was investigated through a CFD analysis. In addition, a regionwise turbulent mixing model was examined to account for the nonhomogeneous mixing characteristics caused by the vane effect. The steady-state DNB benchmark data with uniform and nonuniform axial power shapes were evaluated by employing various DNB prediction models: EPRI bundle CHF correlation, AECL-IPPE 1995 CHF lookup table, and representative mechanistic DNB models such as a sublayer dryout model and a bubble crowding model. The DNBR prediction uncertainties for various DNB models were evaluated from a Monte-Carlo simulation for a selected steady-state condition.

  8. Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges

    Science.gov (United States)

    Khorrami, Mehdi R.

    2010-01-01

    The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessarily comprehensive aeroacoustic data base are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed - the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.

  9. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
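
    The failure index in such a benchmark correlates the computed mixed-mode energy release rates with a mixed-mode toughness criterion. The abstract does not name the criterion used; the Python sketch below shows the commonly used Benzeggagh-Kenane (B-K) form with hypothetical graphite/epoxy properties.

      def failure_index_bk(GI, GII, GIc, GIIc, eta):
          """Mixed-mode delamination failure index with the Benzeggagh-Kenane
          criterion; a value >= 1.0 predicts delamination growth."""
          GT = GI + GII                              # total energy release rate
          mode_mix = GII / GT if GT > 0 else 0.0
          Gc = GIc + (GIIc - GIc) * mode_mix**eta    # mixed-mode toughness
          return GT / Gc

      # Hypothetical VCCT output and material data (J/m^2)
      print(failure_index_bk(GI=180.0, GII=40.0, GIc=200.0, GIIc=800.0, eta=1.75))

    In a benchmark run, the applied load is scaled until this index reaches 1.0, and the resulting critical load/displacement pair becomes the reference that the automatic-propagation analysis must reproduce.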

  10. The conical scan radiometer

    Science.gov (United States)

    Prosch, T.; Hennings, D.

    1982-07-01

    A satellite-borne conical scan radiometer (CSR) is proposed, offering multiangular and multispectral measurements of Earth radiation fields, including the total radiances, which are not available from conventional radiometers. Advantages of the CSR for meteorological studies are discussed. In comparison to conventional cross track scanning instruments, the CSR is unique with respect to the selected picture element size which is kept constant by means of a specially shaped detector matrix at all scan angles. The conical scan mode offers the chance to improve angular sampling. Angular sampling gaps of previous satellite-borne radiometers can be interpolated and complemented by CSR data. Radiances are measured through 10 radiometric channels which are selected to study cloudiness, water vapor, ozone, surface albedo, ground and mean stratospheric temperature, and aerosols.

  11. Pediatric CT Scans

    Science.gov (United States)

    The Radiation Epidemiology Branch and collaborators have initiated a retrospective cohort study to evaluate the relationship between radiation exposure from CT scans conducted during childhood and adolescence and the subsequent development of cancer.

  12. Pelvic CT scan

    Science.gov (United States)

    ... Abscess (collection of pus) Bladder stones Broken bone Cancer Diverticulitis ... scans over time may increase your risk of cancer. But the risk from any ... to contrast dye. Let your provider know if you have ever ...

  13. Sinus CT scan

    Science.gov (United States)

    ... may be due to: Birth defects Bone fractures Cancer Polyps in the sinuses Sinus infection (sinusitis) ... scans over time may increase your risk for cancer. However, the risk ... to contrast dye. Let your provider know if you have ever ...

  14. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  15. Benchmark and gap analysis of current mask carriers vs future requirements: example of the carrier contamination

    Science.gov (United States)

    Fontaine, H.; Davenet, M.; Cheung, D.; Hoellein, I.; Richsteiger, P.; Dejaune, P.; Torsy, A.

    2007-02-01

    In the framework of the European Medea+ 2T302 MUSCLE project, an extensive mask carrier benchmark was carried out in order to evaluate whether some containers meet the needs of the 65 nm technology. Ten different containers, currently used or expected in the future all along the mask supply chain (blank, maskhouse and fab carriers), were selected at different steps of their life cycle (new, aged, aged & cleaned). The most critical parameters identified for analysis versus future technologies were: automation, particle contamination, chemical contamination (organic outgassing, ionic contamination), cleanability, ESD, airtightness and purgeability. Experimental protocols corresponding to suitable methods were then developed and implemented to test each criterion. The benchmark results are presented, giving a "state of the art" of currently available mask carriers and allowing a gap analysis of the tested parameters against future needs. This approach is detailed through the particular case of carrier contamination measurements. Finally, this benchmark / gap analysis leads to proposed mask carrier specifications (and the associated test protocols) for various key parameters, which can also be taken as guidelines for standardization for the 65 nm technology. The analysis also indicates that none of the tested carriers fulfils all the proposed specifications.

  16. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  17. Scanning ultrafast electron microscopy

    OpenAIRE

    Yang, Ding-Shyue; Mohammed, Omar F.; Zewail, Ahmed H.

    2010-01-01

    Progress has been made in the development of four-dimensional ultrafast electron microscopy, which enables space-time imaging of structural dynamics in the condensed phase. In ultrafast electron microscopy, the electrons are accelerated, typically to 200 keV, and the microscope operates in the transmission mode. Here, we report the development of scanning ultrafast electron microscopy using a field-emission-source configuration. Scanning of pulses is made in the single-electron mode, for whic...

  18. Experimental study about the effect of different scan parameters and post-processing methods of multi-detector CT on metallic artifacts

    Institute of Scientific and Technical Information of China (English)

    储斌; 邹月芬; 王传兵

    2015-01-01

    Objective: To explore the effects of multi-detector computed tomography (MDCT) scan parameters and image post-processing methods on artifacts from metallic implants, in order to find the best strategy for reducing metal artifacts. Methods: A water phantom containing a titanium alloy plate was scanned with MDCT while varying the scan parameters (tube voltage, tube current, pitch, and focal spot), and the artifact areas in the resulting images were measured and correlated with the parameters. Images of 15 patients after vertebral fixation surgery, acquired with conventional scan parameters, were post-processed with both conventional and optimized reconstruction settings, including multi-planar reconstruction (MPR), window width/level adjustment, and the volume rendering technique (VRT), and image quality was evaluated. Results: Artifact areas differed significantly across tube voltages, pitches, and focal spots: the artifact area decreased with increasing tube voltage and with decreasing pitch, and the small focal spot produced smaller artifacts than the large one. Differences across tube currents were not statistically significant. In post-processing, reconstructing the bone window with a soft-tissue algorithm (kernel B40f), viewing at a window level of 600 HU with a window width of 2000 HU, and VRT three-dimensional reconstruction reduced metal artifacts. Conclusion: Appropriate selection of MDCT scan parameters and image post-processing methods can reduce metal artifacts and improve image quality, aiding imaging diagnosis.

  19. Advanced Network Scanning

    Directory of Open Access Journals (Sweden)

    Ashiqur Rahman

    2016-07-01

    Full Text Available Network scanning is a procedure for identifying active hosts on a network, either for the purpose of attacking them or for network security assessment. Scanning procedures, such as ping sweeps and port scans, return information about which IP addresses map to live hosts that are active on the Internet and what services they offer. Another scanning method, inverse mapping, returns information about what IP addresses do not map to live hosts; this enables an attacker to make assumptions about viable addresses. Scanning is one of three components of intelligence gathering for an attacker. In the footprinting phase, the attacker creates a profile of the target organization, with information such as its domain name system (DNS) and e-mail servers, and its IP address range. Most of this information is available online. In the scanning phase, the attacker finds information about the specific IP addresses that can be accessed over the Internet, their operating systems, the system architecture, and the services running on each computer. In the enumeration phase, the attacker gathers information such as network user and group names, routing tables, and Simple Network Management Protocol (SNMP) data
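
    As an illustration of the scanning phase, a minimal TCP connect scan — the simplest port-scan variant — can be written in Python with nothing but the standard library. Run it only against hosts you are authorized to assess.

      import socket
      from concurrent.futures import ThreadPoolExecutor

      def scan_port(host, port, timeout=0.5):
          """TCP connect scan of one port; returns the port if it accepts."""
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              s.settimeout(timeout)
              return port if s.connect_ex((host, port)) == 0 else None

      def scan_host(host, ports):
          with ThreadPoolExecutor(max_workers=64) as pool:
              return [p for p in pool.map(lambda p: scan_port(host, p), ports) if p]

      print(scan_host("127.0.0.1", range(1, 1025)))   # e.g. [22, 80, ...]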

  20. Benchmark analysis of KRITZ-2 critical experiments

    International Nuclear Information System (INIS)

    In the KRITZ-2 critical experiments, criticality and pin power distributions were measured at room temperature and high temperature (about 245 °C) for three different cores (KRITZ-2:1, KRITZ-2:13, KRITZ-2:19) loaded with slightly enriched UO2 or MOX fuels. Recently, international benchmark problems were provided by ORNL and OECD/NEA based on the KRITZ-2 experimental data. Published experimental data for systems with slightly enriched fuels at high temperature are rare, and they are valuable for nuclear data testing. Thus, the benchmark analysis was carried out with the continuous-energy Monte Carlo code MVP and its four nuclear data libraries based on JENDL-3.2, JENDL-3.3, JEF-2.2 and ENDF/B-VI.8. As a result, fairly good agreement with the experimental data was obtained with all libraries for the pin power distributions. However, JENDL-3.3 and ENDF/B-VI.8 under-predict criticality and give too-negative isothermal temperature coefficients for the slightly enriched UO2 cores, although the older nuclear data JENDL-3.2 and JEF-2.2 give rather good agreement with the experimental data. From a detailed study with an infinite unit cell model, it was found that the differences among the results with different libraries are mainly due to the different fission cross sections of U-235 in the energy range below 1.0 eV. (author)

  1. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
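
    The abstract does not spell out the law's functional form; related work by this group describes a power-law "progress curve" for the timing of successive attacks, tau_n = tau_1 * n**(-b). Purely as an illustration of how such a benchmark is fitted, the Python sketch below recovers the exponent from synthetic data.

      import numpy as np

      def fit_progress_curve(intervals):
          """Fit tau_n = tau_1 * n**(-b): a straight line in log-log space."""
          n = np.arange(1, len(intervals) + 1)
          slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
          return -slope, np.exp(intercept)             # escalation b, first gap tau_1

      rng = np.random.default_rng(3)
      tau = 100.0 * np.arange(1, 21)**-0.7 * rng.lognormal(0.0, 0.2, 20)
      b, tau1 = fit_progress_curve(tau)
      print(f"b = {b:.2f}, tau_1 = {tau1:.1f} days")   # recovers b close to 0.7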

  2. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation with Bachelor of Business Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  3. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  4. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses, for humans working inside different EVA suits doing functional tasks in the appropriate simulated reduced-gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as in shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG), may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  5. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager (DPM) is an example of a multi-protocol, multi-VO system for data access on the Grid that went through considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. One source of information is the set of continuous tests run against the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected, under circumstances that include ROOT analyses using TTreeCache and stress tests of metadata performance.
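
    As an illustration of the kind of measurement discussed above - timing repeated reads of a test file through a given frontend - a minimal sketch follows. The endpoint URL is hypothetical, and a real DPM test would drive ROOT/TTreeCache or the XRootD client rather than plain HTTP.

        import time
        import urllib.request

        def time_http_read(url, repeats=5, chunk=1 << 20):
            # Mean wall-clock seconds to stream `url` to completion over HTTP.
            timings = []
            for _ in range(repeats):
                start = time.perf_counter()
                with urllib.request.urlopen(url) as resp:
                    while resp.read(chunk):            # discard 1 MiB chunks
                        pass
                timings.append(time.perf_counter() - start)
            return sum(timings) / len(timings)

        # Hypothetical endpoint; any file served by a DPM HTTP frontend would do.
        # print(time_http_read("https://dpm.example.org/dpm/test/1GB.dat"))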

  6. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to de-escalation of the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proven. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  7. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive built-in database algorithm currently exists to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
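
    The abstract does not give the RegMap SQL itself; the sketch below shows only the standard interval-overlap join that region-mapping queries of this kind rely on - two regions on the same chromosome overlap iff each starts before the other ends - using SQLite and made-up table names.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE tf_sites (chrom TEXT, start_pos INT, end_pos INT, name TEXT);
            CREATE TABLE marks    (chrom TEXT, start_pos INT, end_pos INT, mark TEXT);
            INSERT INTO tf_sites VALUES ('chr1', 100, 200, 'HNF4G'),
                                        ('chr1', 500, 600, 'HNF4G');
            INSERT INTO marks    VALUES ('chr1', 150, 250, 'H3K27ac'),
                                        ('chr2', 150, 250, 'H3K27ac');
        """)

        # Overlap test: a.start <= b.end AND b.start <= a.end (same chromosome).
        overlaps = con.execute("""
            SELECT a.name, b.mark, a.chrom,
                   MAX(a.start_pos, b.start_pos) AS ov_start,
                   MIN(a.end_pos, b.end_pos)     AS ov_end
            FROM tf_sites a
            JOIN marks b
              ON a.chrom = b.chrom
             AND a.start_pos <= b.end_pos
             AND b.start_pos <= a.end_pos
        """).fetchall()
        print(overlaps)    # one overlap on chr1, spanning positions 150-200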

  8. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    Science.gov (United States)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement and delamination-length versus applied load/displacement relationships from a propagation analysis were compared with the benchmark results, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  9. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    OpenAIRE

    Adriana-Mihaela IONESCU; Cristina Elena BIGIOI

    2016-01-01

    Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool, useful in the new competitive environment that has emerged from organizations' need to constantly improve their performance in order to stay competitive. Using this benchmarking process, organiza...

  10. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem were obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
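
    The benchmark problem itself is the subject of the citation error, so the sketch below uses a deliberately simple stand-in problem to show how Kuhn-Tucker (KKT) stationarity can be checked at a candidate solution; it is an illustration of the check, not the paper's problem or solver.

        import numpy as np
        from scipy.optimize import minimize

        # Stand-in problem: minimize (x0-2)^2 + (x1-1)^2 subject to x0 + x1 = 2.
        f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
        g = lambda x: x[0] + x[1] - 2

        res = minimize(f, x0=[0.0, 0.0], constraints={"type": "eq", "fun": g})
        x = res.x                                       # expect (1.5, 0.5)

        # KKT stationarity: grad f(x*) + lambda * grad g(x*) = 0. Solve for
        # lambda in a least-squares sense and check that the residual is ~0.
        grad_f = np.array([2 * (x[0] - 2), 2 * (x[1] - 1)])
        grad_g = np.array([1.0, 1.0])
        lam, *_ = np.linalg.lstsq(grad_g[:, None], -grad_f, rcond=None)
        residual = grad_f + lam[0] * grad_g
        print(x, lam[0], np.linalg.norm(residual))      # residual ~ 0 at a KKT point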

  11. Indian Management Education and Benchmarking Practices: A Conceptual Framework

    OpenAIRE

    Dr. Dharmendra MEHTA; Er. Sunayana SONI; Dr. Naveen K MEHTA; Dr. Rajesh K MEHTA

    2015-01-01

    Benchmarking can be defined as a process through which practices are analyzed to provide a standard measurement (‘benchmark’) of effective performance within an organization (such as a university/institute). Benchmarking is also used to compare performance with other organizations and other sectors. As management education is passing through challenging times, a modern management tool like benchmarking is required to improve the quality of management education and to overcome the challen...

  12. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use in evaluating the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  13. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), Vol. 3, revised as of 2010-10-01, Medicare Advantage Benefits and Plan Approval, § 422.258 Calculation of benchmarks: (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  14. The importance of an accurate benchmark choice: the Spanish case

    OpenAIRE

    Ruiz Campo, Sofía; Monjas Barroso, Manuel

    2012-01-01

    The performance of a fund cannot be judged unless it is first measured, and measurement is not possible without an objective frame of reference. A benchmark serves as a reliable and consistent gauge of the multiple dimensions of performance: return, risk and correlation. The benchmark must be a fair target for investment managers and be representative of the relevant opportunity set. The objective of this paper is to analyse whether the different benchmarks generally used to me...

  15. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  16. Benchmarking in national health service procurement in Scotland.

    Science.gov (United States)

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland. PMID:17958971

  17. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.
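
    As a hedged illustration of the normalization such a framework calls for, the sketch below computes two common hospital energy metrics; the field names and example figures are assumptions for illustration, not the document's draft benchmarks.

        def energy_use_intensity(annual_kwh, floor_area_m2):
            # Site energy use intensity in kWh per square metre per year.
            return annual_kwh / floor_area_m2

        hospitals = [
            {"name": "A", "annual_kwh": 12_500_000, "floor_area_m2": 45_000, "beds": 300},
            {"name": "B", "annual_kwh": 9_800_000,  "floor_area_m2": 30_000, "beds": 220},
        ]
        for h in hospitals:
            eui = energy_use_intensity(h["annual_kwh"], h["floor_area_m2"])
            per_bed = h["annual_kwh"] / h["beds"]       # a second normalizing factor
            print(f"{h['name']}: {eui:.0f} kWh/m2/yr, {per_bed:,.0f} kWh/bed/yr")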

  18. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
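
    A minimal mpi4py sketch of the ping-pong pattern that IMB's point-to-point tests are built on is shown below (run with, e.g., `mpiexec -n 2 python pingpong.py`); it illustrates the measurement, not the IMB implementation itself.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        buf = np.zeros(1 << 20, dtype=np.uint8)         # 1 MiB message
        repeats = 100

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(repeats):
            if rank == 0:
                comm.Send(buf, dest=1)                  # round trip: 0 -> 1 -> 0
                comm.Recv(buf, source=1)
            elif rank == 1:
                comm.Recv(buf, source=0)
                comm.Send(buf, dest=0)
        t1 = MPI.Wtime()

        if rank == 0:
            rtt = (t1 - t0) / repeats
            print(f"avg round trip {rtt * 1e6:.1f} us, "
                  f"~{2 * buf.nbytes / rtt / 1e6:.1f} MB/s")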

  19. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  20. Benchmark hydrogeophysical data from a physical seismic model

    Science.gov (United States)

    Lorenzo, Juan M.; Smolkin, David E.; White, Christopher; Chollett, Shannon R.; Sun, Ting

    2013-01-01

    Theoretical fluid flow models are used regularly to predict and analyze porous media flow but require verification against natural systems. Seismic monitoring in a controlled laboratory setting at a nominal scale of 1:1000 in the acoustic frequency range can help improve fluid flow models as well as elasto-granular models for uncompacted saturated-unsaturated soils. A mid-scale sand tank allows for many highly repeatable, yet flexible, experimental configurations with different material compositions and pump rates while still capturing phenomena such as patchy saturation, flow fingering, or layering. The tank (˜6×9×0.44 m) contains a heterogeneous sand pack (1.52-1.7 phi). In a set of eight benchmark experiments the water table is raised inside the sand body at increments of ˜0.05 m. Seismic events (vertical component) are recorded by a pseudowalkaway 64-channel accelerometer array (20 Hz-20 kHz), at 78 kS/s, in 100-scan stacks so as to optimize signal-to-noise ratio. Three screened well sites monitor water depth (+/-3 mm) inside the sand body. Seismic data sets in SEG Y format are publicly downloadable from the internet (http://github.com/cageo/Lorenzo-2012), in order to allow comparisons of different seismic and fluid flow analyses. The capillary fringe does not appear to saturate completely, as expected, because the interpreted compressional-wave velocity values remain so low. At these water levels there is no large seismic impedance contrast across the top of the water table to generate a clear reflector. Preliminary results indicate an immediate need for several additional experiments, whose data sets will be added to the online database. Future benchmark data sets will grow with a control data set showing conditions in the sand body before water levels rise, and a surface 3D data set. In later experiments, buried sensors will help reduce seismic attenuation effects and in-situ saturation sensors will provide calibration values.
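
    Since the data sets are public, a few lines suffice to inspect one; the sketch below uses the third-party segyio library (any SEG Y reader would do), and the file name is a placeholder standing in for a file from the repository cited above.

        import segyio

        # Placeholder file name; substitute any SEG Y file from the repository.
        with segyio.open("walkaway_line1.sgy", "r", ignore_geometry=True) as f:
            print(f"{f.tracecount} traces, {len(f.samples)} samples per trace")
            first = f.trace[0]                          # numpy array of amplitudes
            print("first trace min/max:", first.min(), first.max())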