WorldWideScience

Sample records for benchmark parameter scan

  1. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H{yields}AA{yields}4{tau} in vector boson fusion with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rottlaender, Iris

    2008-08-15

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H{sub 1}{yields}A{sub 1}A{sub 1}{yields}4{tau}{yields}4{mu}+8{nu} decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment which is of crucial relevance for vector boson fusion searches is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)
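As an illustration of what such a scan setup can look like in practice, the sketch below builds one two-dimensional benchmark plane inside the six-dimensional NMSSM Higgs-sector parameter space (commonly spanned by lambda, kappa, tan beta, mu_eff, A_lambda and A_kappa) by fixing four parameters and gridding the other two. All names, values and ranges here are hypothetical placeholders, not the thesis' benchmark definitions.

```python
# Illustrative sketch (not the thesis' code): define a 2D benchmark plane in a
# 6D parameter space by fixing four parameters and scanning the remaining two.
import itertools

fixed = {"lambda": 0.2, "kappa": 0.1, "tan_beta": 10.0, "mu_eff": 150.0}  # hypothetical

A_lambda_range = [i * 100.0 for i in range(11)]   # 0 ... 1000 GeV (assumed range)
A_kappa_range = [-i * 10.0 for i in range(11)]    # 0 ... -100 GeV (assumed range)

benchmark_plane = [
    {**fixed, "A_lambda": al, "A_kappa": ak}
    for al, ak in itertools.product(A_lambda_range, A_kappa_range)
]
print(len(benchmark_plane), "points in the scanned plane")
```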

  2. Gaia FGK Benchmark Stars and their reference parameters

    CERN Document Server

    Jofre, Paula; Blanco-Cuaresma, Sergi; Soubiran, Caroline

    2013-01-01

In this article we summarise on-going work on the so-called Gaia FGK Benchmark Stars. This work consists of the determination of their atmospheric parameters and of the construction of a high-resolution spectral library. The definition of such a set of reference stars has become crucial in the current era of large spectroscopic surveys. Only with homogeneous and well-documented stellar parameters can one exploit these surveys consistently and understand the structure and history of the Milky Way, and thereby of other galaxies in the Universe.

  3. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameter scanning in TI media.
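The Shanks transform mentioned above is a standard sequence-acceleration device. A minimal sketch of how it would be applied to the partial sums of a quadratic traveltime expansion in η follows; the coefficient values are hypothetical, whereas in the paper they come from precomputed perturbation coefficients.

```python
# Minimal sketch: Shanks transform applied to the partial sums of a truncated
# traveltime expansion t0 + t1*eta + t2*eta**2 (placeholder coefficients).
def shanks(a_prev, a_curr, a_next):
    """Shanks transform of three consecutive partial sums."""
    denom = a_next - 2.0 * a_curr + a_prev
    if abs(denom) < 1e-15:            # sequence already (numerically) converged
        return a_next
    return (a_next * a_prev - a_curr * a_curr) / denom

def traveltime(t0, t1, t2, eta):
    """Shanks-accelerated evaluation of the quadratic expansion in eta."""
    s0 = t0                            # zeroth-order partial sum
    s1 = t0 + t1 * eta                 # first-order partial sum
    s2 = t0 + t1 * eta + t2 * eta**2   # second-order partial sum
    return shanks(s0, s1, s2)

print(traveltime(1.0, 0.08, -0.02, eta=0.2))  # hypothetical coefficients
```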

  4. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-03-21

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.
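In schematic form (notation assumed here, not copied from the paper), the Taylor-type expansion in η and its Shanks-accelerated resummation read:

```latex
% Sketch (assumed notation): quadratic expansion of traveltime in eta and the
% Shanks transform of its partial sums t0, t0+t1*eta, t0+t1*eta+t2*eta^2.
\begin{align}
  t(\eta) &\approx t_0 + t_1\,\eta + t_2\,\eta^{2},
  \qquad t_i \ \text{precomputed in the elliptical background},\\
  t_{S}(\eta) &= t_0 + \frac{t_1^{2}\,\eta}{t_1 - t_2\,\eta}
  \qquad \text{(Shanks transform of the partial sums).}
\end{align}
```

The resummed form is a Padé-type rational approximant in η, which is why it can stay accurate at larger η than the truncated series itself.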

  5. T3PS: Tool for Parallel Processing in Parameter Scans

    CERN Document Server

    Maurer, Vinzenz

    2015-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy to read and write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include: grid scan, random scan, Markov Chain Monte Carlo, numerical optimization. Several example parameter scans are shown and compared with results in the literature.
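The abstract above describes the tool's workflow. A minimal sketch of the underlying idea (not T3PS itself, whose definition-file format and options differ) distributes a grid scan over worker processes and writes plain-text output for plotting:

```python
# Sketch of a parallel grid scan in the spirit of the tool described above.
# The objective function and grid ranges are stand-ins, not T3PS syntax.
import itertools
from multiprocessing import Pool

def evaluate(point):
    x, y = point
    return x, y, (x - 1.0) ** 2 + (y + 0.5) ** 2  # hypothetical model output

if __name__ == "__main__":
    xs = [i * 0.1 for i in range(21)]       # grid ranges, as a scan definition
    ys = [i * 0.1 - 1.0 for i in range(21)]
    grid = itertools.product(xs, ys)
    with Pool(processes=4) as pool:         # spread points over several cores
        rows = pool.map(evaluate, grid)
    with open("scan.dat", "w") as f:        # plain-text output for plotting
        f.write("# x y f\n")
        f.writelines(f"{x:g} {y:g} {v:g}\n" for x, y, v in rows)
```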

  6. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. …

  7. Clusters as benchmarks for measuring fundamental stellar parameters

    CERN Document Server

    Bell, Cameron P M

    2016-01-01

    In this contribution I will discuss fundamental stellar parameters as determined from young star clusters; specifically those with ages less than or approximately equal to that of the Pleiades. I will focus primarily on the use of stellar evolutionary models to determine the ages and masses of stars, as well as discuss the limitations of such models using a combination of both young clusters and eclipsing binary systems. In addition, I will also highlight a few interesting recent results from large on-going spectroscopic surveys (specifically Gaia-ESO and APOGEE/IN-SYNC) which are continuing to challenge our understanding of the formation and early evolutionary stages of young clusters.

  8. Optimization of Voxelization Parameters in Geant4 Tracking and Improvement of the Shooter Benchmarking Program

    CERN Document Server

    Siegel, Zachary

    2013-01-01

The geometry-based tracking of the ubiquitous particle physics simulation toolkit Geant4 utilizes the idea of voxels, which effectively partition regions into multi-dimensional slices that can decrease simulation time. The extent of voxelization and the size of the voxels are determined by a set of parameters which, until now, defaulted to arbitrary numbers. In this report I document how I tested different values for these parameters and determined which values should be the default. I modified the existing G01 Geant4 example program to get an initial look at how the performance depended on the parameters. Then I modified the Shooter benchmark program, which lacks extraneous physics processes, to collect more refined data and to provide a tool for future testers to perform comprehensive benchmarks. To this end, I created a new geometry, added features to aid in testing over ranges of parameters, and set up the default tests to provide a good sampling of different simulation scenarios.

  9. Gravity combined with laser-scan in Grotta Gigante: a benchmark cave for gravity studies

    Science.gov (United States)

    Pivetta, Tommaso; Braitenberg, Carla

    2014-05-01

Laser scanning has become one of the most important topographic techniques of the last decades, due to its ability to reconstruct complex surfaces with high resolution and precision and due to its fast acquisition time. Recently a laser-scan survey was acquired (Fingolo et al., 2011) in the "Grotta Gigante" cave near Trieste, Italy, the biggest cave worldwide according to the Guinness Awards. In this paper this survey is used to obtain a 3D discretization of the cave with prisms. Through this new model, with densities derived from campaign measurements, the exact gravimetric effect of the structure was computed (Nagy et al., 2000) and compared with the gravity observations at the surface. The transition from the cloud of laser-scan points to the prism model was carried out in several processing steps: first, the data density was reduced through an averaging process from over 10000 points/m2 to less than 10 points/m2; then the whole dataset was cleaned of outliers by means of a simple quadratic surface fitted to the data (Turner, 1999). The reduced data points were then divided into the two surfaces of top and bottom that define the prisms. This step was performed using the local regression method (Loess) to calculate a surface located halfway between the top and bottom points. Once the top and bottom interfaces were obtained, it was possible to build the final prism representation and calculate the gravity signal. The observed Bouguer field is explained very well by our model, and the residuals are used to evaluate possible secondary caves. The final prism model, together with the gravity database at the surface and inside the cave, forms a perfect benchmark to test forward and inverse potential field algorithms. References: Fingolo M., Facco L., Ceccato A., Breganze C., Paganini P., Cezza M., Grotta Gigante di Trieste. Tra realtà virtuale e rilievi 3D ad alta risoluzione, Veneto Geologi, 75, pp. 21-25, 2011.
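The prism summation cited above (Nagy et al., 2000) has a well-known closed form for the vertical gravity effect of a right rectangular prism. The sketch below implements that expression; the sign convention (z positive downward) and the example prism and density-contrast values are assumptions for illustration only.

```python
# Sketch of the closed-form vertical gravity effect of a right rectangular
# prism (Nagy et al., 2000 expression), as used to sum the contributions of
# the prisms discretizing the cave. Coordinates are prism-corner offsets from
# the observation point, in metres; density in kg/m^3.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x1, x2, y1, y2, z1, z2, rho):
    """Vertical attraction (m/s^2) of the prism [x1,x2]x[y1,y2]x[z1,z2]."""
    total = 0.0
    for i, x in enumerate((x1, x2)):
        for j, y in enumerate((y1, y2)):
            for k, z in enumerate((z1, z2)):
                r = math.sqrt(x * x + y * y + z * z)
                term = (x * math.log(y + r)
                        + y * math.log(x + r)
                        - z * math.atan2(x * y, z * r))
                total += (-1.0) ** (i + j + k) * term
    return -G * rho * total  # sign convention: z positive downward (assumed)

# A cave is a mass deficit: model it with a negative density contrast.
print(prism_gz(-5, 5, -5, 5, 10, 30, rho=-2600.0))
```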

  10. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

… gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths … to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared …

  11. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

Highlights: • Blind and open simulations of a hydrogen combustion experiment in a large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of the adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider spread. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  12. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models: I. Parameter accuracy and benchmark stars

    CERN Document Server

    Passegger, Vera Maria; Reiners, Ansgar

    2016-01-01

M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used $\chi^2$-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in $T_{\rm eff}$, $\log{g}$, and [Fe/H] …

  13. Benchmark experiment for physics parameters of metallic-fueled LMFBR at FCA

    Energy Technology Data Exchange (ETDEWEB)

    Iijima, S.; Oigawa, H.; Sakurai, T.; Nemoto, T.; Okajima, S. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-09-01

The calculated prediction of reactor physics parameters in a metallic-fueled LMFBR was tested using benchmark experiments performed at FCA. The reactivity feedback parameters, such as sodium void worth, Doppler reactivity worth and the {sup 238}U-capture-to-{sup 239}Pu-fission ratio, have been measured. The fuel expansion reactivity has also been measured. A direct comparison with the results from a similar oxide fuel assembly was made. The analysis was done with the JENDL-2 cross section library and JENDL-3.2. The prediction of reactor physics parameters with JENDL-3.2 in the metallic-fueled core agreed reasonably well with the measured values and showed a similar trend to the results in the oxide fuel core. (author)

  14. Multipinhole SPECT helical scan parameters and imaging volume

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao [Department of Nuclear Medicine, State University of New York at Buffalo, Buffalo, New York 14214 (United States); Wei, Qingyang; Dai, Tiantian; Ma, Tianyu [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Lecomte, Roger [Department of Nuclear Medicine and Radiobiology, Sherbrooke Molecular Imaging Center, Université de Sherbrooke, Sherbrooke, Quebec J1H 5N4 (Canada)

    2015-11-15

Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
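As a worked example of the parameter choice described in the conclusions, the following back-of-envelope sketch derives reference step sizes from the Nyquist criterion and applies the corrections reported above (half the Nyquist value for the axial step, about twice it for the angular step). The resolution and rotation-radius numbers are assumed placeholders, not values from the paper.

```python
# Back-of-envelope sketch of Nyquist-based helical scan step sizes.
import math

resolution_mm = 1.5    # estimated SPECT spatial resolution (assumed)
radius_mm = 30.0       # rotation radius of the pinhole planes (assumed)

nyquist_step_mm = resolution_mm / 2.0                # sampling theorem
axial_step_mm = 0.5 * nyquist_step_mm                # paper: half works best
angular_step_deg = 2.0 * math.degrees(nyquist_step_mm / radius_mm)  # ~twice

print(f"axial step  : {axial_step_mm:.2f} mm per shoot")
print(f"angular step: {angular_step_deg:.2f} deg per shoot")
```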

  15. Simultaneous Thermodynamic and Kinetic Parameters Determination Using Differential Scanning Calorimetry

    Directory of Open Access Journals (Sweden)

    Nader Frikha

    2011-01-01

Problem statement: The determination of reaction kinetics is of major importance, for industrial reactor optimization as well as for environmental reasons and energy limitations. Although calorimetry is often used for the determination of thermodynamic parameters alone, the question that arises is: how can Differential Scanning Calorimetry be applied to the determination of kinetic parameters? The objective of this study consists in proposing an original methodology for the simultaneous determination of thermodynamic and kinetic parameters, using a laboratory-scale Differential Scanning Calorimeter (DSC). The method is applied to the dichromate-catalysed hydrogen peroxide decomposition. Approach: The methodology is based on experiments carried out with a Differential Scanning Calorimeter. The interest of the proposed approach is that it requires very small quantities of reactants (about a few grams). The difficulty lies in the fact that, using such microcalorimeters, the reactant temperature cannot be measured directly, and a particular calibration procedure thus had to be developed to determine the media temperature in an indirect way. The proposed methodology for the determination of kinetic parameters is based on resolution of the coupled heat and mass balances. Results: A complete kinetic law is proposed. The Arrhenius parameters are determined as frequency factor k0 = 1.39×10⁹ s⁻¹ and activation energy E = 54.9 kJ mol⁻¹. The measured enthalpy of reaction is ΔrH = −94 kJ mol⁻¹. Conclusion: The comparison of the results obtained by this original methodology with those obtained using conventional laboratory-scale reactor calorimetry for the kinetics determination shows that the new approach is very relevant.
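To make the reported Arrhenius parameters concrete, this quick evaluation computes the rate constant k(T) = k0·exp(−E/(RT)) at a few example temperatures from the values quoted above (the temperatures themselves are arbitrary illustration points):

```python
# Evaluate the reported Arrhenius law k(T) = k0 * exp(-E / (R * T)) for the
# dichromate-catalysed H2O2 decomposition, using the values quoted above.
import math

k0 = 1.39e9   # s^-1, frequency factor from the study
E = 54.9e3    # J/mol, activation energy from the study
R = 8.314     # J/(mol K), gas constant

for T in (298.15, 313.15, 333.15):  # example temperatures
    k = k0 * math.exp(-E / (R * T))
    print(f"T = {T:6.2f} K  ->  k = {k:.3e} s^-1")
```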

  16. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil

    2016-10-12

The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor's series expansion of the travel-time solution (of the eikonal equation) as a function of parameter η and azimuth angle ϕ. The accuracy of the travel time expansion is enhanced by the use of Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel time sensitivity to parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained from travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method valid for a stack of arbitrary oriented symmetry axis horizontal transversely isotropic layers to convert the effective parameters to the interval layer values.

  17. Benchmarking parameter-free AMaLGaM on functions with and without noise.

    Science.gov (United States)

    Bosman, Peter A N; Grahl, Jörn; Thierens, Dirk

    2013-01-01

We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black box optimization benchmarking (BBOB) framework and compared to a variant with incremental model building (iAMaLGaM). We study the implications of factorizing the covariance matrix in the Gaussian distribution, to use only a few or no covariances. Further, AMaLGaM and iAMaLGaM are also evaluated on the noisy BBOB problems, and we assess how well multiple evaluations per solution can average out noise. Experimental evidence suggests that parameter-free AMaLGaM can solve a wide range of problems efficiently with perceived polynomial scalability, including multimodal problems, obtaining the best or near-best results among all algorithms tested in 2009 on functions such as the step-ellipsoid and Katsuuras, but failing to locate the optimum within the time limit on skew Rastrigin-Bueche separable and Lunacek bi-Rastrigin in higher dimensions. AMaLGaM is found to be more robust to noise than iAMaLGaM due to the larger required population size. Using few or no covariances hinders the EDA from dealing with rotations of the search space. Finally, the use of noise averaging is found to be less efficient than the direct application of the EDA unless the noise is uniformly distributed. AMaLGaM was among the best performing algorithms submitted to the BBOB workshop in 2009.
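For readers unfamiliar with EDAs, the toy sketch below shows the bare maximum-likelihood Gaussian model loop that AMaLGaM builds on. It omits AMaLGaM's adaptations (anticipated mean shift, distribution multiplier), so it is a caricature of the idea rather than the benchmarked algorithm.

```python
# Toy Gaussian estimation-of-distribution algorithm: sample, select the best
# fraction, refit the maximum-likelihood mean and covariance, repeat.
import numpy as np

def gaussian_eda(f, dim, pop=100, sel=0.35, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)
    for _ in range(iters):
        x = rng.multivariate_normal(mean, cov, size=pop)      # sample population
        best = x[np.argsort([f(xi) for xi in x])[: int(sel * pop)]]
        mean = best.mean(axis=0)                              # ML mean estimate
        cov = np.cov(best, rowvar=False) + 1e-10 * np.eye(dim)  # ML covariance
    return mean

sphere = lambda x: float(np.dot(x, x))   # simple benchmark function
print(gaussian_eda(sphere, dim=5))       # converges toward the origin
```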

  18. Benchmarking Naval Shipbuilding With 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    Science.gov (United States)

    2016-04-30

… full benefits of new technologies such as three-dimensional laser scanning (3DLS), product lifecycle management (PLM), and additive manufacturing (AM) … technology adoption and use are important to capturing the full benefits of these technologies. Our research project examines the use of 3DLS, PLM …

  19. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo …

  20. Adaptive Matching of the Scanning Aperture of the Environment Parameter

    Science.gov (United States)

    Choni, Yu. I.; Yunusov, N. N.

    2016-04-01

We analyze a matching system for a scanning aperture antenna radiating through a layer with unpredictably changing parameters. Improved matching has been achieved by adaptive motion of a dielectric plate in the gap between the aperture and the radome. The system is described within the framework of an infinite layered structure. The validity of the model has been confirmed by numerical simulation using CST Microwave Studio software and by experiment. It is shown that the reflection coefficient at the input of some types of matching devices, which is due to the deviation of the load impedance from its nominal value, is determined by a compact and versatile formula. The potential efficiency of the proposed matching system is shown by a specific example, and its dependence on the choice of the starting position of the dielectric plate is demonstrated.

  1. T3PS v1.0: Tool for Parallel Processing in Parameter Scans

    Science.gov (United States)

    Maurer, Vinzenz

    2016-01-01

    T3PS is a program that can be used to quickly design and perform parameter scans while easily taking advantage of the multi-core architecture of current processors. It takes an easy to read and write parameter scan definition file format as input. Based on the parameter ranges and other options contained therein, it distributes the calculation of the parameter space over multiple processes and possibly computers. The derived data is saved in a plain text file format readable by most plotting software. The supported scanning strategies include: grid scan, random scan, Markov Chain Monte Carlo, numerical optimization. Several example parameter scans are shown and compared with results in the literature.

  2. SCAN-based hybrid and double-hybrid density functionals from models without fitted parameters

    OpenAIRE

    Hui, Kerwin; Chai, Jeng-Da

    2015-01-01

By incorporating the nonempirical SCAN semilocal density functional [Sun, Ruzsinszky, and Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression of four existing hybrid and double-hybrid models, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free from any fitted parameters. The SCAN-based double-hybrid functionals consistently outperform their parent SCAN semilocal functional for self-interaction problems …

  3. Using synthetic peptides to benchmark peptide identification software and search parameters for MS/MS data analysis

    Directory of Open Access Journals (Sweden)

    Andreas Quandt

    2014-12-01

Tandem mass spectrometry and sequence database searching are widely used in proteomics to identify peptides in complex mixtures. Here we present a benchmark study in which a pool of 20,103 synthetic peptides was measured and the resulting data set was analyzed using around 1800 different software and parameter set combinations. The results indicate a strong relationship between the performance of an analysis workflow and the applied parameter settings. We present and discuss strategies to optimize parameter settings in order to significantly increase the number of correctly assigned fragment ion spectra and to make the analysis method robust.

  4. SCAN-based hybrid and double-hybrid density functionals from parameter-free models

    CERN Document Server

    Hui, Kerwin

    2015-01-01

    By incorporating the nonempirical SCAN semilocal density functional [Sun, Ruzsinszky, and Perdew, Phys. Rev. Lett. 115, 036402 (2015)] in the underlying expression, we propose one hybrid (SCAN0) and three double-hybrid (SCAN0-DH, SCAN-QIDH, and SCAN0-2) density functionals, which are free of any empirical parameter. The SCAN-based hybrid and double-hybrid functionals consistently outperform their parent SCAN semilocal functional for a wide range of applications. The SCAN-based semilocal, hybrid, and double-hybrid functionals generally perform better than the corresponding PBE-based functionals. In addition, the SCAN0-2 and SCAN-QIDH double-hybrid functionals significantly reduce the qualitative failures of the SCAN semilocal functional, such as the self-interaction error and noncovalent interaction error, extending the applicability of the SCAN-based functionals to a very diverse range of systems.

  5. Parameter estimation for slit-type scanning sensors

    Science.gov (United States)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  6. Combining Total Monte Carlo and Benchmarks for nuclear data uncertainty propagation on an LFRs safety parameters

    CERN Document Server

    Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

    2013-01-01

Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid criterion for accepting random files.
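A schematic version of the accept/reject step, using synthetic stand-in numbers rather than the actual Serpent/ELECTRA results, shows how tightening a chi-squared cutoff against a Jezebel-like benchmark narrows the keff distribution:

```python
# Keep only random nuclear-data files whose chi-squared against a criticality
# benchmark is below a cutoff, then watch the spread in keff shrink.
# All numbers here are synthetic placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)
keff_bench, sigma_bench = 1.0000, 0.0020          # benchmark value and uncertainty
keff_random = rng.normal(1.0005, 0.00075, 1000)   # keff from 1000 random files

chi2 = ((keff_random - keff_bench) / sigma_bench) ** 2
for cutoff in (4.0, 1.0, 0.25):                   # increasingly rigid criteria
    kept = keff_random[chi2 < cutoff]
    print(f"chi2 < {cutoff:4.2f}: {kept.size:4d} files kept, "
          f"spread = {kept.std() * 1e5:.0f} pcm")
```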

  7. Benchmarking environmental and operational parameters through eco-efficiency criteria for dairy farms.

    Science.gov (United States)

    Iribarren, Diego; Hospido, Almudena; Moreira, María Teresa; Feijoo, Gumersindo

    2011-04-15

Life Cycle Assessment (LCA) is often used for the environmental evaluation of agri-food systems due to its holistic perspective. In particular, the assessment of milk production at farm level requires the evaluation of multiple dairy farms to guarantee the representativeness of the study when a regional perspective is adopted. This article shows the joint implementation of LCA and Data Envelopment Analysis (DEA) in order to avoid the formulation of an average farm, thus preventing the standard deviations associated with the use of average inventory data, while attaining the characterization and benchmarking of the operational and environmental performance of dairy farms. Within this framework, 72 farms located in Galicia (NW Spain) were subject to an LCA+DEA study, which identified those farms with an efficient operation. Furthermore, target input consumption levels were benchmarked for each inefficient farm, and the corresponding target environmental impacts were calculated so that eco-efficiency criteria were verified. Thus, average reductions of up to 38% were found for input consumption levels, leading to impact reductions above 20% for every environmental impact category. Finally, the economic savings arising from efficient farming practices were also estimated: savings of up to €0.13 per liter of raw milk were calculated, which means extra profits of up to 40% of the final raw milk price.

  8. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Wilm, Jakob; Pedersen, David Bue

    2016-01-01

Structured light systems are popular in part because they can be constructed from off-the-shelf low-cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much-needed guide for practitioners. Our quantitative …

  9. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spatial …

  10. FAITH: Scanning of Rich Web Applications for Parameter Tampering Vulnerabilities

    CERN Document Server

    Fung, Adonis P H; Wong, T Y

    2012-01-01

    Modern HTML forms are designed to generate form controls dynamically and submit over AJAX as a result of recent advances in Javascript programming techniques. Existing scanners are constrained by interacting only with traditional forms, and vulnerabilities are often left undetected even after scrutiny. In this paper, we overcome a number of client-side challenges that used to make automated fuzzing of form submissions difficult and unfaithful. We build FAITH, a pragmatic scanner for uncovering parameter tampering vulnerabilities in real-world rich web applications. It is the first scanner that enables fuzzing in most kinds of form submissions while faithfully preserving the required user actions, HTML 5, AJAX, anti-CSRF tokens and dynamic form updates. The importance of this work is demonstrated by the severe vulnerabilities uncovered, including a way to bypass the most-trusted One-Time Password (OTP) in one of the largest multinational banks. These vulnerabilities cannot be detected by existing scanners.

  11. Analysis of the Impact of Source Region Structures on Seismological Parameter Scanning

    Institute of Scientific and Technical Information of China (English)

    Chen Yuwei; Wang Xingzhou; Miao Peng; Chen Anguo; Li Lingli; Hong Dequan

    2010-01-01

Taking moderate-strong earthquakes in South, North and West China as the research subjects and taking into consideration the fault strikes in these regions, this paper selects 8 kinds of seismology indexes with clear physical significance and strong independence to carry out spatial scanning of the parallel, vertical and oblique slip of faults along the fault strike. Based on the size of the correlation coefficients between the scanning curve and the source region curve, we quantitatively analyze the differences between scan results among different slip modes and study the impact of fault strike in different tectonic divisions on scanning results and the variation rules of seismological parameters. The results show that not only does the change of spatial parameters have a great influence on seismological parameter scanning, but so does the fault strike in the source region. This paper presents the optimum condition parameters with the least spatial influence on the scanning scope for different magnitude seismology indexes and analyzes the possible influence of fault strike on seismological parameter scanning results.

  12. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)], E-mail: bekarkb@ornl.gov; Azmy, Yousry Y. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: yyazmy@ncsu.edu

    2009-04-15

    We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many ({approx}25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.

  13. Design of operation parameters of a high speed TDI CCD line scan camera

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

This paper analyzes the operation parameters of the time delay and integration (TDI) line scan CCD camera, such as resolution, line rate and clock frequency, and deduces their mathematical relationship. By analyzing and calculating these parameters, the working clocks of the TDI CCD line scan camera are designed, which guarantees the synchronization of the line scan rate and the camera movement speed. The IL-E2 TDI CCD of DALSA Co. is used as the sensor of the camera in this paper. The working clock generator used for the TDI CCD sensor is realized using a programmable logic device (PLD). The experimental results show that the working clock generator circuit satisfies the requirements of a high speed TDI CCD line scan camera.

  14. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially considering the inaccuracy of the real speed of the motion. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocities of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic conception of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first improved. The optimal parameter approximation based on the least-squares criterion is then realized in solving those equations derived from the data processing. The simulation results verify the validity of the proposed scheme.

  15. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild;

    2015-01-01

Background. Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section … of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors …

  16. Study on the parameters of the scanning system for the 300 keV electron accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Leo, K. W.; Chulan, R. M., E-mail: leo@nm.gov.my; Hashim, S. A.; Baijan, A. H.; Sabri, R. M.; Mohtar, M.; Glam, H.; Lojius, L.; Zahidee, M.; Azman, A.; Zaid, M. [Malaysian Nuclear Agency, Bangi, 43000 Kajang. Selangor (Malaysia)

    2016-01-22

This paper describes the method used to identify the magnetic coil parameters of the scanning system. This locally designed low energy electron accelerator, with a present energy of 140 keV, will be upgraded to 300 keV. In this accelerator, a scanning system is required to deflect the energetic electron beam across a titanium foil in the vertical and horizontal directions. The excitation current of the magnetic coil is determined by the energy of the electron beam. Therefore, the magnetic coil parameters must be identified to ensure the matching of the beam energy and the excitation coil current. As a result, the essential parameters of the effective lengths for the X-axis and Y-axis have been found to be 0.1198 m and 0.1134 m, and the required excitation coil currents, which depend on the electron beam energies, have been identified.

  17. Comparison of a novel surface laser scanning anthropometric technique to traditional methods for facial parameter measurements.

    Science.gov (United States)

    Joe, Paula S; Ito, Yasushi; Shih, Alan M; Oestenstad, Riedar K; Lungu, Claudiu T

    2012-01-01

This study was designed to determine if three-dimensional (3D) laser scanning techniques could be used to collect accurate anthropometric measurements, compared with traditional methods. The use of an alternative 3D method would allow for quick collection of data that could be used to change the parameters used for facepiece design, improving fit and protection for a wider variety of faces. In our study, 10 facial dimensions were collected using both the traditional calipers-and-tape method and a Konica-Minolta Vivid9i laser scanner. Scans were combined using RapidForm XOR software to create a single complete facial geometry of the subject as a triangulated surface with an associated texture image from which to obtain measurements. A paired t-test was performed on subject means in each measurement by method. Nine subjects were used in this study: five males (one African-American and four Caucasian) and four females, displaying a range of facial dimensions. Five measurements showed significant differences (p < 0.05). Laser scanning measurements showed high precision and accuracy when compared with traditional methods. The significant differences found can reflect very small changes in measurements and are unlikely to present a practical difference. The laser scanning technique demonstrated reliable and quick anthropometric data collection for use in future projects in redesigning respirators.
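For illustration, the paired comparison described above reduces to a paired t-test on per-subject means. The sketch below uses fabricated placeholder values purely to show the test, not data from the study.

```python
# Paired t-test on per-subject means for one facial dimension measured by
# calipers and by the laser-scan pipeline. Values are illustrative only.
from scipy import stats

caliper_mm = [118.2, 121.5, 115.9, 124.3, 119.8, 117.4, 122.0, 116.5, 120.1]
scan_mm    = [118.6, 121.1, 116.4, 124.0, 120.3, 117.9, 121.6, 116.9, 120.5]

t, p = stats.ttest_rel(caliper_mm, scan_mm)
print(f"t = {t:.3f}, p = {p:.3f}")  # p < 0.05 would flag a systematic offset
```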

  18. Bayesian inference of genetic parameters for ultrasound scanning traits of Kivircik lambs.

    Science.gov (United States)

    Cemal, I; Karaman, E; Firat, M Z; Yilmaz, O; Ata, N; Karaca, O

    2017-03-01

Ultrasound scanning traits have been adopted in selection programs in many countries to improve carcass traits for lean meat production. As the genetic parameters of the traits of interest are important for breeding programs, the present investigation aimed at estimating these parameters. The estimated parameters were direct and maternal heritability as well as genetic correlations between the studied traits. The traits were backfat thickness (BFT), skin+backfat thickness (SBFT), eye muscle depth (MD) and live weight on the day of scanning (LW). The breed investigated was Kivircik, which has a high quality of meat. Six different multi-trait animal models were fitted to determine the most suitable model for the data using a Bayesian approach. Based on the deviance information criterion, a model that includes direct additive genetic effects, maternal additive genetic effects, direct-maternal genetic covariance and maternal permanent environmental effects was revealed to be the most appropriate for the data, and therefore inferences were built on the results of that model. The direct heritability estimates for BFT, SBFT, MD and LW were 0.26, 0.26, 0.23 and 0.09, whereas the maternal heritability estimates were 0.27, 0.27, 0.24 and 0.20, respectively. Negative genetic correlations were obtained between direct and maternal effects for BFT, SBFT and MD. Both direct and maternal genetic correlations between traits were favorable, whereas BFT-MD and SBFT-MD had negligible direct genetic correlations. The highest direct and maternal genetic correlations were between BFT and SBFT (0.39) and between MD and LW (0.48), respectively. Our results, in general, indicated that maternal effects should be accounted for in the estimation of genetic parameters of ultrasound scanning traits in Kivircik lambs, and that SBFT can be used as a selection criterion to improve BFT.

  19. Sub-0.1 nm-resolution quantitative scanning transmission electron microscopy without adjustable parameters

    Energy Technology Data Exchange (ETDEWEB)

    Dwyer, C. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Department of Materials Engineering, Monash University, Victoria 3800 (Australia); ARC Centre of Excellence for Design in Light Metals, Monash University, Victoria 3800 (Australia); Maunders, C. [Department of Materials Engineering, Monash University, Victoria 3800 (Australia); Zheng, C. L. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Weyland, M.; Etheridge, J. [Monash Centre for Electron Microscopy, Monash University, Victoria 3800 (Australia); Department of Materials Engineering, Monash University, Victoria 3800 (Australia); Tiemeijer, P. C. [FEI Electron Optics, P.O. Box 80066, 5600 KA Eindhoven (Netherlands)

    2012-05-07

    Atomic-resolution imaging in the scanning transmission electron microscope (STEM) constitutes a powerful tool for nanostructure characterization. Here, we demonstrate the quantitative interpretation of atomic-resolution high-angle annular dark-field (ADF) STEM images using an approach that does not rely on adjustable parameters. We measure independently the instrumental parameters that affect sub-0.1 nm-resolution ADF images, quantify their individual and collective contributions to the image intensity, and show that knowledge of these parameters enables a quantitative interpretation of the absolute intensity and contrast across all accessible spatial frequencies. The analysis also provides a method for the in-situ measurement of the STEM's effective source distribution.

  20. Validation of CENDL and JEFF evaluated nuclear data files for TRIGA calculations through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors

    Energy Technology Data Exchange (ETDEWEB)

    Uddin, M.N. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh); Sarker, M.M. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh); Khan, M.J.H. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh)], E-mail: jahirulkhan@yahoo.com; Islam, S.M.A. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh)

    2009-10-15

The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of the TRX and BAPL benchmark lattices of thermal reactors, for the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for the lattice code WIMS was generated using the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO{sub 2}-1, BAPL-UO{sub 2}-2 and BAPL-UO{sub 2}-3 served as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of these lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as to the results of the Monte Carlo code MCNP. It was found that in most cases the integral parameters show good agreement with the experiment and the MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE, and it was found that the group constants are nearly identical, with very insignificant differences. This analysis therefore validates the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking of the integral parameters of the TRX and BAPL lattices, and provides a basis for further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.

  1. Electron transport parameters in CO$_2$: scanning drift tube measurements and kinetic computations

    CERN Document Server

    Vass, M; Loffhagen, D; Pinhao, N; Donko, Z

    2016-01-01

This work presents transport coefficients of electrons (bulk drift velocity, longitudinal diffusion coefficient, and effective ionization frequency) in CO2, measured under time-of-flight conditions over a wide range of the reduced electric field, 15 Td ≤ E/N ≤ 2660 Td, in a scanning drift tube apparatus. The data obtained in the experiments are also applied to determine the effective steady-state Townsend ionization coefficient. These parameters are compared to the results of previous experimental studies, as well as to results of various kinetic computations: solutions of the electron Boltzmann equation under different approximations (multiterm and density gradient expansions) and Monte Carlo simulations. The experimental data extend the range of E/N compared with previous measurements and are consistent with most of the transport parameters obtained in these earlier studies. The computational results point out the range of applicability of the respective approaches to determine the different measured transport …

  2. The effect of pupil dilation on AL-Scan biometric parameters.

    Science.gov (United States)

    Can, Ertuğrul; Duran, Mustafa; Çetinkaya, Tuğba; Arıtürk, Nurşen

    2016-04-01

The purpose of this study was to investigate the effects of pupil dilation on the parameters of the AL-Scan (Nidek Co., Ltd, Gamagori, Japan). We compared the measurements of axial length (AL), anterior chamber depth (ACD), central corneal keratometry reading, pupil diameter, and intraocular lens (IOL) power of 72 eyes of 72 healthy volunteers and patients scheduled for cataract surgery, before and 45 min after instillation of cyclopentolate hydrochloride 1 %, using the AL-Scan. Intraobserver repeatability was assessed by taking three consecutive recordings of ACD and AL. Only ACD readings were significantly different between predilation and postdilation (P < 0.001). Only two cases in the study demonstrated changes in IOL power higher than 0.5 D. The intraobserver repeatability was good (CV values for ACD and AL were 0.16 and 0.20 %, respectively). Dilated pupil size did not affect the measurement of IOL power using the AL-Scan optical biometer, but the increase in ACD after dilation should be taken into account when performing refractive surgeries in which ACD is very important, such as phakic anterior chamber IOL implantation.

  3. Combining Total Monte Carlo and Benchmarks for Nuclear Data Uncertainty Propagation on a Lead Fast Reactor's Safety Parameters

    Science.gov (United States)

    Alhassan, E.; Sjöstrand, H.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.

    2014-04-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of 239Pu random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with NJOY-99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain distribution in keff. The mean of the keff distribution obtained was compared with the major nuclear data libraries, JEFF-3.1.1, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on χ2 values obtained using the 239Pu Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably from 748 to 443 pcm by applying a more rigid acceptance criteria for accepting random files.

  4. Algorithm for the Automatic Estimation of Agricultural Tree Geometric Parameters Using Airborne Laser Scanning Data

    Science.gov (United States)

    Hadaś, E.; Borkowski, A.; Estornell, J.

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas make it possible to determine orchard structures and geometric parameters of individual trees. In this research we propose an automatic method for determining geometric parameters of individual olive trees from ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For points located inside each polygon, we select the maximum height and the minimum height and then estimate the tree height and the crown base height. We use the first two components of the Principal Component Analysis (PCA) as the estimators for crown diameters (a sketch of this extraction step follows below). The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points/m². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
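
    A minimal sketch of the per-tree extraction step described above: tree height and crown base height from the height extremes inside one crown polygon, and crown diameters from the spread along the first two PCA axes of the horizontal coordinates. Using the full extent along each principal axis as the diameter estimate is an assumption about the exact convention; the point cloud is synthetic.

```python
import numpy as np

# Sketch of per-tree parameter extraction for the points falling inside
# one crown polygon. The full-extent-along-PC convention for diameters
# is an assumption, not necessarily the authors' exact definition.
def tree_parameters(points, ground_z=0.0):
    """points: (n, 3) array of normalized ALS returns for one tree."""
    z = points[:, 2]
    tree_height = z.max() - ground_z
    crown_base_height = z.min() - ground_z     # lowest crown return

    # PCA on the horizontal coordinates: eigenvectors of the covariance
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy, rowvar=False))
    axes = eigvecs[:, np.argsort(eigvals)[::-1]]    # PC1 first

    proj = xy @ axes                                 # scores on PC1, PC2
    d_long = proj[:, 0].max() - proj[:, 0].min()     # longer crown diameter
    d_short = proj[:, 1].max() - proj[:, 1].min()    # shorter crown diameter
    return tree_height, crown_base_height, d_long, d_short

# Synthetic crown: elongated in x, returns between 1.5 m and 4 m height.
rng = np.random.default_rng(3)
pts = np.column_stack([rng.normal(0, 2.0, 500),
                       rng.normal(0, 1.2, 500),
                       rng.uniform(1.5, 4.0, 500)])
print(tree_parameters(pts))
```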

  5. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses and find out how efficient the performance of this company is in comparison with top companies within the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristic of financial analysis, which financial benchmarking is based on a...

  6. Investigation of scanning parameters for thyroid fine needle aspiration cytology specimens: A pilot study

    Directory of Open Access Journals (Sweden)

    Maheswari S Mukherjee

    2015-01-01

    Background: Interest in developing more feasible and affordable applications of virtual microscopy in the field of cytology continues to grow. Aims: The aim of this study was to investigate the scanning parameters for thyroid fine needle aspiration (FNA) cytology specimens. Subjects and Methods: A total of twelve glass slides from thyroid FNA cytology specimens were digitized at ×40 with a 1 micron (μ) interval using seven focal plane (FP) levels (Group 1), five FP levels (Group 2), and three FP levels (Group 3) on an iScan Coreo Au scanner (Ventana, AZ, USA), producing 36 virtual images (VI). With an average wash-out period of 2 days, three participants diagnosed the preannotated cells of Groups 1, 2, and 3 using BioImagene's Image Viewer (version 3.1; Ventana, Inc., Tucson, AZ, USA), and the corresponding 12 glass slides (Group 4) using conventional light microscopy. Results: All three raters correctly identified and showed complete agreement on the glass and VI for 86% of the cases at FP level 3, and for 83% of the cases at both FP levels 5 and 7. The intra-observer concordance between the glass slides and VI for all three raters was highest (97%) for level 3 and glass, and the same (94%) for level 5 and glass and for level 7 and glass. The inter-rater reliability was found to be highest for the glass slides and three FP levels (77%), followed by five FP levels (69.5%) and seven FP levels (69.1%). Conclusions: This pilot study found that among the three different FP levels, the VI digitized using three FP levels had slightly higher concordance, intra-observer concordance, and inter-rater reliability. Scanning additional levels above three FP levels did not improve concordance. We believe that there is no added benefit to acquiring five FP levels or more, especially when considering the file size and storage costs. Hence, this study reports that three FP levels and a 1 μ interval could be the potential scanning parameters for thyroid FNA cytology specimens.

  7. [Duplex scanning of hemodynamic parameters of the celiac trunk and superior mesenteric artery in healthy volunteers].

    Science.gov (United States)

    Kuntsevich, G I; Shilenok, D V

    1993-07-01

    The possibility of studying the hemodynamics in the visceral arteries of the abdominal aorta by duplex scanning was demonstrated. The results of examination of 30 healthy persons are discussed. Characteristic features of the blood flow spectrogram of the celiac trunk and superior mesenteric artery were revealed. According to the spectrogram, the flow of blood in the celiac trunk is characterized by a rapidly increasing peak systolic rate and a slowly diminishing diastolic rate to approximately 1/3 of the maximal value of systole. The character of the blood flow in the superior mesenteric artery is distinguished by a lesser peak systolic rate and the presence of a short-lived reverse rate before the sloping diastolic curve. Normal values of the blood flow volume rate were determined: 649 ± 25.4 ml/min in the celiac trunk and 395 ± 20.5 ml/min in the superior mesenteric artery. Among the advantages of the duplex scanning method are its noninvasiveness and safety and the possibility of dynamic study of the hemodynamic parameters.

  8. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  9. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  10. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    Science.gov (United States)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. The parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA) and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and quantitative measurement of the heat-affected depth is analysed by the Vickers hardness test. The results indicate that the laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression model is found to be in good agreement with experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.
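
    A second-order response-surface model of the kind reported above can be fitted by ordinary least squares on a design matrix with linear, squared and interaction terms. The sketch below uses made-up coded factor settings and responses, not the study's data.

```python
import numpy as np

# Sketch of a second-order response-surface fit:
# T = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj).
def quadratic_design_matrix(X):
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                 # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]            # squared terms
    cols += [X[:, i] * X[:, j]                          # interactions
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(31, 2))     # 31 runs, 2 coded factors (placeholder)
y = 900 + 120*X[:, 0] - 60*X[:, 1] + 25*X[:, 0]*X[:, 1] + rng.normal(0, 5, 31)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(f"R^2 = {r2:.3f}")
```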

  11. Modelling anaerobic co-digestion in Benchmark Simulation Model No. 2: Parameter estimation, substrate characterisation and plant-wide integration.

    Science.gov (United States)

    Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf

    2016-07-01

    Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dosing strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable to the Anaerobic Digestion Model No. 1 (ADM1). Long chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, the protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. The model was then applied in a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances.
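
    One common way to graft an inhibition effect such as LCFA inhibition onto an ADM1-style uptake rate is a non-competitive factor multiplying the Monod kinetics. The sketch below shows that standard form; it is an illustration of the general approach, not necessarily the exact term used by the authors, and all numbers are placeholders.

```python
# Sketch: non-competitive LCFA inhibition on an ADM1-style Monod rate.
# The inhibition factor I = KI / (KI + S_lcfa) tends to 1 when LCFA is
# low and to 0 when LCFA accumulates.
def uptake_rate(km, S, Ks, X, S_lcfa, KI):
    """km: max specific uptake, S: substrate conc., X: biomass conc."""
    monod = km * S / (Ks + S) * X
    inhibition = KI / (KI + S_lcfa)
    return monod * inhibition

# Illustrative numbers only:
print(uptake_rate(km=6.0, S=0.5, Ks=0.15, X=1.2, S_lcfa=0.8, KI=0.4))
```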

  12. Combining Total Monte Carlo and Benchmarks for Nuclear Data Uncertainty Propagation on a Lead Fast Reactor's Safety Parameters

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

    2014-01-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on some reactor safety parameters for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-format libraries, generated using the TALYS based system were processed into ACE format with NJOY99.336 code and used as input into the Serpent Monte Carlo code to obtain distribution in reactor safety parameters. The distribution in keff obtained was compar...

  13. Derivation of tree stem structural parameters from static terrestrial laser scanning data

    Science.gov (United States)

    Tian, Wei; Lin, Yi; Liu, Yajing; Niu, Zheng

    2014-11-01

    Accurate tree-level characteristic information is increasingly demanded for forest management and environmental protection. The cutting-edge remote sensing technique of terrestrial laser scanning (TLS) shows the potential to fill this gap. This study focuses on exploring methods for deriving various tree stem structural parameters, such as stem position, diameter at breast height (DBH), the degree of stem shrinkage, and the elevation angle and azimuth angle of stem inclination. The test data were collected with a Leica HDS6100 TLS system in Seurasaari, Southern Finland in September 2010. In the field, the reference positions and DBHs of 100 trees were measured manually. The isolation of individual trees is based on interactive segmentation of point clouds. The estimation of stem position and DBH is based on a scheme of layering and then least-squares circle fitting in each layer (sketched below). The slope of a robust fit line between the height of each layer and its diameter is used to characterize the stem shrinkage. The elevation angle of stem inclination is described by the angle between the ground plane and the fitted stem axis. The angle between the north direction and the fitted stem axis gives the azimuth angle of stem inclination. The estimation of the DBHs was performed with an R square (R2) of 0.93 and a root mean square error (RMSE) of 0.038 m. The average angle corresponding to stem shrinkage is -1.86°. The elevation angles of stem inclinations ranged from 31° to 88.3°. The results have basically validated TLS for deriving multiple stem structural parameters, which helps to better capture individual tree characteristics.
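
    The per-layer circle fit can be sketched with the algebraic (Kåsa) least-squares formulation, which solves a linear system for the circle center and radius; the exact fitting variant used in the paper may differ, and the stem slice below is synthetic.

```python
import numpy as np

# Sketch of the per-layer least-squares circle fit used to estimate
# stem position and diameter (DBH at the breast-height layer).
def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle; returns (xc, yc, r)."""
    # Solve  x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc**2 + yc**2)
    return xc, yc, r

# Synthetic stem slice: points on a 0.15 m radius circle plus noise.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
x = 10 + 0.15 * np.cos(t) + rng.normal(0, 0.005, 200)
y = 5 + 0.15 * np.sin(t) + rng.normal(0, 0.005, 200)
xc, yc, r = fit_circle(x, y)
print(f"stem position ({xc:.2f}, {yc:.2f}) m, DBH = {2*r:.3f} m")
```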

  14. Radial velocity observations of the 2015 Mar. 20 eclipse. A benchmark Rossiter-McLaughlin curve with zero free parameters

    Science.gov (United States)

    Reiners, A.; Lemke, U.; Bauer, F.; Beeck, B.; Huke, P.

    2016-10-01

    progress, accurate observations of solar line profiles across the solar disk are suggested. We publish our RVs taken during solar eclipse as a benchmark curve for codes calculating the RM effect and for models of solar surface velocities and line profiles.

  15. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander;

    2016-01-01

    in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher...

  16. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander;

    2016-01-01

    for the control strategy are summarized. This ensures result comparability as well as stable test conditions. E.g., the stack temperature fluctuation is minimized to about 1 °C. The experiments demonstrate that reactants pressures differ up to 12 kPa if pressure control positions are varied, resulting...... in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher...

  17. A benchmark on the calculation of kinetic parameters based on reactivity effect experiments in the CROCUS reactor

    Energy Technology Data Exchange (ETDEWEB)

    Paratte, J.M. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Frueh, R. [Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Kasemeyer, U. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Kalugin, M.A. [Kurchatov Institute, 123182 Moscow (Russian Federation); Timm, W. [Framatome-ANP, D-91050 Erlangen (Germany); Chawla, R. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2006-05-15

    Measurements in the CROCUS reactor at EPFL, Lausanne, are reported for the critical water level and the inverse reactor period for several different sets of delayed supercritical conditions. The experimental configurations were also calculated by four different calculation methods. For each of the supercritical configurations, the absolute reactivity value has been determined in two different ways, viz.: (i) through direct comparison of the multiplication factor obtained employing a given calculation method with the corresponding value for the critical case (calculated reactivity: ρ_calc); (ii) by application of the inhour equation using the kinetic parameters obtained for the critical configuration and the measured inverse reactor period (measured reactivity: ρ_meas). The calculated multiplication factors for the reference critical configuration, as well as ρ_calc for the supercritical cases, are found to be in good agreement. However, the values of ρ_meas produced by two of the applied calculation methods differ appreciably from the corresponding ρ_calc values, clearly indicating deficiencies in the kinetic parameters obtained from these methods.
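
    Step (ii) above, converting a measured stable inverse period into a reactivity via the inhour equation, can be sketched in a few lines. The six-group delayed neutron constants and the generation time below are generic illustrative values, not the CROCUS kinetic parameters.

```python
# Sketch: the inhour equation converts a measured stable inverse
# period omega (= 1/T) into a reactivity, given the kinetic parameters
# of the critical configuration:
#   rho = omega*Lambda + sum_i omega*beta_i / (omega + lambda_i)
# The constants below are generic U-235-like placeholders.
LAMBDA = 5.0e-5                                       # generation time [s]
beta_i = [2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4]
lam_i = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]    # decay constants [1/s]

def reactivity_inhour(omega):
    """rho from the inhour equation for a stable inverse period omega."""
    return omega * LAMBDA + sum(
        omega * b / (omega + l) for b, l in zip(beta_i, lam_i)
    )

omega = 1.0 / 60.0                                    # measured inverse period [1/s]
rho = reactivity_inhour(omega)
print(f"rho_meas = {rho:.5f} (= {rho*1e5:.0f} pcm)")
```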

  18. Optimized treatment parameters to account for interfractional variability in scanned ion beam therapy of lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Brevet, Romain

    2015-02-04

    Scanned ion beam therapy of lung tumors is severely limited in its clinical applicability by intrafractional organ motion, interference effects between beam and tumor motion (interplay), as well as interfractional anatomic changes. To compensate for dose deterioration by intrafractional motion, motion mitigation techniques such as gating have been developed. The latter confines the irradiation to a predetermined breathing state, usually the stable end-exhale phase. However, optimization of the treatment parameters is needed to further improve target dose coverage and normal tissue sparing. The aim of the study presented in this dissertation was to determine treatment planning parameters that make it possible to recover good target coverage and homogeneity during a full course of lung tumor treatments. For 9 lung tumor patients from MD Anderson Cancer Center (MDACC), a total of 70 weekly time-resolved computed tomography (4DCT) datasets were available, which depict the evolution of the patient anatomy over the fractions of the treatment. Using the GSI in-house treatment planning system (TPS) TRiP4D, 4D simulations were performed on each weekly 4DCT for each patient using gating and optimization of a single treatment plan based on a planning CT acquired prior to treatment. It was found that using a large beam spot size, a short gating window (GW), additional margins and multiple fields gave the best results, yielding an average target coverage (V95) of 96.5%. Two motion mitigation techniques, one approximating the rescanning process (multiple irradiations of the target with a fraction of the planned dose) and one combining the latter with gating, were then compared to gating alone. Neither showed an improvement in target dose coverage or in normal tissue sparing. Finally, the total dose delivered to each patient in a simulation of a fractionated treatment was calculated and clinical requirements in terms of target coverage and normal tissue sparing were

  19. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    Energy Technology Data Exchange (ETDEWEB)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdomand St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdomand Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  20. Stripping chronopotentiometry at scanned deposition potential (SSCP). Part 2. Determination of metal ion speciation parameters

    NARCIS (Netherlands)

    Leeuwen, van H.P.; Town, R.M.

    2003-01-01

    Stripping chronopotentiometry at scanned deposition potential (SSCP) generates curves that are fundamentally different in form from classical polarographic waves. Still, despite their steeper slope and non-linear log plot, the shift in the SSCP half-wave deposition potential can be interpreted in a

  1. TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)

    2008-07-01

    We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)

  2. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University

    2009-01-01

    Improved TORT solutions to the 3D transport codes' suite of benchmarks exercise are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.

  3. Radioiodine scan index: A simplified, quantitative treatment response parameter for metastatic thyroid carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jong Ryool; Ahn, Byeong Cheol; Jeong, Shin Young; Lee, Sang Woo; Lee, Jae Tae [Dept. of Nuclear Medicine, Kyungpook National University School of Medicine and Hospital, Daegu (Korea, Republic of)

    2015-09-15

    We aimed to develop and validate a simplified, novel quantification method for radioiodine whole-body scans (WBSs) as a predictor of the treatment response in differentiated thyroid carcinoma (DTC) patients with distant metastasis. We retrospectively reviewed serial WBSs after radioiodine treatment from 2008 to 2011 in patients with metastatic DTC. For standardization of TSH stimulation, only a subset of patients whose TSH level was fully stimulated (TSH > 80 mU/l) was enrolled. The radioiodine scan index (RSI) was calculated as the ratio of tumor-to-brain uptake. We compared correlations between the RSI and the TSH-stimulated serum thyroglobulin (TSHsTg) level and between the RSI and the Tg reduction rate of consecutive radioiodine treatments. A total of 30 rounds of radioiodine treatment for 15 patients were eligible. Tumor histology was 11 papillary and 4 follicular subtypes. The TSHsTg level was a mean of 980 ng/ml (range, 0.5–11,244). The Tg reduction rate after treatment was a mean of −7 % (range, −90 %–210 %). The mean RSI was 3.02 (range, 0.40–10.97). The RSI was positively correlated with the TSHsTg level (R2 = 0.3084, p = 0.001) and negatively correlated with the Tg reduction rate (R2 = 0.2993, p = 0.037). The regression equation to predict treatment response was as follows: Tg reduction rate = −14.581 × RSI + 51.183. Use of the radioiodine scan index derived from conventional WBS is feasible for reflecting the serum Tg level in patients with metastatic DTC, and it may be useful for predicting the biologic treatment response after radioiodine treatment.
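
    The index and the reported regression are simple enough to express directly. In the sketch below the regression coefficients are those quoted above, while the ROI uptake counts are made-up placeholders.

```python
# Sketch of the radioiodine scan index (RSI) and the reported
# regression for predicted treatment response.
def radioiodine_scan_index(tumor_uptake, brain_uptake):
    """RSI = ratio of tumor to brain uptake on the whole-body scan."""
    return tumor_uptake / brain_uptake

def predicted_tg_reduction(rsi):
    """Regression from the study: Tg reduction rate = -14.581*RSI + 51.183."""
    return -14.581 * rsi + 51.183

# Placeholder ROI counts, not patient data:
rsi = radioiodine_scan_index(tumor_uptake=6040.0, brain_uptake=2000.0)
print(f"RSI = {rsi:.2f}, predicted Tg reduction = {predicted_tg_reduction(rsi):.1f} %")
```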

  4. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    Energy Technology Data Exchange (ETDEWEB)

    Klasic, B. [Hospital for pulmonary diseases, Zagreb (Croatia); Knezevic, Z.; Vekic, B. [Rudjer Boskovic Institute, Zagreb (Croatia); Brnic, Z.; Novacic, K. [Merkur Univ. Hospital, Zagreb (Croatia)

    2006-07-01

    Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, by the use of thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After we statistically analyzed our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  5. Methodology for Determining Optimal Exposure Parameters of a Hyperspectral Scanning Sensor

    Science.gov (United States)

    Walczykowski, P.; Siok, K.; Jenerowicz, A.

    2016-06-01

    The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e. time, gain value) and flight parameters (i.e. altitude, velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: a hyperspectral camera MicroHyperspec A-series VNIR, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each acquired image, many exposure parameters were changed, such as the aperture value, exposure time and speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between chosen parameters were derived: the Ground Sampling Distance (GSD) and the distance between the sensor and the target; the speed of the camera and the distance between the sensor and the target; the exposure time and the gain value; and the Density Number and the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
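
    The first two dependencies listed can be sketched under a simple pinhole-camera assumption: GSD grows linearly with distance, and the platform speed is bounded so that forward motion smears less than one GSD per exposure. Both the pinhole model and the one-GSD smear criterion are assumptions here, and the pixel pitch, focal length and exposure time are placeholders.

```python
# Sketch of the sensor/flight dependencies listed above, under a
# pinhole-camera assumption with a rule-of-thumb smear criterion.
def ground_sampling_distance(pixel_pitch_m, focal_length_m, distance_m):
    """GSD on a plane perpendicular to the line of sight."""
    return pixel_pitch_m * distance_m / focal_length_m

def max_platform_speed(gsd_m, exposure_time_s):
    """Speed at which forward motion smears exactly one GSD per exposure."""
    return gsd_m / exposure_time_s

gsd = ground_sampling_distance(pixel_pitch_m=7.4e-6,   # placeholder values
                               focal_length_m=8.0e-3,
                               distance_m=50.0)
print(f"GSD at 50 m: {gsd*100:.1f} cm")
print(f"max speed at 5 ms exposure: {max_platform_speed(gsd, 5e-3):.1f} m/s")
```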

  6. Optimization of Parameters in 16-slice CT-‌‌scan Protocols for Reduction of the Absorbed Dose

    Directory of Open Access Journals (Sweden)

    Shahrokh Naseri

    2014-08-01

    Introduction: In computed tomography (CT) technology, an optimal radiation dose can be achieved by changing radiation parameters such as mA, pitch factor, rotation time and tube voltage (kVp) for diagnostic images. Materials and Methods: In this study, brain, abdomen and thorax scanning was performed using a Toshiba 16-slice scanner and standard AAPM and CTDI phantoms. The AAPM phantom was used for the measurement of image-related parameters and the CTDI phantom was utilized for the calculation of the absorbed dose to patients. Imaging parameters including mA (50-400 mA), pitch factor (1 and 1.5) and rotation time (0.5, 0.75, 1, 1.5 and 2 seconds) were considered as independent variables. The brain, abdomen and chest imaging was performed in multi-slice and spiral modes. Changes in image quality parameters, including contrast resolution (CR) and spatial resolution (SR), in each condition were measured and determined with MATLAB software. Results: After normalizing the data by plotting the full width at half maximum (FWHM) of the point spread function (PSF) in each condition, it was observed that image quality was not noticeably affected in any case. Therefore, in the brain scan, the lowest patient dose was obtained at 150 mA and a rotation time of 1.5 seconds. Based on the results of scanning the abdomen and chest, the lowest patient dose was obtained at 100 mA and pitch factors of 1 and 1.5. Conclusion: It was found that images with acceptable quality and reliable detection ability could be obtained using smaller doses of radiation compared to the protocols commonly used by operators.
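
    The first-order dose trade-off behind this kind of protocol optimization is that, for fixed kVp and collimation, CTDIvol scales roughly with the mAs per rotation divided by the pitch. A minimal sketch follows; the normalization constant is a placeholder, not a measured value for this scanner.

```python
# Sketch of the first-order dose scaling used in CT protocol tuning:
# CTDIvol ~ (mA x rotation time) / pitch, for fixed kVp/collimation.
CTDI_PER_100MAS = 8.0   # mGy per 100 mAs at pitch 1 (hypothetical constant)

def ctdi_vol(mA, rotation_time_s, pitch):
    mAs = mA * rotation_time_s
    return CTDI_PER_100MAS * (mAs / 100.0) / pitch

# Compare two abdomen settings from the ranges studied above:
print(f"400 mA, 1.0 s, pitch 1.0 -> {ctdi_vol(400, 1.0, 1.0):.1f} mGy")
print(f"100 mA, 1.0 s, pitch 1.5 -> {ctdi_vol(100, 1.0, 1.5):.1f} mGy")
```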

  7. Structure Defects Interrelation of Heat-resistant Nickel Alloy Obtained by Selective Laser Melting Method and Strategy and Scanning Parameters

    Directory of Open Access Journals (Sweden)

    O. A. Bytsenko

    2016-01-01

    The objective was to study the surface morphology and chemical composition of powders of different fractional composition of a heat-resistant Ni-Co-Cr-Al-Ti-W-Mo-Nb alloy, and to define the patterns of change in the quantitative parameters of the structure of samples obtained by the selective laser melting (SLM) method with different laser power, laser speed, and type of hatching (staggered, island diagonal, and solid diagonal). Optical, laser-confocal and scanning electron microscopy were used to study the surface morphology of the microstructure. The elemental and local phase composition was determined by X-ray and micro-X-ray spectrum analysis. The study of the initial powder morphology found that the powder granules have a generally spherical shape, and that the number of structural defects increases with increasing granule size. The microstructure of all granules is dendritic. The superficial defects take the form of satellites, shapeless shields, round gas pores, and pores located in the inter-dendritic regions resulting from the shrinkage process. The study of the microstructure of the samples showed that the dimensions of the structural components, pores, and micro-cracks depend on the parameters of the SLM process. With laser power increasing within 160-190 W, the fraction of pores and their average diameter increase. With a further increase in laser power, the volume fraction of pores is slightly reduced while their average size is essentially unchanged. It has been found that at constant laser power and variable scanning speed, the volume fraction of pores depends on the type of hatching. For staggered and solid diagonal hatching at a constant laser power of 180 W, with increasing scanning speed the volume fraction first falls and then grows again, while for island diagonal hatching it remains unchanged. When changing the laser power within a range from 160 to 170 W for samples with

  8. Merging Terrestrial Laser Scanning Technology with Photogrammetric and Total Station Data for the Determination of Avalanche Modeling Parameters

    Science.gov (United States)

    Prokop, Alexander; Schön, Peter; Singer, Florian; Pulfer, Gaëtan; Naaim, Mohamed; Thibert, Emmanuel

    2015-04-01

    Dynamic avalanche modeling requires as input the volumes and areas of the snow released, entrained and deposited, as well as the fracture heights. Determining these parameters requires high-resolution spatial snow surface data from before and after the avalanche. In snow and avalanche research, terrestrial laser scanners are used increasingly to efficiently and accurately map snow surfaces and depths over areas of several km². In practice, however, several problems may occur which must be recognized and accounted for during post-processing and interpretation, especially when surveying an artificially triggered avalanche at a test site, where time pressure due to operational constraints may also cause less than ideal circumstances and surveying setups. Thus, we combine terrestrial laser scanning with photogrammetry, total station measurements and field snow observations to document and accurately survey an artificially triggered avalanche at the Col du Lautaret test site (2058 m) in the French Alps. The ability of TLS to determine avalanche modeling input parameters efficiently and accurately is shown, and we demonstrate how merging TLS with the other methods facilitates and improves data post-processing and interpretation. Finally, we present for this avalanche the data required for the parameterization and validation of dynamic avalanche models and discuss, using the newest data, how the new generation of laser scanning devices (e.g. Riegl VZ6000) further improves such surveying campaigns.

  9. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  10. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  11. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-01-01

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
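
    The Markov Chain Monte Carlo scan mentioned above can be illustrated with a Metropolis-Hastings loop over a toy two-parameter space. The likelihood below is purely illustrative; in a real scan, each proposed point would be passed to spectrum and observable calculators and scored against the experimental and theoretical constraints.

```python
import numpy as np

# Sketch of a Metropolis-Hastings parameter scan of the kind used to
# explore flavour-violating parameter combinations. The toy Gaussian
# "constraints" below are placeholders for real observable calculations.
rng = np.random.default_rng(7)

def log_like(theta):
    d23, m_sq = theta                       # toy: one mixing element, one mass
    penalty = ((d23 - 0.05) / 0.02) ** 2 + ((m_sq - 1.5) / 0.3) ** 2
    return -0.5 * penalty                   # stand-in for combined constraints

theta = np.array([0.0, 1.0])
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.01, 0.05])      # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop                                 # accept
    chain.append(theta.copy())
chain = np.array(chain)
print("posterior means:", chain[5000:].mean(axis=0))   # discard burn-in
```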

  12. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  13. Benchmarking in ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of IT benchmarking in a wider context and the scope of benchmarking in general. I characterize benchmarking as a process and mention its basic rules and guidelines. Further, I define IT benchmarking domains and describe the possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities; in this paper, the cost benchmark is rather a notional first step towards benchmarking's contribution to the company. IT benchmark...

  14. DSP Platform Benchmarking : DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and provide suggestions for further improvement of the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based only on assembly coding. The...

  15. SU-E-T-778: Use of the 2D MatriXX Detector for Measuring Scanned Ion Beam Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Anvar, M Varasteh; Monaco, V; Sacchi, R; Guarachi, L Fanola; Cirio, R [Istituto Nazionale di Fisica Nucleare (INFN), Division of Turin, TO (Italy); University of Torino, Turin, TO (Italy); Giordanengo, S; Marchetto, F; Vignati, A [Istituto Nazionale di Fisica Nucleare (INFN), Division of Turin, TO (Italy); Donetti, M [Istituto Nazionale di Fisica Nucleare (INFN), Division of Turin, TO (Italy); Centro Nazionale di Adroterapia Oncologica (CNAO), Pavia, PV (Italy); Ciocca, M; Panizza, D [Centro Nazionale di Adroterapia Oncologica (CNAO), Pavia, PV (Italy)

    2015-06-15

    Purpose: The quality assurance (QA) procedure has to check the most relevant beam parameters to ensure the delivery of the correct dose to patients. Film dosimetry, which is commonly used for scanned ion beam QA, does not provide immediate results. The purpose of this work is to answer whether, for scanned ion beam therapy, film dosimetry can be replaced with the 2D MatriXX detector as a real-time tool. Methods: MatriXX, equipped with 32×32 parallel-plate ion chambers, is a commercial device intended for pre-treatment verification of conventional radiation therapy. The MatriXX, placed at the isocenter, and GAFCHROMIC films, positioned at the MatriXX entrance, were exposed to 131.44 MeV proton and 221.45 MeV/u carbon-ion beams. The OmniPro-I'mRT software, used for MatriXX data acquisition, makes it possible to acquire consecutive snapshots. Using NI LabVIEW, the data from the snapshots were logged as text files for further analysis. Radiochromic films were scanned with an EPSON scanner and analyzed using software programs developed in-house for comparative purposes. Results: The field dose uniformity, flatness, beam position and beam width were investigated. The field flatness for the region covering a 6×6 cm² square field was found to be better than 2%. The relative standard deviations, expected to be constant over 2×2, 4×4 and 6×6 pixels from the MatriXX measurement, give a uniformity of 1.5%, in good agreement with the film results. The beam center position is determined with a resolution better than 200 µm for the carbon beam and better than 100 µm for the proton beam. The FWHM determination for a beam wider than 10 mm is satisfactory, whilst for smaller beams the determination is uncertain. Conclusion: Precise beam position and fast 2D dose distribution can be determined in real-time using the MatriXX detector. The results show that MatriXX is quick and accurate enough to be used in charged-particle therapy QA.
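
    Beam center and FWHM extraction from a measured 1D profile (such as a MatriXX row or column) can be sketched by linearly interpolating the half-maximum crossings. The Gaussian test profile below is synthetic, and the approach is a generic one rather than the authors' exact analysis code.

```python
import numpy as np

# Sketch: beam-center and FWHM extraction from a 1D profile by linear
# interpolation of the two half-maximum crossings.
def center_and_fwhm(x, y):
    y = y - y.min()
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # rising edge: y increases from y[i0-1] to y[i0]
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # falling edge: y decreases, so order the bracketing pair ascending
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return 0.5 * (xl + xr), xr - xl

x = np.linspace(-40, 40, 33)                  # measurement positions [mm]
y = np.exp(-0.5 * ((x - 2.0) / 6.0) ** 2)     # synthetic beam: +2 mm, sigma 6 mm
c, w = center_and_fwhm(x, y)
print(f"center = {c:.2f} mm, FWHM = {w:.2f} mm (expected {2.355*6:.2f})")
```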

  16. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  17. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  18. Investigation of the influence of image reconstruction filter and scan parameters on operation of automatic tube current modulation systems for different CT scanners.

    Science.gov (United States)

    Sookpeng, Supawitoo; Martin, Colin J; Gentle, David J

    2015-03-01

    Variation between hospitals in the user-selected CT scanning parameters under automatic tube current modulation (ATCM) has a substantial influence on patient radiation doses and image quality. The aim of this study was to investigate the effect of changing the image reconstruction filter and scan parameter settings on tube current, dose and image quality for various CT scanners operating under ATCM. The scan parameters varied were pitch factor, rotation time, collimator configuration, kVp, image thickness and the image filter convolution (FC) used for reconstruction. The Toshiba scanner varies the tube current to achieve a set target noise. Changes in the FC setting and image thickness for the first reconstruction were the major factors affecting patient dose. A two-step change in FC from smoother to sharper filters doubles the dose, but this is counterbalanced by an improvement in spatial resolution. In contrast, Philips and Siemens scanners maintained tube current values similar to those for a reference image and patient, and the tube current varied only slightly with changes in individual CT scan parameters. The selection of a sharp filter increased the image noise, while use of iDose iterative reconstruction reduced the noise. Since the principles used by CT manufacturers for ATCM vary, it is important that the parameters which affect patient dose and image quality for each scanner are made clear to the operator to aid optimisation.

  19. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  1. Visual information transfer. Part 1: Assessment of specific information needs. Part 2: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1985-01-01

    The present study explored eye scan behavior as a function of level of subject training. Oculometric (eye scan) measures were recorded from each of ten subjects during training trials on a CRT based flight simulation task. The task developed for the study incorporated subtasks representative of specific activities performed by pilots, but which could be performed at asymptotic levels within relatively short periods of training. Changes in eye scan behavior were examined as initially untrained subjects developed skill in the task. Eye scan predictors of performance on the task were found. Examination of eye scan in proximity to selected task events revealed differences in the distribution of looks at the instruments as a function of level of training.

  2. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, as observed for uniform scanning proton beams, needs to be evaluated; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation
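
    A PDD-based range comparison of the kind used above can be sketched by extracting a range metric such as R80, the distal depth where the dose falls to 80% of the Bragg peak, from each code's depth-dose curve. R80 as the specific metric, and the idealized Gaussian peaks below, are assumptions for illustration only.

```python
import numpy as np

# Sketch: compare two codes' PDDs via the proton range, taken here as
# R80 (distal depth at 80% of the peak dose), with linear interpolation
# between the bracketing scoring voxels. The PDDs are synthetic.
def r80(depth_mm, dose):
    i_peak = int(np.argmax(dose))
    distal_d, distal_z = dose[i_peak:], depth_mm[i_peak:]
    target = 0.8 * dose[i_peak]
    j = np.where(distal_d <= target)[0][0]        # first point below 80%
    z0, z1 = distal_z[j - 1], distal_z[j]
    d0, d1 = distal_d[j - 1], distal_d[j]
    return z0 + (target - d0) * (z1 - z0) / (d1 - d0)

z = np.arange(0, 300, 0.5)                        # 0.5 mm scoring, as above
pdd_ref = np.exp(-0.5 * ((z - 170.0) / 3.0) ** 2)   # idealized Bragg peak
pdd_test = np.exp(-0.5 * ((z - 170.4) / 3.0) ** 2)  # slightly shifted range
print(f"range difference: {r80(z, pdd_test) - r80(z, pdd_ref):.2f} mm")
```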

  3. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput (a sketch follows below).
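
    A hybrid step/binary search of this kind can be sketched as follows: a coarse stepping phase grows the offered load until the forwarding rate falls short of it, then a binary phase shrinks the bracket. The measure_forwarding_rate function is a hypothetical stand-in for running one test trial against the gateway; the toy capacity model inside it is a placeholder.

```python
# Sketch of a hybrid step/binary search for equilibrium throughput:
# the largest offered load at which the gateway's forwarding rate
# still equals the offered load (no loss-induced shortfall).
def measure_forwarding_rate(offered_mbps):
    # Hypothetical stand-in for one test trial against the gateway.
    capacity = 480.0                      # toy gateway: peak at 480 Mbit/s
    return offered_mbps if offered_mbps <= capacity else capacity * 0.92

def equilibrium_throughput(step=100.0, tol=1.0):
    # Step phase: grow the load until forwarding falls below offered.
    lo, hi = 0.0, step
    while measure_forwarding_rate(hi) >= hi - 1e-9:
        lo, hi = hi, hi + step
    # Binary phase: shrink the bracket around the equilibrium point.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if measure_forwarding_rate(mid) >= mid - 1e-9:
            lo = mid
        else:
            hi = mid
    return lo

print(f"equilibrium throughput ~ {equilibrium_throughput():.0f} Mbit/s")
```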

  4. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries...... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes....

  5. Simultaneous multi-parameter observation of Harringtonine-treated HL-60 cells with both two-photon and confocal laser scanning microscopy

    Institute of Scientific and Technical Information of China (English)

    张春阳; 李艳平; 马辉; 李素文; 薛绍白; 陈瓞延

    2001-01-01

    Harringtonine (HT), a kind of anticancer drug isolated from the Chinese herb Cephalotaxus hainanensis Li, can induce apoptosis in promyelocytic leukemia HL-60 cells. With both two-photon laser scanning microscopy and confocal laser scanning microscopy, in combination with the fluorescent probes Hoechst 33342, tetramethylrhodamine ethyl ester (TMRE) and Fluo 3-AM, we simultaneously observed HT-induced changes in nuclear morphology, mitochondrial membrane potential and intracellular calcium concentration ([Ca2+]i) in HL-60 cells, and developed a real-time, sensitive and non-invasive method for simultaneous multi-parameter observation of drug-treated living cells at the single-cell level.

  6. Guide to the secondary education (VO) benchmark

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Guide to the secondary education (VO) benchmark, 25 November 2008, IPSE Studies, by J.L.T. Blank. A guide to reading the i...

  7. Benchmarking of the vocational education programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult...

  8. Benchmarking of municipal case processing

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  9. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  10. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented....
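
    As a rough illustration of the non-parametric (DEA) side of such a tool, the sketch below solves the standard input-oriented CCR efficiency program with scipy. It is a generic textbook formulation under our own variable layout, not the authors' implementation.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, o):
            """Input-oriented CCR efficiency of unit o; X: (inputs, units), Y: (outputs, units)."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(n + 1)
            c[0] = 1.0                                  # minimise theta
            A_in = np.hstack([-X[:, [o]], X])           # X @ lam <= theta * x_o
            A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y @ lam >= y_o
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.fun                              # efficiency score in (0, 1]

        X = np.array([[2.0, 3.0, 5.0, 4.0]])            # one input, four units
        Y = np.array([[1.0, 2.0, 3.0, 2.0]])            # one output
        print([round(dea_ccr_efficiency(X, Y, o), 3) for o in range(4)])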

  11. Development of a two-parameter slit-scan flow cytometer for screening of normal and aberrant chromosomes: application to a karyotype of Sus scrofa domestica (pig)

    Science.gov (United States)

    Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph

    1992-07-01

    Laser fluorescence activated slit-scan flow cytometry offers an approach to fast, quantitative characterization of chromosomes based on morphological features. It can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. The fluorescence intensity along the chromosome axis can be registered in a time-resolved way, simultaneously for two parameters, when the chromosome passes perpendicularly through a narrowly focused laser beam combined with a detection slit in the image plane. So far, automated data analysis has been performed off-line on a PC. In its final configuration, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows electro-acoustical sorting of chromosomes of interest. Interest is high in agriculture in studying chromosome aberrations that influence litter size in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize pig chromosomes; we present results for chromosome 1 and a translocation chromosome 6/15.

  12. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
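
    The manufactured-solutions approach recommended above can be shown in a few lines: choose an exact solution, derive the forcing term analytically, then confirm that the discrete solution converges at the scheme's theoretical order. This is a generic sketch, not one of the paper's benchmarks.

        import numpy as np

        # Manufactured solution u(x) = sin(pi x) for -u'' = f on (0,1), u(0)=u(1)=0,
        # so f(x) = pi^2 sin(pi x). A second-order central scheme should show order ~2.
        def max_error(n):
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)
            f = np.pi**2 * np.sin(np.pi * x)
            A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1)) / h**2
            u = np.linalg.solve(A, f)
            return np.max(np.abs(u - np.sin(np.pi * x)))

        e1, e2 = max_error(50), max_error(100)
        print("observed order ~", np.log2(e1 / e2))  # close to 2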

  13. SPIDER - VII. The Central Dark Matter Content of Bright Early-Type Galaxies: Benchmark Correlations with Mass, Structural Parameters and Environment

    CERN Document Server

    Tortora, C; Napolitano, N R; de Carvalho, R R; Romanowsky, A J

    2012-01-01

    We analyze the central dark-matter (DM) content of $\sim 4,500$ massive ($M_\star \gtrsim 10^{10}\, M_\odot$), low-redshift ($z<0.1$), early-type galaxies (ETGs), with high-quality $ugrizYJHK$ photometry and optical spectroscopy from SDSS and UKIDSS. We estimate the "central" fraction of DM within the $K$-band effective radius, $R_{\rm e}$. The main results of the present work are the following: (1) DM fractions increase systematically with both structural parameters (i.e. $R_{\rm e}$ and Sérsic index, $n$) and mass proxies (central velocity dispersion, stellar and dynamical mass), as in previous studies, and decrease with central stellar density. (2) All correlations involving DM fractions are caused by two fundamental ones with galaxy effective radius and central velocity dispersion. These correlations are independent of each other, so that ETGs populate a central-DM plane (DMP), i.e. a correlation among fraction of total-to-stellar mass, effective radius, and velocity dispersion, whose scatter along the total-to-stell...

  14. SPIDER - VI. The central dark matter content of luminous early-type galaxies: Benchmark correlations with mass, structural parameters and environment

    Science.gov (United States)

    Tortora, C.; La Barbera, F.; Napolitano, N. R.; de Carvalho, R. R.; Romanowsky, A. J.

    2012-09-01

    We analyse the central dark-matter (DM) content of ˜4500 massive (M★ ≳ 10¹⁰ M⊙), low-redshift (z < 0.1), early-type galaxies (ETGs), with high-quality ugrizYJHK photometry and optical spectroscopy from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey (UKIDSS). We estimate the 'central' fraction of DM within the K-band effective radius, Reff, using spherically symmetric isotropic galaxy models. We discuss the role of systematics in stellar mass estimates, dynamical modelling, and velocity dispersion anisotropy. The main results of the present work are the following: (1) DM fractions increase systematically with both structural parameters (i.e. Reff and Sérsic index, n) and mass proxies (central velocity dispersion, stellar and dynamical mass), as in previous studies, and decrease with central stellar density. (2) All correlations involving DM fractions are caused by two fundamental ones with galaxy effective radius and central velocity dispersion. These correlations are independent of each other, so that ETGs populate a central-DM plane (DMP), i.e. a correlation among fraction of total-to-stellar mass, effective radius, and velocity dispersion, whose scatter along the total-to-stellar mass axis amounts to ˜0.15 dex. (3) In general, under the assumption of an isothermal or a constant M/L profile for the total mass distribution, a Chabrier initial mass function (IMF) is favoured with respect to a bottom-heavier Salpeter IMF, as the latter produces negative (i.e. unphysical) DM fractions for more than 50 per cent of the galaxies in our sample. For a Chabrier IMF, the DM estimates agree with Λ cold dark matter toy-galaxy models based on contracted DM-halo density profiles. We also find agreement with predictions from hydrodynamical simulations. (4) The central DM content of ETGs does not depend significantly on the environment where galaxies reside, with group and field ETGs having similar DM trends.
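
    For orientation, the "central DM fraction" in such studies reduces to comparing a dynamical mass inside Reff with the stellar mass inside the same radius. The sketch below uses the simplest scalar estimator M_dyn = K sigma^2 Reff / G with an assumed virial coefficient K; the paper's spherically symmetric isotropic models are more elaborate.

        G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

        def central_dm_fraction(sigma_kms, Re_kpc, Mstar_in_Re, K=2.5):
            """f_DM(<Re) = 1 - M*(<Re) / M_dyn(<Re); K is an assumed virial coefficient."""
            M_dyn = K * sigma_kms**2 * Re_kpc / G
            return 1.0 - Mstar_in_Re / M_dyn

        # e.g. sigma = 200 km/s, Re = 5 kpc, half of M* = 5e10 M_sun inside Re
        print(central_dm_fraction(200.0, 5.0, 0.5 * 5e10))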

  15. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  16. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
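
    The first-tier screening logic described above reduces to a simple comparison, as in the sketch below; the benchmark values used here are placeholders, not those tabulated in the report.

        # Tier-1 screening: retain a chemical as a COPC when the measured
        # concentration exceeds its NOAEL-based benchmark (placeholder values, mg/kg).
        benchmarks = {"cadmium": 0.77, "zinc": 8.8, "ddt": 0.0014}
        measured = {"cadmium": 0.12, "zinc": 14.2, "ddt": 0.0009}

        copcs = [chem for chem, conc in measured.items()
                 if conc > benchmarks.get(chem, float("inf"))]
        print("Retained for baseline risk assessment:", copcs)  # ['zinc']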

  17. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes

    Directory of Open Access Journals (Sweden)

    Mirian Ara

    2015-01-01

    Objective. To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. Methods. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. Results. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while the nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICCs in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. Conclusions. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.
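
    For illustration, the two dispersion statistics can be computed from repeated scans as below. The abstract does not give the exact TRT definition, so the common repeatability coefficient 2.77 x within-subject SD is assumed here.

        import numpy as np

        def repeatability_stats(scans):
            """scans: (n_subjects, n_repeats) array of one RNFL parameter.
            Returns COV (%) and TRT, here assumed to be 2.77 * within-subject SD."""
            sw = np.sqrt(scans.var(axis=1, ddof=1).mean())  # pooled within-subject SD
            cov = 100.0 * sw / scans.mean()
            trt = 2.77 * sw                                  # ~1.96 * sqrt(2) * Sw
            return cov, trt

        rng = np.random.default_rng(0)
        data = 55.0 + 2.0 * rng.standard_normal((82, 3))     # 82 subjects, 3 scans each
        print(repeatability_stats(data))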

  18. Effect of duration of scan acquisition on CT perfusion parameter values in primary and metastatic tumors in the lung

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Chaan S., E-mail: cng@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chandler, Adam G., E-mail: adam.chandler@mdanderson.org [Departments of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); CT research, GE Healthcare, Waukesha, Wisconsin (United States); Wei, Wei, E-mail: wwei@mdanderson.org [Departments of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Anderson, Ella F., E-mail: eanderson@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herron, Delise H., E-mail: dherron@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Kurzrock, Razelle, E-mail: rkurzrock@ucsd.edu [Departments of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Charnsangavej, Chusilp, E-mail: ccharn@mdanderson.org [Departments of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-10-01

    Objectives: To assess the effect of acquisition duration (T{sub acq}) and pre-enhancement set points (T{sub 1}) on computed tomography perfusion (CTp) parameter values in primary and metastatic tumors in the lung. Materials and methods: 24 lung CTp datasets (10 primary; 14 metastatic), acquired using a two-phase protocol spanning 125 s in 12 patients with lung tumors, were analyzed by deconvolution modeling to yield tumor blood flow (BF), blood volume (BV), mean transit time (MTT), and permeability (PS) values. CTp analyses were undertaken for the reference dataset (i.e., T{sub 1} = t{sub 0}) with T{sub acq} varying from 12 to 125 s. This was repeated for shifts in T{sub 1} (±0.5 s, ±1.0 s, ±2.0 s relative to the reference at t{sub 0}). The resultant CTp values were plotted against T{sub acq}; values at 30 s, 50 s, 65 s and 125 s were compared using a linear mixed model. Results: All CTp parameter values were noticeably influenced by T{sub acq}, with generally less marked changes beyond 50 s, and with no difference in behavior between primary and secondary tumors. Apart from BV, which attained a plateau at approximately 50 s, the other three CTp parameters did not reach steady-state values within the available 125 s of data, with values at 30 s, 50 s and 65 s significantly different from those at 125 s (p < 0.004). Shifts in T{sub 1} also affected the CTp parameter values, with positive shifts having greater impact than negative shifts. Conclusion: CTp parameter values derived from deconvolution modeling can be markedly affected by T{sub acq} and pre-enhancement set points. A 50 s acquisition may be adequate for BV, but longer than 125 s is probably required for reliable characterization of the other three CTp parameters.

  19. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  20. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  1. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  2. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measure the City's debt ratio and bond ratings....

  3. Limits on the Superconducting Order Parameter in NdFeAsO_{1-x}F_y from Scanning SQUID Microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Hicks, Clifford W.; Lippman, Thomas M.; /Stanford U., Geballe Lab.; Huber, Martin E.; /Colorado U.; Ren, Zhi-An; Yang, Jie; Zhao, Zhong-Xian; /Beijing, Inst. Phys.; Moler, Kathryn A.; /Stanford U., Geballe Lab.

    2009-01-08

    Identifying the symmetry of the superconducting order parameter in the recently-discovered ferrooxypnictide family of superconductors, RFeAsO{sub 1-x}F{sub y}, where R is a rare earth, is a high priority. Many of the proposed order parameters have internal {pi} phase shifts, like the d-wave order found in the cuprates, which would result in direction-dependent phase shifts in tunneling. In dense polycrystalline samples, these phase shifts in turn would result in spontaneous orbital currents and magnetization in the superconducting state. We perform scanning SQUID microscopy on a dense polycrystalline sample of NdFeAsO{sub 0.94}F{sub 0.06} with T{sub c} = 48K and find no such spontaneous currents, ruling out many of the proposed order parameters.

  4. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was a complete process: not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism.

  5. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  6. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  7. Scanning drift tube measurements of electron transport parameters in different gases: argon, synthetic air, methane and deuterium

    Science.gov (United States)

    Korolov, I.; Vass, M.; Donkó, Z.

    2016-10-01

    Measurements of transport coefficients of electrons in a scanning drift tube apparatus are reported for different gases: argon, synthetic air, methane and deuterium. The experimental system allows the spatio-temporal development of the electron swarms (‘swarm maps’) to be recorded and this information, when compared with the profiles predicted by theory, makes it possible to determine the ‘time-of-flight’ transport coefficients: the bulk drift velocity, the longitudinal diffusion coefficient and the effective ionization coefficient, in a well-defined way. From these data, the effective Townsend ionization coefficient is determined as well. The swarm maps provide, additionally, direct, unambiguous information about the hydrodynamic/non-hydrodynamic regimes of the swarms, aiding the selection of the proper regions applicable for the determination of the transport coefficients.
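
    In the hydrodynamic regime, the time-of-flight coefficients follow from the first two spatial moments of the swarm map: the bulk drift velocity is W = d<x>/dt and the longitudinal diffusion coefficient is D_L = (1/2) d var(x)/dt. Below is a minimal sketch on synthetic moments, not the instrument's actual analysis chain.

        import numpy as np

        def time_of_flight_coefficients(x_mean, x_var, t):
            """Linear fits of the swarm's mean position and variance versus time."""
            W = np.polyfit(t, x_mean, 1)[0]        # bulk drift velocity
            DL = 0.5 * np.polyfit(t, x_var, 1)[0]  # longitudinal diffusion coefficient
            return W, DL

        t = np.linspace(1e-7, 1e-6, 20)  # synthetic swarm: W = 1e5 m/s, D_L = 0.05 m^2/s
        print(time_of_flight_coefficients(1e5 * t, 2.0 * 0.05 * t, t))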

  8. Impact of scanning parameters and breathing patterns on image quality and accuracy of tumor motion reconstruction in 4D CBCT: a phantom study.

    Science.gov (United States)

    Lee, Soyoung; Yan, Guanghua; Lu, Bo; Kahler, Darren; Li, Jonathan G; Samant, Sanjiv S

    2015-11-08

    Four-dimensional cone-beam CT (4D CBCT) substantially reduces respiration-induced motion blurring artifacts present in three-dimensional (3D) CBCT. However, the image quality of 4D CBCT is significantly degraded, which may affect its accuracy in localizing a mobile tumor for high-precision, image-guided radiation therapy (IGRT). The purpose of this study was to investigate the impact of scanning parameters (hereinafter collectively referred to as scanning sequence) and breathing patterns on the image quality and the accuracy of computed tumor trajectory for a commercial 4D CBCT system, in preparation for its clinical implementation. We simulated a series of periodic and aperiodic sinusoidal breathing patterns with a respiratory motion phantom. The aperiodic pattern was created by varying the period or amplitude of individual sinusoidal breathing cycles. 4D CBCT scans of the phantom were acquired with a manufacturer-supplied scanning sequence (4D-S-slow) and two in-house modified scanning sequences (4D-M-slow and 4D-M-fast). While 4D-S-slow used a small field of view (FOV), partial rotation (200°), and no imaging filter, 4D-M-slow and 4D-M-fast used a medium FOV, full rotation, and the F1 filter. The scanning speed was doubled in 4D-M-fast (100°/min gantry rotation). The image quality of the 4D CBCT scans was evaluated using the contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and motion blurring ratio (MBR). The trajectory of the moving target was reconstructed by registering each phase of the 4D CBCT with a reference CT. Root-mean-squared-error (RMSE) analysis was used to quantify its accuracy. A significant decrease in CNR and SNR from 3D CBCT to 4D CBCT was observed. The 4D-S-slow and 4D-M-fast scans had comparable image quality, while the 4D-M-slow scans performed better due to the doubled number of projections. Both CNR and SNR decreased slightly as the breathing period increased, while no dependence on the amplitude was observed. The difference of both CNR and SNR
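
    A minimal sketch of the two ROI-based image quality metrics, using their common definitions; the study's exact ROI placement and its motion blurring ratio (MBR) definition are not reproduced here.

        import numpy as np

        def cnr_snr(img, roi_target, roi_bg):
            """CNR = |mean_t - mean_b| / sd_b and SNR = mean_t / sd_b from two ROIs."""
            t, b = img[roi_target], img[roi_bg]
            return abs(t.mean() - b.mean()) / b.std(ddof=1), t.mean() / b.std(ddof=1)

        img = np.random.default_rng(1).normal(100.0, 5.0, (256, 256))
        img[100:140, 100:140] += 50.0  # synthetic high-contrast insert
        print(cnr_snr(img, (slice(100, 140), slice(100, 140)),
                           (slice(10, 50), slice(10, 50))))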

  9. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with a simple system such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with the broad scanning proton beam. The influence of the customizing parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
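
    For illustration, the R90 range used in such comparisons can be extracted from a percentage depth dose curve by interpolating on the distal falloff. This is a generic sketch on a toy PDD, not the commissioning analysis.

        import numpy as np

        def distal_r90(depth_mm, dose):
            """Depth on the distal falloff where dose drops to 90% of the peak."""
            dose = np.asarray(dose, float) / np.max(dose)
            i_peak = int(np.argmax(dose))
            z, d = depth_mm[i_peak:], dose[i_peak:]
            keep = d >= 0.5                        # stay on the distal falloff
            z, d = z[keep], d[keep]
            return float(np.interp(0.9, d[::-1], z[::-1]))

        z = np.linspace(0.0, 300.0, 601)
        pdd = 0.4 * z / 300.0 + np.exp(-0.5 * ((z - 269.0) / 6.0) ** 2)  # toy curve
        print(distal_r90(z, pdd))  # ~272 mm for this toy curve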

  10. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  11. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  12. Structure Parameter Optimization of Line Structured Light Scanning Probe

    Institute of Scientific and Technical Information of China (English)

    张海燕; 于连栋; 郑文兴; 董钊

    2014-01-01

    The line structured light scanning probe is widely used in reverse engineering; its measurement precision has an important influence on the reliability of 3D reconstruction. This paper establishes a mathematical model of the line structured light scanning probe based on the optical triangulation method. The transformation between two-dimensional pixel coordinates and three-dimensional world coordinates is achieved by coordinate transformation and the perspective imaging principle. The influence of the probe's structure parameters on measurement accuracy is analyzed and a measurement error model is derived. The effect of diffuse light intensity variation on the structural parameters is analyzed, boundary constraints are determined from actual design needs, and an optimization of the structural parameters is realized in simulation, with an error below 0.02 mm.
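
    The pixel-to-world conversion described above can be sketched as a ray-plane intersection under a pinhole camera model: back-project the laser-line pixel into a viewing ray and intersect it with the calibrated laser plane. All calibration numbers below are hypothetical.

        import numpy as np

        def pixel_to_world(u, v, fx, fy, cx, cy, plane_n, plane_d):
            """Intersect the viewing ray of pixel (u, v) with the laser plane n.X = d."""
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # camera-frame direction
            t = plane_d / np.dot(plane_n, ray)                   # ray: X = t * ray
            return t * ray

        theta = np.radians(30.0)                                  # hypothetical plane tilt
        n = np.array([np.sin(theta), 0.0, np.cos(theta)])
        print(pixel_to_world(700, 512, fx=1200.0, fy=1200.0, cx=640.0, cy=512.0,
                             plane_n=n, plane_d=500.0))           # point in mm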

  13. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  14. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  15. FGK Benchmark Stars A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by analyzing homogeneously with up to seven different methods a spectral library of benchmark stars. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  16. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  17. Liver Steatosis Assessed by Controlled Attenuation Parameter (CAP) Measured with the XL Probe of the FibroScan: A Pilot Study Assessing Diagnostic Accuracy.

    Science.gov (United States)

    Sasso, Magali; Audière, Stéphane; Kemgang, Astrid; Gaouar, Farid; Corpechot, Christophe; Chazouillères, Olivier; Fournier, Céline; Golsztejn, Olivier; Prince, Stéphane; Menu, Yves; Sandrin, Laurent; Miette, Véronique

    2016-01-01

    To assess liver steatosis, the controlled attenuation parameter (CAP, an estimate of ultrasound attenuation at ∼3.5 MHz) is available with the M probe of the FibroScan. We report on the adaptation of the CAP for the FibroScan XL probe (center frequency 2.5 MHz) without modifying the range of values (100-400 dB/m). CAP validation was successfully performed on Field II simulations and on tissue-mimicking phantoms. In vivo performance was assessed in a cohort of 59 patients spanning the range of steatosis. In vivo reproducibility was good and similar with both probes. The area under the receiver operating characteristic curve was equal to 0.83/0.84 and 0.92/0.91 for the M/XL probes to detect >2% and >16% liver fat, respectively, as assessed by magnetic resonance imaging. Patients can now be assessed simultaneously for steatosis and fibrosis using the FibroScan, regardless of their morphology.
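
    Areas under the ROC curve like those reported above can be computed directly from the Mann-Whitney statistic; a minimal sketch with made-up CAP values follows.

        import numpy as np

        def auroc(scores_pos, scores_neg):
            """P(random steatotic CAP > random non-steatotic CAP), ties counted 0.5."""
            pos, neg = np.asarray(scores_pos), np.asarray(scores_neg)
            greater = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (greater + 0.5 * ties) / (pos.size * neg.size)

        # hypothetical CAP values (dB/m) with / without >16% liver fat on MRI
        print(auroc([320, 295, 350, 310], [240, 270, 300, 255, 265]))  # 0.95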

  18. Remote sensing of ice crystal asymmetry parameter using multi-directional polarization measurements – Part 2: Application to the Research Scanning Polarimeter

    Directory of Open Access Journals (Sweden)

    B. van Diedenhoven

    2013-03-01

    A new method to retrieve ice cloud asymmetry parameters from multi-directional polarized reflectance measurements is applied to measurements of the airborne Research Scanning Polarimeter (RSP) obtained during the CRYSTAL-FACE campaign in 2002. The method assumes individual hexagonal ice columns and plates serve as proxies for more complex shapes and aggregates. The closest fit is searched in a look-up table of simulated polarized reflectances computed for cloud layers that contain individual, randomly oriented hexagonal columns and plates with a virtually continuous selection of aspect ratios and distortion. The asymmetry parameter, aspect ratio and distortion of the hexagonal particle that leads to the best fit with the measurements are considered the retrieved values. Two cases of thick convective clouds and two cases of thinner anvil cloud layers are analyzed. Median asymmetry parameters retrieved by the RSP range from 0.76 to 0.78, and are generally smaller than those currently assumed in most climate models and satellite retrievals. In all cases the measurements indicate roughened or distorted ice crystals, which is consistent with previous findings. Retrieved aspect ratios in three of the cases range from 0.9 to 1.6, indicating compact particles dominate the cloud-top shortwave radiation. Retrievals for the remaining case indicate plate-like ice crystals with aspect ratios around 0.3. The RSP retrievals are qualitatively consistent with the CPI images obtained in the same cloud layers. Retrieved asymmetry parameters are compared to those determined in situ by the Cloud Integrating Nephelometer (CIN). For two cases, the median values of asymmetry parameter retrieved by CIN and RSP agree within 0.01, while for the two other cases RSP asymmetry parameters are about 0.03–0.05 greater than those obtained by the CIN. Part of this bias might be explained by vertical variation of the asymmetry parameter or ice shattering on the CIN probe, or both.
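
    The look-up-table step at the heart of the method reduces to a nearest-fit search, as sketched below with synthetic stand-ins; the real table holds radiative-transfer simulations over aspect ratio and distortion.

        import numpy as np

        def retrieve_from_lut(measured, lut_reflectances, lut_params):
            """Return the (g, aspect ratio, distortion) of the least-squares best fit."""
            cost = np.sum((lut_reflectances - measured) ** 2, axis=1)
            return lut_params[np.argmin(cost)]

        rng = np.random.default_rng(2)
        lut = rng.random((1000, 30))    # 1000 candidate crystals x 30 viewing angles
        params = rng.random((1000, 3))  # (asymmetry g, aspect ratio, distortion)
        meas = lut[123] + 0.01 * rng.standard_normal(30)
        print(retrieve_from_lut(meas, lut, params), "vs true", params[123])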

  19. Motion Interplay as a Function of Patient Parameters and Spot Size in Spot Scanning Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, Clemens, E-mail: Grassberger.Clemens@mgh.harvard.edu [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Dowdell, Stephen [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Lomax, Antony [Center for Proton Radiotherapy, Paul Scherrer Institute, Villigen (Switzerland); Sharp, Greg; Shackleford, James; Choi, Noah; Willers, Henning; Paganetti, Harald [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States)

    2013-06-01

    Purpose: To quantify the impact of respiratory motion on the treatment of lung tumors with spot scanning proton therapy. Methods and Materials: Four-dimensional Monte Carlo simulations were used to assess the interplay effect, which results from relative motion of the tumor and the proton beam, on the dose distribution in the patient. Ten patients with varying tumor sizes (2.6-82.3 cc) and motion amplitudes (3-30 mm) were included in the study. We investigated the impact of the spot size, which varies between proton facilities, and studied single fractions and conventionally fractionated treatments. The following metrics were used in the analysis: minimum/maximum/mean dose, target dose homogeneity, and 2-year local control rate (2y-LC). Results: Respiratory motion reduces the target dose homogeneity, with the largest effects observed for the highest motion amplitudes. Smaller spot sizes (σ ≈ 3 mm) are inherently more sensitive to motion, decreasing target dose homogeneity on average by a factor 2.8 compared with a larger spot size (σ ≈ 13 mm). Using a smaller spot size to treat a tumor with 30-mm motion amplitude reduces the minimum dose to 44.7% of the prescribed dose, decreasing modeled 2y-LC from 87.0% to 2.7%, assuming a single fraction. Conventional fractionation partly mitigates this reduction, yielding a 2y-LC of 71.6%. For the large spot size, conventional fractionation increases target dose homogeneity and prevents a deterioration of 2y-LC for all patients. No correlation with tumor volume is observed. The effect on the normal lung dose distribution is minimal: observed changes in mean lung dose and lung V{sub 20} are <0.6 Gy(RBE) and <1.7%, respectively. Conclusions: For the patients in this study, 2y-LC could be preserved in the presence of interplay using a large spot size and conventional fractionation. For treatments using smaller spot sizes and/or in the delivery of single fractions, interplay effects can lead to significant deterioration of

  20. Effect of CT scanning parameters on CT number

    Institute of Scientific and Technical Information of China (English)

    彭文献; 彭天舟; 叶小琴; 付益谋; 潘慧平; 高源统; 金光波

    2010-01-01

    Objective: To study the effects of scan protocols on the CT number of a given human tissue. Methods: A standard phantom was scanned repeatedly on the same CT scanner, changing only one parameter at a time (X-ray tube voltage, mAs, or reconstruction kernel) while keeping the other parameters unchanged. The CT numbers of different materials in the phantom were measured and analyzed. Results: The CT numbers changed remarkably with tube voltage, with different correlations for different materials: the CT numbers of polyethylene, polycarbonate (Lexan) and Perspex correlated positively with tube voltage, whereas that of Teflon correlated negatively. The mAs and reconstruction kernel had no statistically significant effect on CT number. Conclusions: The CT number of a given tissue changes with the X-ray tube voltage, so the setting of the scan parameters should be taken into account in image diagnosis and radiotherapy.
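
    The reported voltage-CT number relationships amount to correlations of ROI means against tube voltage; a sketch with illustrative (made-up) numbers follows.

        import numpy as np

        def pearson_r(x, y):
            x, y = x - x.mean(), y - y.mean()
            return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

        kv = np.array([80.0, 100.0, 120.0, 140.0])
        hu_perspex = np.array([110.0, 118.0, 124.0, 129.0])  # illustrative positive trend
        hu_teflon = np.array([1010.0, 980.0, 955.0, 940.0])  # illustrative negative trend
        print(pearson_r(kv, hu_perspex), pearson_r(kv, hu_teflon))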

  1. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  2. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  3. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  4. New LHC Benchmarks for the CP-conserving Two-Higgs-Doublet Model

    CERN Document Server

    Haber, Howard E

    2015-01-01

    We introduce a strategy to study the parameter space of the general, CP-conserving, two-Higgs-doublet Model (2HDM) with a softly broken Z_2-symmetry by means of a new "hybrid" basis. In this basis the input parameters are the measured values of the mass of the observed Standard Model (SM)-like Higgs boson and its coupling strength to vector boson pairs, the mass of the second CP-even Higgs boson, the ratio of neutral Higgs vacuum expectation values, and three additional dimensionless parameters. Using the hybrid basis, we present numerical scans of the 2HDM parameter space where we survey available parameter regions and analyze model constraints. From these results, we define a number of benchmark scenarios that capture different aspects of non-standard Higgs phenomenology that are of interest for future LHC Higgs searches.
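
    The kind of scan described above can be caricatured in a few lines: sample hybrid-basis points, apply a constraint filter, and keep the survivors. The filter below is a crude placeholder, not the actual 2HDM theoretical and experimental constraints.

        import numpy as np

        rng = np.random.default_rng(3)

        def passes_constraints(mh, cos_bma, mH, tanb):
            # placeholder cuts standing in for the real 2HDM constraints
            return (mH > mh) and (abs(cos_bma) < 0.4) and (0.5 < tanb < 50.0)

        n = 10000
        points = zip(np.full(n, 125.0),              # m_h fixed at 125 GeV
                     rng.uniform(-1.0, 1.0, n),      # proxy for the VV coupling input
                     rng.uniform(130.0, 1000.0, n),  # m_H
                     rng.uniform(0.3, 60.0, n))      # tan(beta)
        survivors = [p for p in points if passes_constraints(*p)]
        print(len(survivors), "of", n, "points survive the placeholder cuts")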

  5. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance can be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  6. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  7. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi Even a small percentage of random

  8. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  9. Optimization of scanning parameters in children CT examination

    Institute of Scientific and Technical Information of China (English)

    李大伟; 周献锋; 杨春勇; 王进; 涂彧; 余宁乐

    2014-01-01

    Objective: To learn the relationship between scanning conditions and radiation dose, with a view to reducing the radiation dose to children from CT scanning through proper adjustment of tube current-time product (mAs) and scan length. Methods: The main scanning parameters used for head, chest and abdomen multi-detector CT examinations of paediatric patients (<1 year old, 1-5 years old, 6-10 years old, 11-15 years old) at seven hospitals in Jiangsu province were compared. The CT dose index (CTDI) and dose-length product (DLP) were obtained using a standard paediatric dose phantom (diameter 16 cm) under the same scanning conditions. Effective doses (E) for different parts of the body were estimated after modification by an empirical weighting factor. Statistical analyses of mAs, scan length and DLP were performed with SPSS 16.0. The differences in radiation dose due to the choice of scanning conditions were compared between two typical hospitals. Results: The mean effective doses to paediatric patients from head, chest and abdomen CT scanning were 2.46, 5.69 and 11.86 mSv, respectively. DLP correlated positively with mAs and scan length (head, chest and abdomen examinations: r = 0.81, 0.81, 0.92; P < 0.05). Due to the higher mAs used, the effective doses from chest and abdomen CT examinations in all age groups were higher than those in the German study by Galanski et al. Because of the larger scan lengths used in abdominal examinations across all age groups, effective doses at one hospital were the highest. Conclusions: Reasonably reducing the scan length and mAs during CT scanning can lower children's radiation risk from CT without affecting clinical diagnosis.
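
    The dose estimate described above (an empirical weighting of DLP) is commonly written E = k x DLP with age- and region-specific conversion coefficients; the sketch below uses hypothetical k values, not the study's.

        # Effective dose from dose-length product, E = k * DLP (hypothetical k values).
        k_mSv_per_mGycm = {
            ("head", "1-5y"): 0.004,
            ("chest", "1-5y"): 0.026,
            ("abdomen", "1-5y"): 0.020,
        }

        def effective_dose(region, age_group, dlp_mGycm):
            return k_mSv_per_mGycm[(region, age_group)] * dlp_mGycm

        print(effective_dose("chest", "1-5y", 220.0), "mSv")  # 5.72 mSv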

  10. Effect of HBV on controlled attenuation parameters measured using FibroScan(R)

    Institute of Scientific and Technical Information of China (English)

    朱梦飞; 刘静; 王洁; 高岭; 陈公英; 施军平; 娄国强

    2014-01-01

    Objective: To evaluate whether HBV infection affects controlled attenuation parameter (CAP) measurement of fatty liver using FibroScan(R). Methods: Patients with non-alcoholic fatty liver disease (NAFLD) only, chronic hepatitis B (CHB) only, and CHB combined with NAFLD (CHB + NAFLD) underwent CAP measurement with the FibroScan-502. Results: 579 patients with CHB, 624 patients with NAFLD and 124 patients with CHB + NAFLD were recruited. CAP correlated positively with BMI (r = 0.46, P = 0.004). The CAP value in CHB was 218.90 ± 56.40 dB/m, significantly lower than that in NAFLD (290.85 ± 61.46 dB/m, P = 0.00) and in CHB + NAFLD (284.93 ± 64.70 dB/m, P = 0.00). There was no difference between the CAP values of CHB + NAFLD and NAFLD (P = 0.55). There were no differences between the CAP values of the high and low HBV DNA load groups, the high and low HBsAg load groups, or the HBeAg-positive and HBeAg-negative groups (P = 0.73, 0.93, 0.55). Conclusion: HBV infection does not affect CAP values measured with FibroScan(R).

  11. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range, compared to the FLUKA code and to experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  12. A new numerical benchmark of a freshwater lens

    Science.gov (United States)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
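
    As a quick plausibility check on such lens benchmarks, the steady-state interface depth below sea level can be estimated with the Ghyben-Herzberg relation; the benchmark itself resolves the full variable-density problem.

        # Ghyben-Herzberg estimate: z = h * rho_f / (rho_s - rho_f)
        rho_f, rho_s = 1000.0, 1025.0  # kg/m^3, fresh and salt water
        h = 0.05                       # water-table elevation above sea level (m)
        z = h * rho_f / (rho_s - rho_f)
        print(z)                       # 2.0 m, the classic ~40:1 ratio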

  13. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    Science.gov (United States)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  14. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In the article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, taking four different applications of benchmarking as the starting point. Regulation of utility companies is discussed, after which...

  15. Multislice helical CT (MSCT) for mid-facial trauma: optimization of parameters for scanning and reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Dammert, S.; Funke, M.; Obernauer, S.; Grabbe, E. [Abt. Roentgendiagnostik I, Georg-August-Univ. Goettingen (Germany); Merten, H.A. [Abt. fuer Mund-, Kiefer- und Gesichtschirurgie, Georg-August-Univ. Goettingen (Germany)

    2002-07-01

    Purpose: To determine the optimal scan parameters in multislice helical CT (MSCT) of the facial bone complex for both axial scanning and multiplanar reconstructions. Material and Methods: An anthropomorphic skull phantom was examined with an MSCT scanner. Axial scans were performed with increasing collimation (4 x 1.25 - 4 x 2.5 mm), tube current (20 - 200 mA) and table speed (3.75 and 7.5 mm/rotation). Multiplanar reconstructions in coronal and parasagittal planes with different reconstruction increments and slice thicknesses were evaluated in terms of image noise, contour artifacts and visualization of anatomical structures. Results: The best image quality was obtained with a collimation of 4 x 1.25 mm and a table speed of 3.75 mm/rotation. A reconstruction increment of 0.6 mm achieved the best trade-off between reconstruction time and image quality. With these parameters the bone structures were depicted optimally and without artifacts. The tube current could be reduced to 50 mA without significant loss of image quality. The optimized protocol was used for routine examinations of patients with facial trauma (n = 66). Conclusions: Low-dose MSCT using thin collimation, low table speed and small reconstruction increments provides excellent data for both axial images and multiplanar reconstructions in patients with facial trauma. An additional examination in coronal orientation is therefore no longer necessary. (orig.)

  16. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available, considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
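
    As a hedged aside (not part of the benchmark itself), the primary-beam part of such a calculation reduces to the Beer-Lambert law; the sketch below uses an approximate attenuation coefficient for iron near 1 MeV and deliberately ignores the scattered component that the benchmark is actually designed to probe:

```python
import math

# Illustrative primary-beam attenuation for a mono-energetic point source
# behind a homogeneous iron plate (Beer-Lambert law). The scattered
# radiation scored in the benchmark is deliberately omitted here.
mu_over_rho = 0.0595  # cm^2/g, approximate mass attenuation of iron near 1 MeV
rho_fe = 7.874        # g/cm^3, density of iron

def primary_transmission(thickness_cm: float) -> float:
    """Fraction of primary photons surviving the plate."""
    return math.exp(-mu_over_rho * rho_fe * thickness_cm)

print(primary_transmission(2.0))  # primaries surviving 2 cm of iron
```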

  17. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperf...
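
    A minimal sketch of this kind of benchmark, assuming PyTorch and torchvision are available: an LSTM reads each MNIST image row by row and classifies from the final hidden state. The architecture and the learning rate below are illustrative choices, not those of the report:

```python
# Minimal LSTM-on-MNIST benchmark sketch: each image is treated as a
# sequence of 28 time steps with 28 features (one row per step).
import torch
import torch.nn as nn
from torchvision import datasets, transforms

class SeqLSTM(nn.Module):
    def __init__(self, hidden=128, classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=28, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):               # x: (batch, 1, 28, 28)
        seq = x.squeeze(1)              # -> (batch, 28, 28): rows as time steps
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :]) # classify from the last time step

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)
model = SeqLSTM()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # sweep lr to probe sensitivity
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:                     # one pass is enough for a smoke test
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```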

  18. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions while keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  19. Evaluation of relevant time-of-flight-MS parameters used in HPLC/MS full-scan screening methods for pesticide residues.

    Science.gov (United States)

    Mezcua, Milagros; Malato, Octavio; Martinez-Uroz, Maria Angeles; Lozano, Ana; Agüera, Ana; Fernández-Alba, Amadeo R

    2011-01-01

    An automatic screening method based on HPLC/time-of-flight (TOF)-MS (full scan) was used for the analysis of 103 non-European fruit and vegetable samples after extraction by the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method. The screening method uses a database that includes 300 pesticides, their fragments, and isotopic signals (910 ions); it identified 210 pesticides in 78 positive samples, with the highest number of detections being nine pesticides per sample. The concentrations of 97 pesticides were 100 microg/kg. Several parameters of the automatic screening method were carefully studied to avoid false positives and negatives in the studied samples; these included the peak filter (number of chromatographic peak counts) and the search criteria (retention time and error window). These parameters were affected by differences in mass accuracy and sensitivity of the two HPLC/TOF-MS systems used, which had different resolving powers (15 000 and 7500); the capabilities of these for resolving the ions included in the database from matrix ions were studied in four matrixes, viz., pepper, rice, garlic, and cauliflower. Both mass resolutions were found to be satisfactory for resolving interferences from the signals of interest in the studied matrixes.
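
    A hedged sketch of the screening logic described above: a candidate ion is accepted only if its accurate mass lies within a ppm error window and its retention time within a tolerance of the database entry. The database rows and tolerances below are hypothetical placeholders, not values from the paper:

```python
# Hypothetical pesticide database: (name, exact m/z of target ion, RT in min).
DB = [
    ("imidacloprid", 256.0596, 6.2),
    ("carbendazim", 192.0768, 4.1),
]

def matches(mz: float, rt: float, mz_tol_ppm=5.0, rt_tol_min=0.2):
    """Return database hits within the mass-error and retention-time windows."""
    hits = []
    for name, mz_db, rt_db in DB:
        ppm_error = abs(mz - mz_db) / mz_db * 1e6
        if ppm_error <= mz_tol_ppm and abs(rt - rt_db) <= rt_tol_min:
            hits.append((name, ppm_error))
    return hits

print(matches(256.0604, 6.25))  # within 5 ppm and 0.2 min -> imidacloprid
```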

  20. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  1. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  2. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  3. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite for PROOF to measure performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also use the suite to measure performance, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate its use.

  4. Spatial and optical parameters of contrails in the vortex and dispersion regime determined by means of a ground-based scanning lidar

    Energy Technology Data Exchange (ETDEWEB)

    Freudenthaler, V.; Homburg, F.; Jaeger, H. [Fraunhofer-Inst. fuer Atmosphaerische Umweltforschung (IFU), Garmisch-Partenkirchen (Germany)

    1997-12-31

    The spatial growth of individual condensation trails (contrails) of commercial aircraft in the time range from 15 s to 60 min behind the aircraft is investigated by means of a ground-based scanning backscatter lidar. The growth in width is mainly governed by wind shear and varies between 18 m/min and 140 m/min. The growth of the cross-section varies between 3500 m²/min and 25000 m²/min. These values are in agreement with results of model calculations and former field measurements. The vertical growth is often limited by the boundaries of the humid layer at flight level, but values up to 18 m/min were observed. Optical parameters like depolarization, optical depth and lidar ratio, i.e. the extinction-to-backscatter ratio, have been retrieved from the measurements at a wavelength of 532 nm. The linear depolarization rises from values as low as 0.06 for a young contrail (10 s old) to values around 0.5, typical for aged contrails. The latter indicates the transition from non-crystalline to crystalline particles in persistent contrails within a few minutes. The scatter of depolarization values measured in individual contrails is narrow, independent of the contrail's age, and suggests a rather uniform growth of the particles inside a contrail. (author) 18 refs.

  5. Third-order nonlinear optical properties of organic azo dyes by using strength of nonlinearity parameter and Z-scan technique

    Science.gov (United States)

    Motiei, H.; Jafari, A.; Naderali, R.

    2017-02-01

    In this paper, two chemically synthesized organic azo dyes, 2-(2,5-Dichloro-phenylazo)-5,5-dimethyl-cyclohexane-1,3-dione (azo dye (i)) and 5,5-Dimethyl-2-tolylazo-cyclohexane-1,3-dione (azo dye (ii)), have been studied from the point of view of optical Kerr nonlinearity. These materials were characterized by ultraviolet-visible spectroscopy. Experiments were performed using a continuous-wave diode-pumped laser at 532 nm wavelength at three intensities of the laser beam. The nonlinear absorption coefficient (β), nonlinear refractive index (n2) and third-order susceptibility (χ(3)) of the dyes were calculated. The nonlinear absorption coefficients were obtained by two methods: (1) theoretical fits to the experimental Z-scan data, and (2) the strength-of-nonlinearity curves. The values of β obtained by the two methods were approximately the same. The results demonstrate that azo dye (ii) displays stronger nonlinearity and has a lower two-photon absorption threshold than azo dye (i). The strength-of-nonlinearity parameter calculated for azo dye (ii) was higher than for azo dye (i), possibly because of the methyl group in azo dye (ii) in place of the chlorine in azo dye (i). Furthermore, the measured third-order susceptibilities of the azo dyes were of the order of 10⁻⁹ esu. These azo dyes can be suitable candidates for optical switching devices.
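
    As a hedged illustration of how β is typically extracted in such work, the sketch below fits the standard open-aperture Z-scan expression T(z) ≈ 1 - q0/(2*sqrt(2)*(1 + (z/z0)²)), with q0 = β·I0·L_eff, to a synthetic trace; all numerical values are assumptions, not the paper's data:

```python
# Fit the small-signal open-aperture Z-scan model to a synthetic trace and
# convert the fitted q0 to a two-photon absorption coefficient beta.
import numpy as np
from scipy.optimize import curve_fit

z0 = 1.0  # Rayleigh range in mm, assumed known from the beam geometry

def T_open(z, q0):
    x = z / z0
    return 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + x**2))

z = np.linspace(-10, 10, 81)
T_meas = T_open(z, 0.3) + np.random.normal(0, 0.002, z.size)  # synthetic data

(q0_fit,), _ = curve_fit(T_open, z, T_meas, p0=[0.1])
I0, L_eff = 1.0e7, 0.1          # peak intensity [W/cm^2], effective length [cm]
beta = q0_fit / (I0 * L_eff)    # cm/W
print(q0_fit, beta)
```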

  6. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need for improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool based on a modular programming concept is introduced. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source model including time and location dependence, and an advanced optical-system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
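
    A toy sketch of the figure of merit named above, assuming nothing about the MATLAB/ASAP implementation: the energy yield approximated as a time-discretized sum of irradiance times aperture area times system efficiency, with placeholder values:

```python
# Toy energy-yield calculation: hourly irradiance for one synthetic day,
# integrated with assumed aperture area and system efficiency.
hours = list(range(24))
irradiance = [max(0.0, 800.0 * (1 - abs(h - 12) / 6)) for h in hours]  # W/m^2

aperture_m2, efficiency = 1.0, 0.25  # assumed module parameters

energy_wh = sum(g * aperture_m2 * efficiency * 1.0 for g in irradiance)  # 1 h steps
print(f"energy yield for the toy day: {energy_wh:.0f} Wh")
```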

  7. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the pro-posed entropy-based benchmarking methods. Our illustrati...

  8. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  9. Size-dependent scanning parameters (kVp and mAs) for photon-counting spectral CT system in pediatric imaging: simulation study

    Science.gov (United States)

    Chen, Han; Danielsson, Mats; Xu, Cheng

    2016-06-01

    We are developing a photon-counting spectral CT detector with a small pixel size of 0.4 × 0.5 mm², offering a potential advantage for better visualization of small structures in pediatric patients. The purpose of this study is to determine the patient-size-dependent scanning parameters (kVp and mAs) for pediatric CT in two imaging cases: adipose imaging and iodinated blood imaging. Cylindrical soft-tissue phantoms of diameters between 10 and 25 cm were used to mimic patients of different ages from 0 to 15 years. For adipose imaging, a 5 mm diameter adipose sphere was assumed as the imaging target, while in the case of iodinated imaging, an iodinated blood sphere of 1 mm in diameter was assumed. By applying the geometry of a commercial CT scanner (GE Lightspeed VCT), simulations were carried out to calculate the detectability index, d′², with tube potentials varying from 40 to 140 kVp. The optimal kVp for each phantom in each imaging case was determined such that the dose-normalized detectability index, d′²/dose, is maximized. With the assumption that the detectability index in pediatric imaging is required to be the same as in typical adult imaging, the value of mAs at the optimal kVp for each phantom was selected to achieve a reference detectability index that was obtained by scanning an adult phantom (30 cm in diameter) in a typical adult CT procedure (120 kVp and 200 mAs) using a modeled energy-integrating system. For adipose imaging, the optimal kVps are 50, 60, 80, and 120 kVp, respectively, for phantoms of 10, 15, 20, and 25 cm in diameter. The corresponding mAs values required to achieve the reference detectability index are only 9%, 23%, 24%, and 54% of the mAs that is used for adult patients at 120 kVp, for 10, 15, 20, and 25 cm diameter phantoms, respectively. In the case of iodinated imaging, a tube potential of 60 kVp was found optimal for all phantoms investigated, and the mAs values required to achieve the reference detectability
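
    A purely hypothetical sketch of the optimization described above: scan the tube potential and select the kVp that maximizes the dose-normalized detectability index d′²/dose. The two model functions stand in for the Monte Carlo simulation chain and are not the authors' models:

```python
# Toy kVp optimization: both model functions below are invented stand-ins
# for the actual detectability and dose simulations.
import numpy as np

kvps = np.arange(40, 145, 5)

def detectability_sq(kvp, diameter_cm):   # hypothetical stand-in for d'^2
    return kvp**1.5 / (1.0 + np.exp(diameter_cm - kvp / 4.0))

def dose(kvp):                            # hypothetical stand-in for dose
    return (kvp / 100.0)**2.5

scores = [detectability_sq(k, 15.0) / dose(k) for k in kvps]
best = kvps[int(np.argmax(scores))]
print("optimal kVp for a 15 cm phantom (toy model):", best)
```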

  10. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  11. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
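
    A hedged sketch of a restricted nearest-neighbor matching of the kind described above, in which each reference tree can be claimed at most once and matches beyond a maximum search radius are rejected; the positions, radius, and tie-breaking rule below are assumptions:

```python
# Restricted nearest-neighbour matching of detected trees to inventory trees,
# plus the omission/commission statistics named in the abstract.
import numpy as np

def match_trees(detected, reference, max_dist=2.0):
    """detected, reference: (N,2)/(M,2) arrays of x,y positions in metres."""
    taken, pairs = set(), []
    for i, d in enumerate(detected):
        dists = np.linalg.norm(reference - d, axis=1)
        for j in np.argsort(dists):
            if dists[j] > max_dist:
                break
            if j not in taken:       # restriction: each reference used once
                taken.add(j)
                pairs.append((i, int(j)))
                break
    n_match = len(pairs)
    commission = (len(detected) - n_match) / len(detected)   # false detections
    omission = (len(reference) - n_match) / len(reference)   # missed trees
    return pairs, omission, commission

det = np.array([[0.0, 0.1], [5.0, 5.2], [9.0, 9.0]])
ref = np.array([[0.0, 0.0], [5.0, 5.0], [20.0, 20.0]])
print(match_trees(det, ref))
```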

  12. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  13. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
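
    A minimal sketch of two of the performance metrics named above, under the assumption that a truth series is available: the centered root mean square error (RMSE after removing each series' mean) and the error in the linear trend estimate:

```python
# Centered RMSE and linear-trend error of a homogenized series against the
# known truth, on synthetic monthly data.
import numpy as np

def centered_rmse(homogenized, truth):
    a = homogenized - homogenized.mean()
    b = truth - truth.mean()
    return np.sqrt(np.mean((a - b) ** 2))

def trend_error(homogenized, truth):
    t = np.arange(truth.size)
    slope_h = np.polyfit(t, homogenized, 1)[0]
    slope_t = np.polyfit(t, truth, 1)[0]
    return slope_h - slope_t  # per time step

rng = np.random.default_rng(0)
truth = 0.01 * np.arange(120) + rng.normal(0, 0.5, 120)  # synthetic series
homog = truth + rng.normal(0, 0.1, 120)
print(centered_rmse(homog, truth), trend_error(homog, truth))
```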

  14. Benchmark job – Watch out!

    CERN Document Server

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  15. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  16. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  17. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth pre

  18. Influence of scanning and reconstruction parameters on quality of three-dimensional surface models of the dental arches from cone beam computed tomography

    NARCIS (Netherlands)

    Hassan, B.; Souza, P.C.; Jacobs, R.; Berti, S.D.; van der Stelt, P.

    2010-01-01

    The study aim is to investigate the influence of scan field, mouth opening, voxel size, and segmentation threshold selections on the quality of the three-dimensional (3D) surface models of the dental arches from cone beam computed tomography (CBCT). 3D models of 25 patients scanned with one image in

  19. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    In benchmarking we often encounter parameters that are difficult to measure when making comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced for the sake of simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  20. Nuclear Scans

    Science.gov (United States)

    Nuclear scans use radioactive substances to see structures and functions inside your body. They use a special ... images. Most scans take 20 to 45 minutes. Nuclear scans can help doctors diagnose many conditions, including ...

  1. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  2. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
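
    As a hedged illustration of the simplest benchmarking adjustment in this literature (pro-rata scaling, the baseline against which movement-preserving methods such as the first-difference technique recommended above are compared), a minimal sketch with invented numbers:

```python
# Pro-rata benchmarking: scale a monthly series so it sums to the annual
# benchmark from a more accurate second source. All values are invented.
import numpy as np

monthly = np.array([10., 12., 11., 9., 8., 10., 13., 14., 12., 11., 10., 9.])
annual_benchmark = 140.0   # annual total from the benchmark source

adjusted = monthly * annual_benchmark / monthly.sum()
print(adjusted.sum())      # -> 140.0; movement is distorted at year boundaries,
                           # which is why difference-based methods are preferred
```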

  3. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  4. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient consumption.

  5. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  6. Developing scheduling benchmark tests for the Space Network

    Science.gov (United States)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests were developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  7. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport', in order to learn from the best. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly 'sustainable transport' evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark 'sustainable transport policies' against one another would be a highly complex task, which...

  8. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantage. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions for the use of external benchmarks. We provide more insight into some of the issues and challenges related to using this mechanism for performance management and advancing competitiveness in organizations.

  9. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  10. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSOs) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRAs), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...

  11. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined as well as the effects of degraded and uncorrected motion feedback, and instrument scanning efficiency by the pilot. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  12. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
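
    A hedged sketch of the comparison reported above: each calculated eigenvalue expressed as a bias relative to the benchmark keff. The calculated values below are illustrative stand-ins, not the actual MCNP5/KENO-VI results:

```python
# Express calculated eigenvalues as percentage biases against the benchmark.
k_benchmark, k_unc = 1.0012, 0.0029
calculated = {"code A / library X": 1.0045,   # hypothetical results
              "code B / library Y": 1.0090}

for code, k in calculated.items():
    bias_pct = (k - k_benchmark) / k_benchmark * 100.0
    print(f"{code}: bias = {bias_pct:+.2f}% "
          f"(benchmark 1-sigma {k_unc / k_benchmark * 100:.2f}%)")
```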

  13. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  14. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying "best" practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  15. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
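
    A hedged sketch of the standard randomized-benchmarking analysis (not the authors' code): fit the average sequence fidelity to F(m) = A*p^m + B and convert the decay parameter to an error per Clifford via r = (1 - p)(d - 1)/d, with d = 4 for two qubits; the data below are synthetic:

```python
# Fit a randomized-benchmarking decay curve and extract the error per Clifford.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    return A * p**m + B

m = np.arange(1, 50, 4)                      # sequence lengths
F = decay(m, 0.5, 0.8, 0.25) + np.random.normal(0, 0.01, m.size)  # synthetic

(A, p, B), _ = curve_fit(decay, m, F, p0=[0.5, 0.9, 0.25])
d = 4                                        # two-qubit Hilbert space dimension
r = (1 - p) * (d - 1) / d                    # error per two-qubit Clifford
print(f"p = {p:.3f}, error per Clifford r = {r:.3f}")
```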

  16. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  17. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013.

  18. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  19. Plasma Waves as a Benchmark Problem

    CERN Document Server

    Kilian, Patrick; Schreiner, Cedric; Spanier, Felix

    2016-01-01

    A large number of wave modes exist in a magnetized plasma. Their properties are determined by the interaction of particles and waves. In a simulation code, the correct treatment of field quantities and particle behavior is essential to correctly reproduce the wave properties. Consequently, plasma waves provide test problems that cover a large fraction of the simulation code. The large number of possible wave modes and the freedom to choose parameters make the selection of test problems time consuming and comparison between different codes difficult. This paper therefore aims to provide a selection of test problems, based on different wave modes and with well defined parameter values, that is accessible to a large number of simulation codes to allow for easy benchmarking and cross validation. Example results are provided for a number of plasma models. For all plasma models and wave modes that are used in the test problems, a mathematical description is provided to clarify notation and avoid possible misunderst...

  20. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part include the analysis of other

  1. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study compares these websites against a list of criteria and presents the services most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  2. JACOB: a dynamic database for computational chemistry benchmarking.

    Science.gov (United States)

    Yang, Jack; Waller, Mark P

    2012-12-21

    JACOB (just a collection of benchmarks) is a database that contains four diverse benchmark studies, which in turn include 72 data sets, with a total of 122,356 individual results. The database is built on a dynamic web framework that allows users to retrieve data from the database via predefined categories. Additional flexibility is available via user-defined text-based queries. Requested sets of results are then automatically presented as bar graphs, with the parameters of the graphs controllable via the URL. JACOB is currently available at www.wallerlab.org/jacob.

  3. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Dall'Osso, Martino; Gottardo, Carlo A; Oliveira, Alexandra; Tosi, Mia; Goertz, Florian

    2015-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
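
    A hedged sketch of the clustering strategy described above, with a plain Euclidean distance on toy kinematic feature vectors standing in for the multi-dimensional test statistic; the cluster count and linkage choice are assumptions:

```python
# Cluster model points by pairwise kinematic distance, then pick each
# cluster's medoid as its benchmark point.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
features = rng.normal(size=(50, 4))   # toy kinematic summaries per model point

dist = pdist(features)                # condensed pairwise distance matrix
clusters = fcluster(linkage(dist, method="average"), t=6, criterion="maxclust")

square = squareform(dist)
for c in np.unique(clusters):
    idx = np.where(clusters == c)[0]
    medoid = idx[np.argmin(square[np.ix_(idx, idx)].sum(axis=1))]
    print(f"cluster {c}: {idx.size} points, benchmark point = index {medoid}")
```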

  4. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 (2010-10-01 edition), GENERAL PROVISIONS, Benchmark Benefit and Benchmark-Equivalent Coverage, § 440.385: Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  5. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. Benchmarking is a methodology that determines which aspects are the most important to improve; it establishes a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, comparative evaluation among natural gas transportation companies is becoming an essential management instrument to support decision-making. (author)

  6. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  7. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  8. SU-E-T-266: Development of Evaluation System of Optimal Synchrotron Controlling Parameter for Spot Scanning Proton Therapy with Multiple Gate Irradiations in One Operation Cycle

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, T; Fujii, Y [Hokkaido University Hospital, Sapporo, Hokkaido (Japan); Hitachi Ltd., Hitachi-shi, Ibaraki (Japan); Miyamoto, N; Matsuura, T; Takao, S; Matsuzaki, Y [Hokkaido University Hospital, Sapporo, Hokkaido (Japan); Koyano, H; Shirato, H [Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Nihongi, H; Umezawa, M; Matsuda, K [Hitachi Ltd., Hitachi-shi, Ibaraki (Japan); Umegaki, K [Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido (Japan)

    2015-06-15

    Purpose: We have developed a gated spot-scanning proton beam therapy system with real-time tumor tracking. The system performs multiple gated irradiations in a single synchrotron operation cycle by controlling the wait-time for consecutive gate signals during the flat-top phase, reducing the loss of irradiation efficiency caused by irregular variation of the gate signal. Our previous studies showed that a 200 ms wait-time increases the average irradiation efficiency, but the optimal wait-time can vary from patient to patient and day to day. In this work, we developed a system that evaluates the optimal wait-time for each irradiation from the log data of the real-time-image gated proton beam therapy (RGPT) system. Methods: The developed system consists of a logger for the operation of the RGPT system and software that evaluates the optimal wait-time. The logger records the timing of gate on/off, the timing and dose of delivered beam spots, the beam energy, and the timing of X-ray irradiation. The evaluation software calculates the irradiation time for different candidate wait-times by simulating the multiple-gated irradiation operation from the logged timing information. Actual log values are used for the gate on/off times, spot irradiation times, and spot-to-spot transition times; design values are used for the acceleration and deceleration times. We applied this system to a patient treated with the RGPT system. Results: The evaluation system found an optimal wait-time of 390 ms, which reduced the irradiation time by about 10%. The irradiation time with the actual wait-time used in treatment was reproduced to within 0.2 ms. Conclusion: For a spot-scanning proton therapy system with multiple gated irradiations in one synchrotron operation cycle, an evaluation system that determines the optimal wait-time for each irradiation from log data has been developed. Funding Support: Japan Society for the Promotion of Science (JSPS) through the FIRST
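
    A toy replay of such a gate log makes the wait-time trade-off concrete. This is a deliberately simplified model with assumed timing constants, not the published evaluation software: if the gap to the next gate-on exceeds the wait-time, the synchrotron gives up the flat-top and recycles, paying deceleration plus re-acceleration overhead.

    ```python
    ACCEL_S, DECEL_S = 1.4, 0.6   # assumed synchrotron design values, seconds

    def total_time(gate_on, gate_off, wait_time):
        """Crude total delivery time for one logged sequence of gate intervals (s)."""
        total = ACCEL_S + (gate_off[-1] - gate_on[0])   # first fill + gated span
        for i in range(1, len(gate_on)):
            gap = gate_on[i] - gate_off[i - 1]
            if gap > wait_time:                          # waited in vain: recycle
                total += wait_time + DECEL_S + ACCEL_S
        return total

    gate_on  = [0.0, 2.1, 4.9, 5.6, 9.3]                 # illustrative log entries
    gate_off = [0.8, 2.9, 5.3, 6.4, 10.0]
    for wt in (0.0, 0.2, 0.39, 1.0):
        print(f"wait-time {wt:4.2f} s -> {total_time(gate_on, gate_off, wt):6.2f} s")
    ```

    Sweeping the candidate wait-times over a real log, as the abstract describes, picks out the value that minimizes this total.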

  9. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have had to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  10. Scan parameter optimization of the FIESTA sequence in a fetal sheep study

    Institute of Scientific and Technical Information of China (English)

    曹剑锋; 张玉珍; 朱铭

    2011-01-01

    Objective: To optimize the scan parameters of the FIESTA sequence in fetal sheep in order to improve image quality for better use of fetal MRI in humans. Materials and Methods: Two fetal sheep were scanned with FIESTA under varying scan parameters; the specific absorption rate (SAR) and image definition were compared across parameter settings, and the better parameters were summarized. Results: The preferred FIESTA parameters were TR 3.8 ms, TE 1.4 ms, slice thickness 7 mm, gap 1 mm, matrix 224×224, NEX 2. Conclusion: The optimized FIESTA scan parameters obtained from the fetal sheep model can be applied to human fetal imaging and are important for diagnosing fetal abnormalities.

  11. Cardiac function after chemoradiation for esophageal cancer: comparison of heart dose-volume histogram parameters to multiple gated acquisition scan changes.

    Science.gov (United States)

    Tripp, P; Malhotra, H K; Javle, M; Shaukat, A; Russo, R; De Boer, S; Podgorsak, M; Nava, H; Yang, G Y

    2005-01-01

    In this paper we determine whether preoperative chemoradiation for locally advanced esophageal cancer leads to changes in cardiac ejection fraction. This is a retrospective review of 20 patients treated at our institution for esophageal cancer between 2000 and 2002. Multiple gated acquisition cardiac scans were obtained before and after platinum-based chemoradiation (50.4 Gy). Dose-volume histograms for the heart, left ventricle and left anterior descending artery were analyzed. Outcomes assessed included the pre- to postchemoradiation ejection fraction ratio and the percentage change in ejection fraction after chemoradiation. A statistically significant difference was found between the median prechemoradiation ejection fraction (59%) and the postchemoradiation ejection fraction (54%) (P = 0.01), but the magnitude of the difference was not clinically significant. The median percentage volumes of the heart receiving more than 20, 30 and 40 Gy were 61.5%, 58.5% and 53.5%, respectively. Our data showed a clinically insignificant decline in ejection fraction following chemoradiation for esophageal cancer. We did not observe statistically or clinically significant associations between radiation dose to the heart, left ventricle or left anterior descending artery and the postchemoradiation ejection fraction.

  12. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing...

  13. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  14. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, they end up with results that are less than stellar. This paper presents the challenges of benchmarking and the reasons why benchmarking can benefit an organization in today's economy.

  15. MRI Scans

    Science.gov (United States)

    Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Health care professionals use MRI scans to diagnose a variety of conditions, from ...

  16. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry; however, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that three factors are critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  17. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we rigorously investigated, in a total of 1728 benchmarking experiments, how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
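
    A minimal sketch of this kind of benchmarking loop, with synthetic data standing in for the seven QSAR datasets (MARS itself is not in scikit-learn, so only the univariate-filter baseline the study found suboptimal is shown):

    ```python
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.model_selection import cross_val_score

    # Stand-in for a QSAR dataset: 300 compounds, 100 descriptors, 15 informative.
    X, y = make_regression(n_samples=300, n_features=100, n_informative=15,
                           noise=1.0, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)

    # Baseline: random forest on all descriptors (5-fold cross-validated R^2).
    print("all 100 descriptors:", cross_val_score(rf, X, y, cv=5).mean())

    # Univariate filter: keep the 40 descriptors most correlated with the response.
    X_uni = SelectKBest(f_regression, k=40).fit_transform(X, y)
    print("univariate top-40  :", cross_val_score(rf, X_uni, y, cv=5).mean())
    ```

    Comparing such cross-validated scores across selection methods and datasets is the kind of experiment the study repeated 1728 times.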

  18. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of the organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  19. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
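
    A sketch of the kind of metric the report describes, using hypothetical store data (the names, figures and the 10% review threshold are all illustrative, not taken from the report):

    ```python
    import statistics

    # Hypothetical annual utility records per store: (kWh, sales in USD, sqft).
    stores = {
        "store_a": (520_000, 2_100_000, 2400),
        "store_b": (610_000, 1_900_000, 2600),
        "store_c": (480_000, 2_400_000, 2200),
    }

    # Two candidate benchmark metrics: energy intensity and energy per sales dollar.
    eui = {s: kwh / sqft  for s, (kwh, sales, sqft) in stores.items()}
    eps = {s: kwh / sales for s, (kwh, sales, sqft) in stores.items()}

    median_eui = statistics.median(eui.values())
    for s in stores:
        flag = "REVIEW" if eui[s] > 1.1 * median_eui else "ok"
        print(f"{s}: {eui[s]:7.1f} kWh/sqft, {eps[s]:.3f} kWh/$ -> {flag}")
    ```

    Stores flagged well above the portfolio median are the ones the report suggests deserve extra attention.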

  20. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  1. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  2. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  3. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and the atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  4. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate the advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; and help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results, combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL), may then be used to determine the operating temperatures of the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems; ORNL's benchmarking reports on electric and hybrid electric vehicle technology provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  5. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  6. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. Prognostic role of metabolic parameters of {sup 18}F-FDG PET-CT scan performed during radiation therapy in locally advanced head and neck squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Min, Myo; Forstner, Dion [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Ingham Institute of Applied Medical Research, Liverpool, NSW (Australia); Lin, Peter; Shon, Ivan Ho; Lin, Michael [University of New South Wales, Sydney, NSW (Australia); Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); University of Western Sydney, Sydney, NSW (Australia); Lee, Mark T. [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Bray, Victoria; Fowler, Allan [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); Chicco, Andrew [Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); Tieu, Minh Thi [Calvary Mater Newcastle, Department of Radiation Oncology, Newcastle, NSW (Australia); University of Newcastle, Newcastle, NSW (Australia)

    2015-12-15

    To evaluate the prognostic value of {sup 18}F-FDG PET-CT performed in the third week (iPET) of definitive radiation therapy (RT) in patients with newly diagnosed locally advanced mucosal primary head and neck squamous cell carcinoma (MPHNSCC). Seventy-two patients with MPHNSCC treated with radical RT underwent staging PET-CT and iPET. The maximum standardised uptake value (SUV{sub max}), metabolic tumour volume (MTV) and total lesional glycolysis (TLG) of the primary tumour (PT) and index node (IN) [defined as the lymph node(s) with highest TLG] were analysed, and results were correlated with loco-regional recurrence-free survival (LRFS), disease-free survival (DFS), metastatic failure-free survival (MFFS) and overall survival (OS), using Kaplan-Meier analysis. Optimal cutoffs (OC) were derived from receiver operating characteristic curves: SUV{sub max-PT} = 4.25 g/mL, MTV{sub PT} = 3.3 cm{sup 3}, TLG{sub PT} = 9.4 g for PT, and SUV{sub max-IN} = 4.05 g/mL, MTV{sub IN} = 1.85 cm{sup 3} and TLG{sub IN} = 7.95 g for IN. Low metabolic values in iPET for PT below the OC were associated with statistically significantly better LRFS and DFS. TLG was the best predictor of outcome, with 2-year LRFS of 92.7 % vs. 71.1 % [p = 0.005, compared with SUV{sub max} (p = 0.03) and MTV (p = 0.022)], DFS of 85.9 % vs. 60.8 % [p = 0.005, compared with SUV{sub max} (p = 0.025) and MTV (p = 0.018)], MFFS of 85.9 % vs. 83.7 % [p = 0.488, compared with SUV{sub max} (p = 0.52) and MTV (p = 0.436)], and OS of 81.1 % vs. 75.0 % [p = 0.279, compared with SUV{sub max} (p = 0.345) and MTV (p = 0.512)]. There were no significant associations between the percentage reduction of primary tumour metabolic parameters and outcomes. In patients with nodal disease, metabolic parameters below the OC (for both PT and IN) were significantly associated with all oncological outcomes, while TLG was again the best predictor: LRFS of 84.0 % vs. 55.3 % (p = 0.017), DFS of 79.4 % vs. 38.6 % (p = 0.001), MFFS 86.4 % vs. 68.2 % (p = 0

  10. General scan in flavor parameter space in models with vector quark doublets and an enhancement in the B → Xsγ process

    Science.gov (United States)

    Wang, Wenyu; Xiong, Zhao-Hua; Zhao, Xin-Yan

    2016-09-01

    In models with vector-like quark doublets, the mass matrices of up and down type quarks are related. Precise diagonalization of the mass matrices has become an obstacle in numerical studies. In this work we first propose a diagonalization method. As its application, in the Standard Model with one vector-like quark doublet we present the quark mass spectrum and the Feynman rules for the calculation of B → Xsγ. We find that i) under the constraints of the CKM matrix measurements, the mass parameters in the bilinear term are constrained to small values by the small deviation from unitarity; ii) compared with the fourth-generation extension of the Standard Model, there is an enhancement of the B → Xsγ process from the vector-like quark contribution, resulting in a non-decoupling effect in such models. Supported by Natural Science Foundation of China (11375001) and Talents Foundation of Education Department of Beijing
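
    Both the numerical obstacle and the unitarity statement can be made concrete with a biunitary (singular value) decomposition. The sketch below uses a random 4×4 matrix as a stand-in for the up-type mass matrix of the SM plus one vector-like doublet; the matrix and dimensions are illustrative, not the paper's method:

    ```python
    import numpy as np

    # Any complex mass matrix M has M = U_L @ diag(m) @ U_R^dagger with m_i >= 0.
    rng = np.random.default_rng(1)
    M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

    U_L, m, U_Rh = np.linalg.svd(M)        # U_Rh is U_R^dagger
    print("mass eigenvalues:", m)

    # Verify the biunitary rotation really diagonalizes M.
    D = U_L.conj().T @ M @ U_Rh.conj().T
    assert np.allclose(D, np.diag(m))

    # A CKM-like block is the upper-left 3x3 of (U_L^u)^dagger @ U_L^d for two
    # such matrices; its small deviation from unitarity is what constrains the
    # bilinear mass parameters in the abstract.
    ```

    Numerically, SVD routines handle this decomposition stably even for hierarchical mass matrices, which is where naive eigen-decompositions tend to struggle.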

  11. General scan in flavor parameter space in the models with vector quark doublets and an enhancement in $B\\to X_s\\gamma$ process

    CERN Document Server

    Wang, Wenyu; Zhao, Xin-Yan

    2016-01-01

    In models with vector-like quark doublets, the mass matrices of up and down type quarks are related, and their precise diagonalization has become an obstacle in numerical studies. In this work we first propose a diagonalization method. As its application, in the Standard Model with one vector-like quark doublet we present the quark mass spectrum and the Feynman rules for the calculation of $B\to X_s\gamma$. We find that i) under the constraints of the CKM matrix measurements, the mass parameters in the bilinear term are constrained to a small value by the small deviation from unitarity; ii) compared with the fourth generation extension of the Standard Model, there is an enhancement to the $B\to X_s\gamma$ process in the contribution of the vector-like quark, resulting in a non-decoupling effect in such models.

  12. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  13. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  14. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
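
    Since the abstract stresses that PageRank is mathematically simple and linear-algebraic, here is a compact power-iteration sketch (dense NumPy for clarity; an actual benchmark implementation would use sparse or GraphBLAS kernels):

    ```python
    import numpy as np

    def pagerank(A, d=0.85, tol=1e-10):
        """Power iteration on the PageRank operator.
        A[i, j] = 1 encodes an edge i -> j; dangling rows jump uniformly."""
        n = A.shape[0]
        out = A.sum(axis=1)
        P = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
        r = np.full(n, 1.0 / n)
        while True:
            r_next = (1 - d) / n + d * (r @ P)   # one matrix-vector kernel
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    # Tiny three-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
    A = np.array([[0., 1., 1.],
                  [0., 0., 1.],
                  [1., 0., 0.]])
    print(pagerank(A))   # ranks sum to 1
    ```

    The whole kernel is one repeated matrix-vector product, which is why the benchmark's performance characteristics can be reasoned about analytically.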

  15. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  16. Study of multi-slice spiral CT low-dose scanning parameter optimization in the lumbar spine

    Institute of Scientific and Technical Information of China (English)

    董艳军; 张磊磊; 唐晓; 白男男; 胡蓬勃; 郭兰田

    2014-01-01

    Objective: To evaluate the effect of different lumbar spine multi-slice spiral CT scanning schemes on image quality and radiation dose, and to investigate suitable low-dose scanning parameters for multi-slice spiral CT of lumbar spine soft tissue lesions. Methods: Patients' lumbar spines were scanned with CT equipment, starting at 240 mAs and gradually reducing the tube current-time product; the radiation dose displayed by the CT host computer was recorded. Image quality was evaluated on an ADW 4.3 workstation and the data were analysed with SPSS 18.0. Results: All image quality scores at 80 mAs were no less than 3 points, with a radiation dose of 6.81 mGy; at 240 mAs, all image quality scores were 4 points, with a radiation dose of 20.22 mGy. Conclusion: A suitable low-dose multi-slice spiral CT scanning parameter for lumbar spine soft tissue lesions is 80 mAs.
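
    A quick check of the dose saving implied by the reported values:

    \[ \frac{20.22\,\text{mGy} - 6.81\,\text{mGy}}{20.22\,\text{mGy}} \approx 66\%, \]

    so the 80 mAs protocol delivers roughly one third of the 240 mAs dose while keeping every image score at 3 points or above.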

  17. Identification of modal parameters and active vibration control method for flexible structures of a big aerostat based on a Benchmark model

    Institute of Scientific and Technical Information of China (English)

    孙颖宏

    2015-01-01

    To establish a uniform, standard and practical vibration control and evaluation system for the flexible structures of a big aerostat, this paper proposes evaluation criteria for vibration control effects based on a Benchmark model, which serve as the standard for guiding the whole research process and for evaluating subsequent research results. On this basis, the paper discusses a research method in which experiments and simulation run in parallel, aimed at active vibration control based on the identification of modal parameters; this method runs through the entire research process of active vibration control of aerostat structures.

  18. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We also received feedback from some of our contractors/partners: (1) desires to participate in our training and provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  19. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    The aim of the article. The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of using the highlighted forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis. Under modern conditions of the development of economic relations and business globalization, big companies, enterprises and organizations realize the necessity of thorough and profound research into the best achievements of market subjects, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by researching, learning and adapting the best methods of realizing business processes, with the purpose of increasing their operating effectiveness and better satisfying societal needs. The main goals of using benchmarking in Consumer Cooperatives are the following: increasing the level of needs satisfaction through improved product quality, shortened goods transportation terms and improved service quality; strengthening enterprise potential, competitiveness and image; and generating and implementing new ideas and innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually defining and removing the inadequacies which obstruct enterprise development; borrowing the best methods for further enterprise development; gaining competitive advantages; technological innovation; and employee motivation. The authors' classification of benchmarking is represented by the following components: by cycle duration - strategic, operative

  20. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the future years closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest for the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms ability for automatic co-registration, accurate point cloud generation and feature extraction from multiplatform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  1. Residual Generation for the Ship Benchmark Using Structural Approach

    DEFF Research Database (Denmark)

    Cocquempot, V.; Izadi-Zamanabadi, Roozbeh; Staroswiecki, M

    1998-01-01

    The prime objective of fault-tolerant control (FTC) systems is to handle faults and discrepancies using appropriate accommodation policies. The issue of obtaining information about the various parameters and signals which have to be monitored for fault detection purposes becomes a rigorous task with the growing number of subsystems. The structural approach, presented in this paper, constitutes a general framework for providing this information when the system becomes complex. The methodology of the approach is illustrated on the ship propulsion benchmark.

  2. Portfolio selection and asset pricing under a benchmark approach

    Science.gov (United States)

    Platen, Eckhard

    2006-10-01

    The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as parameter process, one obtains a realistic theoretical market dynamics.
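
    As a sketch of the fair pricing concept mentioned above, in the standard notation of the benchmark-approach literature (assumed here, not quoted from this record): with the growth optimal portfolio \(S^{\delta_*}\) as numeraire, a fair price process \(V\) satisfies

    \[ V_t = S_t^{\delta_*}\,\mathbb{E}\!\left[\frac{V_T}{S_T^{\delta_*}}\,\middle|\,\mathcal{F}_t\right], \]

    i.e. benchmarked price processes are martingales under the real-world measure, so no change to a risk-neutral measure is needed.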

  3. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  4. Benchmarking NMR experiments: a relational database of protein pulse sequences.

    Science.gov (United States)

    Senthamarai, Russell R P; Kuprov, Ilya; Pervushin, Konstantin

    2010-03-01

    Systematic benchmarking of multi-dimensional protein NMR experiments is a critical prerequisite for optimal allocation of NMR resources for structural analysis of challenging proteins, e.g. large proteins with limited solubility or proteins prone to aggregation. We propose a set of benchmarking parameters for essential protein NMR experiments organized into a lightweight (single XML file) relational database (RDB), which includes all the necessary auxiliaries (waveforms, decoupling sequences, calibration tables, setup algorithms and an RDB management system). The database is interfaced to the Spinach library (http://spindynamics.org), which enables accurate simulation and benchmarking of NMR experiments on large spin systems. A key feature is the ability to use a single user-specified spin system to simulate the majority of deposited solution-state NMR experiments, thus providing the (hitherto unavailable) unified framework for pulse sequence evaluation. This development enables predicting the relative sensitivity of deposited implementations of NMR experiments, thus providing a basis for comparison, optimization and, eventually, automation of NMR analysis. The benchmarking is demonstrated with two proteins: the 170-amino-acid I domain of alphaXbeta2 integrin and the 440-amino-acid NS3 helicase.

  5. Head CT scan

    Science.gov (United States)

    Brain CT; Cranial CT; CT scan - skull; CT scan - head; CT scan - orbits; CT scan - sinuses; Computed tomography - cranial; CAT scan - brain ... hold your breath for short periods. A complete scan usually takes only 30 seconds to a few ...

  6. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines where the concept of benchmarking can, with some justification, be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 takes up the field of auditing. The final section of the chapter summarises the considerations on benchmarking in connection with both areas.

  7. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  8. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from implementation, and on benchmarking frameworks. Approach: Thirty res...

  9. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    Energy Technology Data Exchange (ETDEWEB)

    Leal, Luiz C [ORNL; Ivanov, E. [Institut de Radioprotection et de Surete Nucleaire

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data, capture, elastic, inelastic, and double differential elastic cross sections. The resonance analysis was performed with the code SAMMY that fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation performance in benchmark calculations.
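
    For orientation, the generalized-least-squares (Bayes) update behind such R-matrix fits is commonly written as (a textbook form, assumed here rather than taken from the record)

    \[ P' = P + M' G^{\top} V^{-1} (D - T), \qquad M' = \left(M^{-1} + G^{\top} V^{-1} G\right)^{-1}, \]

    where \(P\) and \(M\) are the prior parameter values and covariance, \(G = \partial T/\partial P\) the sensitivity matrix, \(V\) the experimental covariance, \(D\) the data and \(T\) the theoretical cross sections; \(M'\) is the resonance parameter covariance matrix used for the uncertainty calculations mentioned above.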

  10. Gaia FGK Benchmark Stars: New Candidates At Low-Metallicities

    CERN Document Server

    Hawkins, Keith; Heiter, Ulrike; Soubiran, Caroline; Blanco-Cuaresma, Sergi; Casagrande, Luca; Gilmore, Gerry; Lind, Karin; Magrini, Laura; Masseron, Thomas; Pancino, Elena; Randich, Sofia; Worley, Clare C

    2016-01-01

    We have entered an era of large spectroscopic surveys in which we can measure, through automated pipelines, the atmospheric parameters and chemical abundances for large numbers of stars. Calibrating these survey pipelines using a set of "benchmark stars" in order to evaluate the accuracy and precision of the provided parameters and abundances is of utmost importance. The recently proposed set of Gaia FGK benchmark stars of Heiter et al. (2015) has no recommended stars within the critical metallicity range of $-2.0 <$ [Fe/H] $< -1.0$ dex. In this paper, we aim to add candidate Gaia benchmark stars inside this metal-poor gap. We began with a sample of 21 metal-poor stars, which was reduced to 10 stars by requiring accurate photometry and parallaxes, and high-resolution archival spectra. The procedure used to determine the stellar parameters was similar to Heiter et al. (2015) and Jofre et al. (2014) for consistency. The effective temperature (T$_{\mathrm{eff}}$) of all candidate stars was determined using...

  11. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article focuses sharply on the concept of benchmarking by presenting and discussing its different facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project before starting. The difference between results benchmarking and process benchmarking is treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced.

  12. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  13. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important...

  14. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.
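
    Two of the closed-form benchmarks this line of work established (stated as a sketch; see the paper for the full set): the secret-key capacities of the pure-loss bosonic channel with transmissivity \(\eta\) and of the qubit erasure channel with erasure probability \(p\) are

    \[ K_{\text{loss}}(\eta) = -\log_2(1-\eta), \qquad K_{\text{erasure}}(p) = 1 - p, \]

    and a quantum repeater demonstrates a genuine advantage only if it beats the corresponding repeater-less rate.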

  15. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  16. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  17. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  18. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: improve the management and technical development of software intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths …

  19. Research on foot parameter measurement based on line structured light and plantar scanning

    Institute of Scientific and Technical Information of China (English)

    李新华; 程涛军; 马春; 孙南; 王俊青

    2013-01-01

    Existing methods for three-dimensional reconstruction of the human foot contour suffer from low accuracy, poor robustness and high cost, and do not meet the practical requirements of foot biomechanics research. A foot parameter measurement system was therefore designed that uses optical measurement technology to realize non-contact measurement. On one hand, the system processes plantar scanning images to construct the plantar contour point cloud, segment the plantar pressure area and calculate the relevant plantar parameters; on the other hand, it uses line structured light to scan and reconstruct the instep contour, then fuses the plantar and instep point clouds in a common world coordinate system into a complete foot contour cloud, from which foot girth and a series of other foot parameters are computed according to their definitions. Measurements of multiple groups of human feet on the corresponding hardware platform show that the system completes foot 3D reconstruction quickly and accurately, with good robustness.
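    The point-cloud fusion step described above can be sketched in a few lines; this is a minimal illustration only, and the transforms `T_plantar` and `T_instep` are hypothetical stand-ins for the extrinsic calibration the authors obtain from their hardware platform:

      import numpy as np

      def to_world(points, T):
          """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
          homo = np.hstack([points, np.ones((points.shape[0], 1))])
          return (homo @ T.T)[:, :3]

      # Hypothetical clouds and extrinsics standing in for the calibrated setup.
      plantar = np.random.rand(1000, 3)            # plantar-scanner cloud, scanner frame
      instep = np.random.rand(1000, 3)             # structured-light cloud, camera frame
      T_plantar, T_instep = np.eye(4), np.eye(4)   # frame -> world transforms

      # Fuse both clouds into one foot contour cloud in the shared world frame.
      foot = np.vstack([to_world(plantar, T_plantar), to_world(instep, T_instep)])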

  20. 42 CFR 440.330 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    42 Public Health, Vol. 4, 2010-10-01. Medical Assistance Programs, Services: General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.330 Benchmark health benefits coverage. Benchmark coverage is …

  1. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  2. Benchmark Assessment for Improved Learning. AACC Report

    Science.gov (United States)

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  3. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants, and the separation of infrastructure and train operation. Recent international railway benchmarking studies …

  4. Machines are benchmarked by code, not algorithms

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    This article highlights how small modifications to either the source code of a benchmark program or the compilation options may impact its behavior on a specific machine. It argues that for evaluating machines, benchmark providers and users should take care to ensure reproducibility of results based on the …

  5. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers, advantages from implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners, who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies a way to initiate benchmarking and guided them toward the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  6. Synergetic effect of benchmarking competitive advantages

    Directory of Open Access Journals (Sweden)

    N.P. Tkachova

    2011-12-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  7. Synergetic effect of benchmarking competitive advantages

    OpenAIRE

    N.P. Tkachova; P.G. Pererva

    2011-01-01

    The essence of synergistic competitive benchmarking is analyzed. A classification of types of synergies is developed. The sources of synergies in benchmarking of competitive advantages are determined. A methodological framework for defining synergy in the formation of competitive advantage is proposed.

  8. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and a good with unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.
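    As a concrete textbook instance (not taken from the paper itself), Cobb-Douglas utility exhibits several such benchmark elasticities simultaneously:

      U(x_1, x_2) = x_1^{\alpha} x_2^{1-\alpha}
      \;\Rightarrow\;
      x_1^{*}(p_1, p_2, m) = \frac{\alpha m}{p_1},

    so demand for good 1 has unit income elasticity, own-price elasticity -1, and zero cross-price elasticity with respect to p_2.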

  9. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  10. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    LI HuaJun; ZHANG Min; WANG JunRong; HU Sau-Lon James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses the improperness, including parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Subsequently, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model if one intends to detect local member damage correctly.
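    The mode-preservation check underlying that calibration can be sketched as follows; the chain-type mass and stiffness matrices below are hypothetical stand-ins for the benchmark data, not the task group's model:

      import numpy as np
      from scipy.linalg import eigh

      def first_modes(K, M, n=3):
          """Natural frequencies (Hz) of the n lowest modes of (K, M)."""
          w2, _ = eigh(K, M)   # generalized eigenproblem K v = w^2 M v
          return np.sqrt(np.abs(w2[:n])) / (2.0 * np.pi)

      # Hypothetical 12-DOF shear building: uniform story stiffness and mass.
      n_dof, k, m = 12, 1.0e7, 1.0e3
      K = (np.diag([2.0 * k] * (n_dof - 1) + [k])
           - np.diag([k] * (n_dof - 1), 1)
           - np.diag([k] * (n_dof - 1), -1))
      M = m * np.eye(n_dof)

      # A calibrated reduced model should reproduce these first three values.
      print(first_modes(K, M))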

  11. Fundamental modeling issues on benchmark structure for structural health monitoring

    Institute of Scientific and Technical Information of China (English)

    HU; Sau-Lon; James

    2009-01-01

    The IASC-ASCE Structural Health Monitoring Task Group developed a series of benchmark problems, and participants of the benchmark study were charged with using a 12-degree-of-freedom (DOF) shear building as their identification model. The present article addresses the improperness, including parameter and modeling errors, of using this particular model for the intended purpose of damage detection, while the measurements of damaged structures are synthesized from a full-order finite-element model. In addressing parameter errors, a model calibration procedure is utilized to tune the mass and stiffness matrices of the baseline identification model, and a 12-DOF shear building model that preserves the first three modes of the full-order model is obtained. Subsequently, this calibrated model is employed as the baseline model while performing the damage detection under various damage scenarios. Numerical results indicate that the 12-DOF shear building model is an over-simplified identification model, through which only idealized damage situations for the benchmark structure can be detected. It is suggested that a more sophisticated 3-dimensional frame structure model should be adopted as the identification model if one intends to detect local member damage correctly.

  12. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    DEFF Research Database (Denmark)

    Menze, Bjoern H.; Jakab, Andras; Bauer, Stefan

    2015-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients – manually annotated by up to four raters – and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%). Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

  13. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  14. Gaia FGK Benchmark Stars: Effective temperatures and surface gravities

    CERN Document Server

    Heiter, U; Gustafsson, B; Korn, A J; Soubiran, C; Thévenin, F

    2015-01-01

    Large Galactic stellar surveys and new generations of stellar atmosphere models and spectral line formation computations need to be subjected to careful calibration and validation and to benchmark tests. We focus on cool stars and aim at establishing a sample of 34 Gaia FGK Benchmark Stars with a range of different metallicities. The goal was to determine the effective temperature and the surface gravity independently from spectroscopy and atmospheric models as far as possible. Fundamental determinations of Teff and log g were obtained in a systematic way from a compilation of angular diameter measurements and bolometric fluxes, and from a homogeneous mass determination based on stellar evolution models. The derived parameters were compared to recent spectroscopic and photometric determinations and to gravity estimates based on seismic data. Most of the adopted diameter measurements have formal uncertainties around 1%, which translate into uncertainties in effective temperature of 0.5%. The measurements of bolometric …
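    The fundamental relation behind such determinations is standard stellar astrophysics (stated here for context, and consistent with the error propagation quoted above):

      F_{\mathrm{bol}} = \frac{\theta_{\mathrm{LD}}^{2}}{4}\,\sigma T_{\mathrm{eff}}^{4}
      \;\Rightarrow\;
      T_{\mathrm{eff}} = \left( \frac{4 F_{\mathrm{bol}}}{\sigma\,\theta_{\mathrm{LD}}^{2}} \right)^{1/4},

    so the relative uncertainty is (1/2)(Δθ/θ) combined in quadrature with (1/4)(ΔF/F), and a 1% angular diameter error alone contributes 0.5% in Teff; log g then follows from g = GM/R² once the mass and linear radius are known.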

  15. Synthetic benchmarks for machine olfaction: Classification, segmentation and sensor damage

    Science.gov (United States)

    Ziyatdinov, Andrey; Perera, Alexandre

    2015-01-01

    The design of signal and data processing algorithms requires a validation stage and data relevant to the validation procedure. While sharing public data sets and making use of them is a recent and still ongoing activity in the community, the synthetic benchmarks presented here are an option for researchers who need data for testing and comparing algorithms under development. The collection of synthetic benchmark data sets was generated for classification, segmentation and sensor damage scenarios, each defined at 5 difficulty levels. The published data are related to the data simulation tool, which was used to create a virtual array of 1020 sensors with a default set of parameters [1]. PMID:26217732

  16. Thyroid Scan and Uptake

    Science.gov (United States)

    Thyroid scan and uptake uses small amounts of radioactive material to examine the function and structure of the thyroid gland. A thyroid scan is a type of nuclear medicine imaging …

  17. Influencing factors and reproducibility of controlled attenuation parameters in the evaluation of fatty liver disease using FibroScan®

    Institute of Scientific and Technical Information of China (English)

    沈峰; 徐正婕; 潘勤; 陈光榆; 曹毅; 黄家懿; 范建高

    2013-01-01

    Objective To evaluate the influencing factors and reproducibility of controlled attenuation parameter (CAP) measurement of fatty liver using FibroScan®. Methods Patients with non-alcoholic fatty liver disease (NAFLD) and normal controls were recruited for CAP measurement with the new FibroScan-502 and M probe. In the NAFLD group, some subjects were repeatedly examined by the same or a different operator. The intraclass correlation coefficient (ICC) was used to evaluate the reproducibility of the operation. Results A total of 228 subjects were recruited, and 200 subjects (87.7%) obtained valid measurements; the success rates in normal and obese persons were 93.9% (77/82) and 75.0% (33/44, x2=9.548, P=0.02), respectively; female, senior and obese persons had lower examination success rates. The CAP value in the NAFLD group was 291.1±54.0 dB/m, significantly higher than in controls (216.4±43.3 dB/m, P<0.01). The ICC was 0.848 (95% CI 0.761-0.905, P<0.01) with the same operator and 0.718 (95% CI 0.607-0.896, P<0.01) with different operators. Conclusion CAP can be used for non-invasive diagnosis of fatty liver, with good reproducibility.
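    A minimal sketch of a one-way ICC estimator of the kind used to grade such reproducibility (a textbook ICC(1,1) computation on hypothetical repeated CAP readings, not the authors' code):

      import numpy as np

      def icc_oneway(x):
          """ICC(1,1) from an (n_subjects, k_repeats) measurement matrix."""
          n, k = x.shape
          grand = x.mean()
          ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
          return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

      # Hypothetical paired CAP readings (dB/m) taken by the same operator.
      cap = np.array([[290, 285], [310, 305], [250, 262], [228, 231], [301, 296]])
      print(round(icc_oneway(cap), 3))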

  18. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    Science.gov (United States)

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
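    The Dice score used throughout those evaluations is straightforward to compute; a minimal sketch on hypothetical binary masks (not BRATS data):

      import numpy as np

      def dice(a, b):
          """Dice overlap between two binary segmentation masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      rater = np.zeros((64, 64), dtype=bool); rater[20:40, 20:40] = True
      algo = np.zeros((64, 64), dtype=bool); algo[24:44, 22:42] = True
      print(round(dice(rater, algo), 3))  # the paper's inter-rater range is ~0.74-0.85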

  19. Study on a Multiregion Variable-Parameters Method for 3D Orthogonal Scanning Probe Calibration

    Institute of Scientific and Technical Information of China (English)

    万鹏; 郭俊杰; 李海涛; 王金栋

    2012-01-01

    To solve the coupling of a 3D orthogonal scanning probe among its three detecting directions, a calibration method based on multiregion, variable-parameter coefficient matrices is presented. A 3×3 calibration matrix is used in the calibration process, and the standard ball is divided into four basic areas according to the probe deformations. The probe is made to contact several planned points in the different areas of the standard ball and to step continuously toward the ball center at each point; the probe and linear grating readings are collected to establish a calibration equation, and four calibration matrices are then obtained with the least-squares method, finally achieving high-precision calibration of the 3D orthogonal scanning probe. The calibration results were experimentally verified with two other methods, showing fast and accurate calibration. This study lays a foundation for high-precision measurement with 3D orthogonal scanning probes.
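    The numerical core of such a calibration, fitting a 3x3 matrix that maps probe deflections to grating-measured displacements, can be sketched with synthetic data in place of the standard-ball measurements (the coupling matrix below is hypothetical):

      import numpy as np

      rng = np.random.default_rng(0)
      C_true = np.array([[1.02, 0.03, 0.01],   # hypothetical coupling matrix
                         [0.02, 0.98, 0.04],
                         [0.01, 0.02, 1.01]])
      probe = rng.normal(size=(200, 3))        # probe deflection readings
      disp = probe @ C_true.T + rng.normal(scale=1e-3, size=(200, 3))

      # Least-squares fit of disp ~ probe @ C.T; per-region matrices would be
      # fitted the same way on the data from each of the four ball regions.
      C_fit_T, *_ = np.linalg.lstsq(probe, disp, rcond=None)
      print(C_fit_T.T)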

  20. Analysis of Massage Manipulation Parameters Based on the FootScan Pressure Sensor Plate System

    Institute of Scientific and Technical Information of China (English)

    徐刚; 杨华元; 刘堂义; 高明; 胡银娥; 唐文超

    2016-01-01

    Objective To measure and record the mechanical and related characteristic parameters of massage (tuina) manipulation and, by observing and analyzing the test results, to clarify the essentials of massage manipulation more comprehensively and provide a reference for massage teaching and assessment, the biomechanical study of massage, and the quantification, normalization and standardization of massage techniques. Methods Using the FootScan pressure-sensor force-plate system, the parameters of groups G0, G1, G2 and G3 during the implementation of massage manipulation were observed, in order to carry out quantitative and standardized research on massage parameters. Conclusion The applied force, contact area, frequency and accumulated force (impulse) were compared across the different groups. The results suggest that for certain parameter comparisons there are differences between the manipulation groups, indicating that these mechanical parameters are important parameters of massage biomechanics. In the analysis of hand pressure, there were no significant differences between groups, suggesting that this mechanical parameter is a secondary parameter of massage biomechanics.

  1. Effect of noise correlations on randomized benchmarking

    Science.gov (United States)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
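    For context, the standard RB conventions referenced here (not restated in the record above) fit the averaged sequence fidelity to an exponential decay, while the noise-averaged outcome distribution derived in the paper is a Gamma law:

      \mathcal{F}(m) = A\,p^{m} + B,
      \qquad
      f(x; k, \theta) = \frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\,\theta^{k}},

    with m the sequence length, p the depolarizing parameter, and shape k and scale θ set by the noise correlation structure.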

  2. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  3. Benchmarking Measures of Network Influence

    Science.gov (United States)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
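    A minimal sketch of the knockout idea, with the temporally-extruded network reduced to a static graph for brevity; the graph, infection rate and trial count are hypothetical:

      import random
      import networkx as nx

      def sir_size(G, beta=0.2, trials=200, removed=None):
          """Mean final epidemic size of a discrete-time SIR process on G."""
          nodes = [n for n in G if n != removed]
          total = 0
          for _ in range(trials):
              infected, recovered = {random.choice(nodes)}, set()
              while infected:
                  new = {v for u in infected for v in G.neighbors(u)
                         if v != removed and v not in infected | recovered
                         and random.random() < beta}
                  recovered |= infected   # infected nodes recover after one step
                  infected = new
              total += len(recovered)
          return total / trials

      G = nx.erdos_renyi_graph(200, 0.03, seed=1)
      base = sir_size(G)
      tko = {n: base - sir_size(G, removed=n) for n in list(G)[:5]}
      print(tko)  # larger knockout scores flag more influential spreaders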

  4. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  5. Comparative analysis of controlled attenuation parameters in the evaluation of liver steatosis using FibroTouch and FibroScan®

    Institute of Scientific and Technical Information of China (English)

    朱梦飞; 刘静; 王洁; 陈公英; 娄国强; 施军平

    2014-01-01

    Objective To compare controlled attenuation parameter (CAP) measurements of liver steatosis obtained with the transient elastography devices FibroTouch and FibroScan® against liver biopsy pathology. Methods Patients with pathologically confirmed non-alcoholic fatty liver disease (NAFLD) or chronic hepatitis B (CHB) seen at the Affiliated Hospital of Hangzhou Normal University between July and October 2013 were examined with FibroTouch and the FibroScan-502 to determine liver fat content (CAP values). Results 41 CHB patients and 20 NAFLD patients underwent both FibroTouch and FibroScan examinations. The measured CAP values correlated positively with body mass index (BMI; r=0.42 and 0.61, P=0.02 and 0.000) and with hepatic steatosis (r=0.54 and 0.56, both P<0.001). In the 33 patients with pathologically diagnosed steatosis >5%, CAP values measured by both FibroTouch and FibroScan were significantly higher than in the 28 patients without steatosis (FibroTouch: 252.18±41.23 dB/m vs 220.68±54.75 dB/m, P=0.04; FibroScan: 291.61±56.80 dB/m vs 215.75±45.11 dB/m, P=0.000). FibroScan CAP values differed among patients with steatosis <30%, 30%-60% and >60% (F=6.82, P=0.004), with CAP in the <30% group lower than in the >60% group (258.73±52.54 dB/m vs 327.42±49.08 dB/m, P=0.04), whereas FibroTouch CAP values showed no significant differences among these three groups (F=2.30, P=0.12). Conclusion CAP values measured by both FibroTouch and FibroScan can evaluate hepatic steatosis, and CAP correlates positively with BMI and with the pathological severity of steatosis. FibroScan CAP distinguished mild from severe steatosis while FibroTouch CAP did not, which may be related to the small number of cases in this study and warrants further investigation.

  6. Study on the correlation between FibroScan values and metabolic parameters in patients with nonalcoholic fatty liver

    Institute of Scientific and Technical Information of China (English)

    李岩; 杨涛; 田海燕; 王亚珍

    2014-01-01

    Objective To use transient elastography (FibroScan) to measure and analyze liver stiffness (E) and the controlled attenuation parameter (CAP) in patients with nonalcoholic simple fatty liver (NAFL), and to explore the correlation between FibroScan values and metabolic parameters. Methods 729 NAFL patients and 300 healthy subjects underwent abdominal ultrasound examination, laboratory examination and FibroScan measurement of E and CAP. Results All clinical parameters in the NAFL group were higher than in the control group (P<0.01). The E value correlated positively with alanine aminotransferase (ALT), aspartate aminotransferase (AST) and CAP (P<0.05), and negatively with total cholesterol (TC) and high-density lipoprotein cholesterol (HDL-C) (P<0.01); the CAP value correlated positively with triglycerides (TG), fasting plasma glucose (FPG), HDL-C and the stiffness value E (P<0.05). Population characteristics affected the failure rate: it rose with increasing BMI, was higher in the elderly than in the young and in women than in men, and was high in subjects with narrow intercostal spaces (all P<0.01). Conclusion FibroScan is at present the only completely non-invasive, painless quantitative tool for evaluating and monitoring fatty liver without damage to the liver; it quantifies liver fat with high sensitivity and accuracy and is of considerable value for the diagnosis and regular follow-up of fatty liver.

  7. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    NARCIS (Netherlands)

    Appel MJ; Bouman HGM; Pieters MN; Slob W; CSR

    2001-01-01

    Five substances in the working environment for which risk evaluations were available were selected for analysis with the benchmark approach. The critical studies were analyzed for each of these substances. The toxicological parameters examined comprised both continuous and ordinal data.

  8. Experimental study of the correlation between CT scan parameters and noise in different tissues

    Institute of Scientific and Technical Information of China (English)

    赵峰; 曾勇明; 彭盛坤; 彭刚; 郁仁强; 谭欢; 刘潇; 王杰

    2012-01-01

    Objective: To investigate the correlations between CT scan parameters and the noise of different tissues, and to discuss the factors influencing image quality. Methods: An anthropomorphic phantom equivalent to human tissue was scanned with a 16-slice spiral scanner (GE BrightSpeed) at different voltages (80, 100, 120 and 140 kV), tube currents (80, 100, 120, 180, 240 and 300 mA) and pitches (0.562, 0.938, 1.375 and 1.75), using 120 kV, 300 mA and pitch 0.938 as the standard; one scanning parameter was changed at a time with the other parameters fixed. The CT value and image noise (standard deviation, SD) were recorded and analyzed. Results: There was no statistical difference in lung noise among the 80, 100, 120 and 140 kV groups (F=0.966, P>0.05). There were statistical differences between the 80 kV and 120 kV groups in the noise of the chest wall, the soft tissue of the spine and the aorta (P<0.05), while noise differences among the tube-current groups were not significant for these tissues (chest wall F=1.53, P>0.05; soft tissue of spine F=2.27, P>0.05). There were statistical differences in aortic noise among groups with different pitches (F=9.68, P<0.05). Conclusions: The noise of different tissues increases with decreasing tube voltage and tube current and with increasing pitch, but the increase of noise in the lung is not obvious. Low-dose chest scanning can therefore reduce the radiation dose while keeping image quality, without obviously increasing lung noise.
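    The reported trend is consistent with the standard photon-statistics relation for CT image noise (textbook physics, offered here for context since the record fits no explicit model):

      \sigma \propto \frac{1}{\sqrt{\mathrm{mAs}}},

    so halving the tube current-time product raises the standard deviation of CT numbers by about √2, and at fixed tube current a larger pitch, which lowers the effective mAs per slice, acts in the same direction.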

  9. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    … precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  10. Benchmarking Attosecond Physics with Atomic Hydrogen

    Science.gov (United States)

    2015-05-25

    Final report for AOARD Grant FA2386-12-1-4025, "Benchmarking attosecond physics with atomic hydrogen", covering the period 12 Mar 2012 to 11 Mar 2015.

  11. Transient elastography (FibroScan®) with controlled attenuation parameter in the assessment of liver steatosis and fibrosis in patients with nonalcoholic fatty liver disease - Where do we stand?

    Science.gov (United States)

    Mikolasevic, Ivana; Orlic, Lidija; Franjic, Neven; Hauser, Goran; Stimac, Davor; Milic, Sandra

    2016-08-28

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide. Currently, the routinely used modalities are unable to adequately determine the levels of steatosis and fibrosis (laboratory tests and ultrasonography) or cannot be applied as a screening procedure (liver biopsy). Among the non-invasive tests, transient elastography (FibroScan®, TE) with controlled attenuation parameter (CAP) has demonstrated good accuracy in quantifying the levels of liver steatosis and fibrosis in patients with NAFLD, the factors associated with the diagnosis and NAFLD progression. The method is fast, reliable and reproducible, with good intra- and interobserver levels of agreement, thus allowing for population-wide screening and disease follow-up. The initial inability of the procedure to accurately determine fibrosis and steatosis in obese patients has been addressed with the development of the obese-specific XL probe. TE with CAP is a viable alternative to ultrasonography, both as an initial assessment and during follow-up of patients with NAFLD. Its ability to exclude patients with advanced fibrosis may be used to identify low-risk NAFLD patients in whom liver biopsy is not needed, therefore reducing the risk of complications and the financial costs.

  12. Transient elastography (FibroScan®) with controlled attenuation parameter in the assessment of liver steatosis and fibrosis in patients with nonalcoholic fatty liver disease - Where do we stand?

    Science.gov (United States)

    Mikolasevic, Ivana; Orlic, Lidija; Franjic, Neven; Hauser, Goran; Stimac, Davor; Milic, Sandra

    2016-01-01

    Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide. Currently, the routinely used modalities are unable to adequately determine the levels of steatosis and fibrosis (laboratory tests and ultrasonography) or cannot be applied as a screening procedure (liver biopsy). Among the non-invasive tests, transient elastography (FibroScan®, TE) with controlled attenuation parameter (CAP) has demonstrated good accuracy in quantifying the levels of liver steatosis and fibrosis in patients with NAFLD, the factors associated with the diagnosis and NAFLD progression. The method is fast, reliable and reproducible, with good intra- and interobserver levels of agreement, thus allowing for population-wide screening and disease follow-up. The initial inability of the procedure to accurately determine fibrosis and steatosis in obese patients has been addressed with the development of the obese-specific XL probe. TE with CAP is a viable alternative to ultrasonography, both as an initial assessment and during follow-up of patients with NAFLD. Its ability to exclude patients with advanced fibrosis may be used to identify low-risk NAFLD patients in whom liver biopsy is not needed, therefore reducing the risk of complications and the financial costs. PMID:27621571

  13. A Scanning Quantum Cryogenic Atom Microscope

    CERN Document Server

    Yang, Fan; Taylor, Stephen F; Turner, Richard W; Lev, Benjamin L

    2016-01-01

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a noise floor of 300 pT and provides a 100x improvement in magnetic flux sensitivity over previous atomic scanning probe magnetometers. These capabilities are carefully benchmarked by imaging magnetic …

  14. Helix Scan: A Scan Design for Diagnosis

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; HU Yu; LI Xiaowei

    2007-01-01

    Scan design is a widely used design-for-testability technique to improve test quality and efficiency. For the scan-designed circuit, test and diagnosis of the scan chain and the circuit is an important process for silicon debug and yield learning. However, conventional scan designs and diagnosis methods abort the subsequent diagnosis process after diagnosing the scan chain if the scan chain is faulty. In this work, we propose a design-for-diagnosis scan strategy called helix scan and a diagnosis algorithm to address this issue. Unlike previous proposed methods, helix scan has the capability to carry on the diagnosis process without losing information when the scan chain is faulty. What is more, it simplifies scan chain diagnosis and achieves high diagnostic resolution as well as accuracy. Experimental results demonstrate the effectiveness of our design.

  15. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
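    The correlation part of such a study can be sketched as follows; the samples stand in for the 300 Dakota-generated BISON runs and are purely hypothetical:

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(3)
      x = rng.uniform(0.95, 1.05, size=(300, 2))  # two sampled input parameters
      y = 1200.0 + 800.0 * (x[:, 0] - 1.0) + rng.normal(scale=5.0, size=300)

      # Rank the inputs by linear (Pearson) and monotonic (Spearman) association.
      for i in range(x.shape[1]):
          r, _ = pearsonr(x[:, i], y)
          rho, _ = spearmanr(x[:, i], y)
          print(f"input {i}: pearson={r:+.2f}, spearman={rho:+.2f}")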

  16. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    … these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction sector. Two distinct perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking. In conclusion it is argued that clients and the Danish government are the intended users of the benchmarking system. The benchmarking results are primarily used by the government for monitoring and regulation of the construction sector and by clients for contractor selection. The dominating use …

  17. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  18. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    … provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic …

  19. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D 2O, H 2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  20. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associate XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  1. XWeB: the XML Warehouse Benchmark

    CERN Document Server

    Mahboubi, Hadj

    2011-01-01

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associate XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  2. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
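    One simple way to turn a data-model mismatch into a single score of the kind the framework calls for (a hedged sketch, not the authors' metric):

      import numpy as np

      def skill_score(model, obs):
          """Map RMSE against observations onto a 0-1 skill score."""
          rmse = np.sqrt(np.mean((model - obs) ** 2))
          return float(np.exp(-rmse / np.std(obs)))  # 1 = perfect match

      obs = np.sin(np.linspace(0.0, 6.0, 120))       # hypothetical benchmark series
      model = obs + np.random.default_rng(7).normal(scale=0.2, size=obs.size)
      print(round(skill_score(model, obs), 2))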

  3. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  4. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
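
    A minimal sketch of the second kind of metric described above, i.e. a scoring system that combines normalized data-model mismatches for several processes into a single benchmark score. The variables, weights and values are invented for illustration and are not from the paper:

    # Combine per-variable data-model mismatches into one weighted score.
    obs = {"GPP": 120.0, "ET": 500.0, "soil_C": 11000.0}   # benchmark (observed) values
    sim = {"GPP": 103.0, "ET": 540.0, "soil_C": 9500.0}    # land-model output
    weight = {"GPP": 0.4, "ET": 0.3, "soil_C": 0.3}        # assumed relative importance

    def benchmark_score(sim, obs, weight):
        """Weighted mean of subscores; each subscore maps relative error to [0, 1]."""
        total = 0.0
        for var in obs:
            rel_err = abs(sim[var] - obs[var]) / abs(obs[var])
            total += weight[var] * max(0.0, 1.0 - rel_err)  # 1 = perfect match
        return total

    print(f"benchmark score: {benchmark_score(sim, obs, weight):.2f}")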

  5. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties

  6. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  7. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  8. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  9. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. Instrumental fundamental parameters and selected applications of the microfocus X-ray fluorescence analysis at a scanning electron microscope; Instrumentelle Fundamentalparameter und ausgewaehlte Anwendungen der Mikrofokus-Roentgenfluoreszenzanalyse am Rasterelektronenmikroskop

    Energy Technology Data Exchange (ETDEWEB)

    Rackwitz, Vanessa

    2012-05-30

    For a decade, X-ray sources for microfocus X-ray fluorescence analysis ({mu}-XRF) have been commercially available and offer the possibility of extending the analytics at a scanning electron microscope (SEM) with an attached energy dispersive X-ray spectrometer (EDS). With {mu}-XRF it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free quantification with XRF relies on the Sherman equation, which relates the detected X-ray intensity of a fluorescence peak to the content of the corresponding element in the sample by means of fundamental parameters. The instrumental fundamental parameters of the {mu}-XRF at a SEM/EDS system are the excitation spectrum, consisting of the X-ray tube spectrum and the transmission of the X-ray optics, the geometry, and the spectrometer efficiency. Based on calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties: The algorithms known from the literature for the calculation of X-ray tube spectra are evaluated with regard to their deviations in the spectral distribution. Within this work a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is calculated from the detected one, which is measured with an especially developed setup for the direct measurement of X-ray tube spectra, and is compared to the one calculated on the basis of the model of this work. A procedure for the determination of the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular

  12. Lumbar spine CT scan

    Science.gov (United States)

    CAT scan - lumbar spine; Computed axial tomography scan - lumbar spine; Computed tomography scan - lumbar spine; CT - lower back ... your breath for short periods of time. The scan should take only 10 to 15 minutes.

  13. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

    Full Text Available Integrated circuits have entered the era of IP-based SoC (System on Chip) design, which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design the wrapper scan chains. Through the population's mutation, crossover and selection operations, the design of the test wrapper scan chains is achieved, as sketched below. Experimental verification is carried out on the international standard benchmark ITC'02. The results show that the algorithm obtains shorter longest wrapper scan chains compared with other algorithms.
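
    A minimal sketch of one way to apply DE to this kind of assignment problem: a random-key encoding in which each scan cell carries a continuous value that decodes to a chain index, and the fitness is the length of the longest wrapper scan chain. The cell lengths and DE settings are invented placeholders, not the paper's actual algorithm or ITC'02 data:

    import random

    CELLS = [32, 18, 44, 12, 27, 51, 9, 23, 38, 15]   # scan-cell counts per core (assumed)
    N_CHAINS = 3                                      # number of wrapper scan chains
    POP, GENS, F, CR = 30, 200, 0.6, 0.9              # DE population, generations, F, CR

    def decode(vec):
        """Map each continuous key in [0, N_CHAINS) to a chain index."""
        return [min(int(v), N_CHAINS - 1) for v in vec]

    def fitness(vec):
        """Longest wrapper scan chain (the quantity to minimize)."""
        loads = [0] * N_CHAINS
        for cells, chain in zip(CELLS, decode(vec)):
            loads[chain] += cells
        return max(loads)

    pop = [[random.uniform(0, N_CHAINS) for _ in CELLS] for _ in range(POP)]
    for _ in range(GENS):
        for i in range(POP):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(max(a[k] + F * (b[k] - c[k]), 0.0), N_CHAINS - 1e-9)
                     if random.random() < CR else pop[i][k]
                     for k in range(len(CELLS))]       # DE/rand/1 + binomial crossover
            if fitness(trial) <= fitness(pop[i]):      # greedy selection
                pop[i] = trial

    best = min(pop, key=fitness)
    print("assignment:", decode(best), "longest chain:", fitness(best))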

  14. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  15. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

    In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a ''reference trajectory'', similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
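
    For two qubits where one subsystem passes through a channel $\Lambda_t$, the factorization described above can be written as a concurrence evolution equation (a hedged rendering of the known two-qubit result the thesis builds on; equality holds for pure initial states $|\chi\rangle$ and relaxes to an upper bound for mixed states):

    $$ C\big[(\mathbb{1}\otimes\Lambda_t)\,|\chi\rangle\langle\chi|\big] \;=\; C\big(|\chi\rangle\langle\chi|\big)\, C\big[(\mathbb{1}\otimes\Lambda_t)\,|\phi^{+}\rangle\langle\phi^{+}|\big], \qquad |\phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle). $$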

  16. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  17. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  18. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Science.gov (United States)

    Tang, T. F.; Xu, X. Q.; Ma, C. H.; Bass, E. M.; Holland, C.; Candy, J.

    2016-03-01

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has been recently implemented in BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, which is a gyrokinetic continuum code widely used for simulation of the plasma microturbulence, to benchmark with GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform the beta scan based on "Cyclone base case parameter set." We find that the growth rate is almost the same for two codes, and the KBM mode is further destabilized as beta increases. For JET-like global circular equilibria, as the modes localize in peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code by including small electron-electron collision to damp electron modes, GYRO generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at peak pressure gradient position as GLF results. However, for high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of pedestal and the real frequency of what was originally the KBMs in ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  19. Theory of second optimization for scan experiment

    CERN Document Server

    Mo, X H

    2015-01-01

    The optimal design of a scan experiment is of great significance both for scientific research and from an economical viewpoint. Two approaches, one based on sampling techniques and the other on analytical proof, are adopted to work out the optimized scan scheme for the relevant parameters. The final results indicate that for an $n$-parameter scan experiment, $n$ energy points are necessary and sufficient for the optimal determination of these $n$ parameters; each optimal position can be acquired by a single-parameter scan (sampling method) or by analysis of an auxiliary function (analytic method); and the luminosity allocation among the points can be determined analytically with respect to the relative importance of the parameters. By virtue of the second optimization theory established in this paper, it is feasible to construct the optimal scheme for any scan experiment.
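
    As a toy numerical illustration of placing scan points and splitting luminosity, the sketch below chooses two energies and a luminosity fraction that minimize the summed parameter variances (trace of the inverse Fisher matrix) for a two-parameter Breit-Wigner lineshape. The lineshape, parameter values and luminosities are invented assumptions, not the paper's auxiliary-function construction:

    import numpy as np

    M0, G0, PEAK = 3.6861, 0.0003, 640.0   # assumed mass/width (GeV) and peak xsec (nb)

    def xsec(E, M=M0, G=G0):
        """Nonrelativistic Breit-Wigner cross section."""
        return PEAK * (G / 2) ** 2 / ((E - M) ** 2 + (G / 2) ** 2)

    def cov_trace(E1, E2, L1, L2):
        """Sum of (M, G) variances from the Fisher matrix; Poisson counts N = L*sigma."""
        F, eps = np.zeros((2, 2)), 1e-8
        for E, L in ((E1, L1), (E2, L2)):
            s = xsec(E)
            g = np.array([(xsec(E, M0 + eps) - s) / eps,       # d(sigma)/dM
                          (xsec(E, M0, G0 + eps) - s) / eps])  # d(sigma)/dG
            F += (L / s) * np.outer(g, g)
        return np.trace(np.linalg.inv(F + 1e-12 * np.eye(2)))

    grid = np.linspace(M0 - 2 * G0, M0 + 2 * G0, 41)
    E1, E2, f = min(((a, b, w) for a in grid for b in grid if b > a
                     for w in (0.25, 0.5, 0.75)),
                    key=lambda t: cov_trace(t[0], t[1], 1000 * t[2], 1000 * (1 - t[2])))
    print(f"scan at {E1:.5f} and {E2:.5f} GeV, luminosity split {f:.2f}/{1 - f:.2f}")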

  20. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes for benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them the initial set of suitable benchmarks is described. Besides that the evaluation measures are revised and extended.

  1. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into account when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
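
    For instance, indirect standardization, one of the risk-adjustment methods listed above, reduces to an observed-over-expected ratio in which the expected count applies benchmark rates to the local case mix. A minimal sketch with invented stratum rates and volumes:

    # Standardized infection ratio (SIR) via indirect standardization.
    benchmark_rate = {"ICU": 4.1, "ward": 1.2}        # infections per 1000 device-days
    local_device_days = {"ICU": 2200, "ward": 5400}   # local exposure by stratum
    observed_infections = 14

    expected = sum(benchmark_rate[s] * local_device_days[s] / 1000.0
                   for s in benchmark_rate)
    sir = observed_infections / expected              # >1 means worse than benchmark
    print(f"expected = {expected:.1f}, SIR = {sir:.2f}")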

  2. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  3. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decision regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  4. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking, and to form the sequence of stages of internal benchmarking technology, which is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standardized assessment of their performance and of their innovative experience in practice. Internal benchmarking is a modern method of satisfying those needs; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementing the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement, comparing performance and working methods against a reference standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods of reference objects in order to identify innovative practices and implement them in a particular business. Developing benchmarking at domestic enterprises requires analysis of its theoretical bases and practical experience; choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, and study appropriate areas of use and a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research into, and the establishment of, minimum acceptable levels of efficiency for the processes and activities available at the enterprise; and identifying current problems and areas that need improvement without the involvement of foreign experience

  5. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.
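
    The first-tier screening comparison described above amounts to dividing a measured media concentration by the species benchmark to form a hazard quotient. A minimal sketch with placeholder numbers (not values from the report):

    # Tier-1 screening: flag chemicals whose hazard quotient exceeds 1.
    BENCHMARKS = {  # chemical -> {species: benchmark concentration, mg/kg food}
        "cadmium": {"short-tailed shrew": 1.0, "red-tailed hawk": 1.5},
        "zinc": {"short-tailed shrew": 160.0, "red-tailed hawk": 130.0},
    }
    measured = {"cadmium": 2.3, "zinc": 40.0}  # site concentrations, mg/kg

    for chem, conc in measured.items():
        for species, bench in BENCHMARKS[chem].items():
            hq = conc / bench
            print(f"{chem:8s} {species:18s} HQ={hq:5.2f} {'EXCEEDS' if hq > 1 else 'ok'}")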

  6. Organizational and economic aspects of benchmarking innovative products at the automobile industry enterprises

    Directory of Open Access Journals (Sweden)

    L.M. Taraniuk

    2016-06-01

    Full Text Available The aim of the article is to determine the nature and characteristics of the use of benchmarking in the activity of domestic enterprises of the automobile industry under current economic conditions. The results of the analysis. The article defines the concept of benchmarking, examines the stages of benchmarking, and assesses the efficiency of benchmarking in the work of automakers. The historical aspects of the emergence of the benchmarking method in world economics are considered, as are the economic aspects of benchmarking in the work of automobile industry enterprises. The stages of benchmarking of innovative products are analysed with respect to the modern development of the productive forces and the impact of market factors on the economic activities of companies, including enterprises of the automobile industry. Attention is focused on the specifics of implementing benchmarking at companies of the automobile industry. Statistics on the number of owners of electric vehicles worldwide are considered, and the authors research the market of electric vehicles in Ukraine. The need to use benchmarking to improve the competitiveness of the national automobile industry, especially CJSC "Zaporizhia Automobile Building Plant", is also considered, and the authors suggest reasonable steps for its improvement. The authors improve a methodical approach to assessing the selection of vehicles with the best technical parameters based on benchmarking which, unlike existing approaches, is based on the calculation of an integral factor of the technical specifications of vehicles in order to establish the most competitive products among the evaluated automobile companies. The main indicators of the national production of electric vehicles are shown. Attention is paid to the development of important ways for CJSC "Zaporizhia Automobile Building Plant", where the authors established the aspects that need attention in the management of the

  7. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  8. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
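
    The benchmarking model is linear-programming based; as one plausible stand-in, the sketch below computes a DEA-style (constant returns to scale, input-oriented) efficiency score in the clear with scipy, whereas the deployed system evaluates such a program inside the SPDZ multiparty protocol so that no plaintext data is revealed. The data and the exact model form are assumptions, not taken from the paper:

    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[4.0], [2.5], [5.0], [3.0]])   # inputs per customer (e.g., debt)
    Y = np.array([[2.0], [2.0], [3.5], [1.0]])   # outputs per customer (e.g., earnings)

    def efficiency(o):
        """Min theta s.t. some nonnegative mix of peers uses <= theta * inputs of
        unit o while producing >= its outputs (input-oriented CRS DEA)."""
        n = len(X)
        c = np.r_[1.0, np.zeros(n)]                        # minimize theta
        rows, rhs = [], []
        for k in range(X.shape[1]):                        # lambda.x <= theta * x_o
            rows.append(np.r_[-X[o, k], X[:, k]]); rhs.append(0.0)
        for k in range(Y.shape[1]):                        # lambda.y >= y_o
            rows.append(np.r_[0.0, -Y[:, k]]); rhs.append(-Y[o, k])
        res = linprog(c, A_ub=rows, b_ub=rhs, bounds=[(0, None)] * (n + 1))
        return res.fun

    for o in range(len(X)):
        print(f"customer {o}: efficiency {efficiency(o):.2f}")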

  9. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance...... for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  10. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  11. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users comparing the performances of different systems, or help system engineers testing the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows to generate various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as a Java free software that can be interfaced with most existing relational database management systems. A sample usag...

  12. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...

  13. Uncertainty Analysis for OECD-NEA-UAM Benchmark Problem of TMI-1 PWR Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Seo, K.W.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    A quantification of code uncertainty is one of the main questions continuously asked by regulatory bodies like KINS. Utilities and code developers solve the issue case by case because a general answer to this question is still open. Under these circumstances, OECD-NEA has attracted global consensus on uncertainty quantification through the UAM benchmark program. The OECD-NEA benchmark II-2 problem addresses the uncertainty quantification of subchannel codes: the uncertainties of the fuel temperature and the ONB location in the TMI-1 fuel assembly are estimated under transient and steady-state conditions. In this study, the uncertainty quantification of the MATRA code is performed on this problem. A workbench platform is developed to produce the large set of inputs needed to estimate the uncertainty on the benchmark problem. Direct Monte Carlo sampling is used for random sampling from the sample PDFs. The uncertainty of the MATRA code on the OECD-NEA benchmark problem is estimated using the developed tool and the MATRA code. Uncertainty analysis on the OECD-NEA benchmark II-2 problem was performed to quantify the uncertainty of the MATRA code. Direct Monte Carlo sampling is used to extract 2000 random parameters, and the workbench program generates the input files and post-processes the calculation results. The uncertainty contributed by the input parameters was estimated for the DNBR and for the cladding and coolant temperatures.
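
    A minimal sketch of the direct Monte Carlo workflow described above, with a toy surrogate standing in for the MATRA subchannel code and invented input distributions (the study's actual inputs, PDFs and outputs differ):

    import random, statistics

    N = 2000  # sample size, as in the study

    def surrogate_dnbr(power, inlet_temp, flow):
        """Toy stand-in for a subchannel-code DNBR calculation."""
        return 2.0 * flow / (power * (1 + 0.002 * (inlet_temp - 290.0)))

    samples = []
    for _ in range(N):
        power = random.gauss(1.00, 0.02)   # relative power (assumed 2% std dev)
        tin = random.gauss(290.0, 2.0)     # inlet temperature, C (assumed)
        flow = random.gauss(1.00, 0.03)    # relative flow (assumed)
        samples.append(surrogate_dnbr(power, tin, flow))

    samples.sort()
    print("mean DNBR:", round(statistics.mean(samples), 3))
    print("2.5th-percentile DNBR:", round(samples[int(0.025 * N)], 3))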

  14. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population, and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting, RWH, and grey water, GW) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
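
    A minimal sketch of the band-rating mechanics: map per-capita daily water use to a benchmark band. The band boundaries below are invented placeholders, not the paper's or the CSH's values:

    # Assign a water-use benchmark band from litres per person per day.
    BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]

    def band(litres_per_person_per_day):
        for upper_limit, label in BANDS:
            if litres_per_person_per_day <= upper_limit:
                return label
        return "F"

    print(band(95))   # -> "B"; any saving that crosses a boundary improves the band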

  15. Lung gallium scan

    Science.gov (United States)

    Gallium 67 lung scan; Lung scan; Gallium scan - lung; Scan - lung ... Gallium is injected into a vein. The scan will be taken 6 to 24 hours after the gallium is injected. (Test time depends on whether your condition is acute or chronic .) ...

  16. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of pensions awarded in each municipality is compared with the number that would be expected if the municipality had the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a...

  17. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  18. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  19. Fault Detection and Isolation Using Analytical Redundancy Relations for the Ship Propulsion Benchmark

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh

    The prime objective of Fault-tolerant Control (FTC) systems is to handle faults and discrepancies using appropriate accommodation policies. The issue of obtaining information about various parameters and signals, which have to be monitored for fault detection purposes, becomes a rigorous task wit...... is illustrated on the ship propulsion benchmark....

  20. Research on Effect of Spectral Scanning Parameters on Quantitative Analysis Model of Total Acids and Amino Acid Nitrogen in Soy Sauce%光谱扫描参数对酱油总酸和氨基酸态氮定量分析模型的影响研究

    Institute of Scientific and Technical Information of China (English)

    胡亚云; 崔璐

    2015-01-01

    Near-infrared spectral scanning parameters suitable for the quantitative analysis of total acids and amino acid nitrogen in soy sauce were studied. By setting different resolutions and numbers of scans, transmission spectra of soy sauce were acquired in a cuvette with an optical path of 1 mm under each parameter setting, and quantitative calibration models were established using the PLS cross-validation method. The results show that the best model is obtained over the spectral scanning range of 12000~4000 cm-1 with a resolution of 8 cm-1 and 64 scans.
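
    A minimal sketch of the PLS-plus-cross-validation model selection, using scikit-learn on synthetic stand-in "spectra" (the actual study compares models built from real spectra recorded at each resolution and scan-count setting):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_points = 60, 500                     # absorbances over 12000~4000 cm-1
    X = rng.normal(size=(n_samples, n_points))        # surrogate spectra
    w = rng.normal(size=n_points)
    y = X @ w * 0.01 + rng.normal(scale=0.05, size=n_samples)  # "total acid" proxy

    for n_comp in (2, 4, 6, 8):                       # pick components by cross-validation
        r2 = cross_val_score(PLSRegression(n_components=n_comp), X, y,
                             cv=5, scoring="r2").mean()
        print(f"{n_comp} components: mean cross-validated R^2 = {r2:.3f}")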

  1. Heart PET scan

    Science.gov (United States)

    ... nuclear medicine scan; Heart positron emission tomography; Myocardial PET scan ... A PET scan requires a small amount of radioactive material (tracer). This tracer is given through a vein (IV), ...

  2. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable the management response time to be reduced by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the energy consumption as such but the energy consumption per pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible
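
    A minimal sketch of the estimation idea described above: daily energy readings are divided by a pollutant load that is only measured every ~14 days, so the load is linearly interpolated between lab samples and bracketed by the two nearest measurements to carry the estimation uncertainty into the KPI. All numbers are invented:

    import numpy as np

    days = np.arange(28)
    energy_kwh = 5000 + 300 * np.sin(days / 4.0)      # daily energy consumption
    lab_days = np.array([0, 14, 27])                  # sparse lab sampling days
    lab_cod = np.array([3200.0, 2900.0, 3100.0])      # pollutant load, kg COD/day

    cod_est = np.interp(days, lab_days, lab_cod)      # daily load estimate
    kpi = energy_kwh / cod_est                        # kWh per kg COD

    # Interval: bracket each day's load by the two neighbouring lab measurements.
    idx = np.clip(np.searchsorted(lab_days, days, side="right"), 1, len(lab_days) - 1)
    kpi_lo = energy_kwh / np.maximum(lab_cod[idx - 1], lab_cod[idx])
    kpi_hi = energy_kwh / np.minimum(lab_cod[idx - 1], lab_cod[idx])
    print(f"day 7 KPI: {kpi[7]:.2f} kWh/kgCOD (interval {kpi_lo[7]:.2f}-{kpi_hi[7]:.2f})")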

  3. Clinical study on combined diagnosis of FibroScan and multi-parameter model in liver fibrosis and cirrhosis%FibroScan与多参数模型联合诊断肝纤维化、肝硬化临床研究

    Institute of Scientific and Technical Information of China (English)

    蔺咏梅; 韩洁; 陆长春; 皇甫彤; 段作斌; 李秀花

    2015-01-01

    Objective: To explore the value of FibroScan combined with multi-parameter models in the diagnosis of liver fibrosis and cirrhosis. Methods: 52 patients with liver fibrosis and 27 patients with liver cirrhosis were enrolled. AST, ALT, G and PLT levels were measured to calculate the FibroIndex, FIB-4 and APRI model values, and FibroScan examinations were performed. The diagnostic efficiency of the model parameters combined with FibroScan was evaluated against each method alone for liver fibrosis and cirrhosis. Results: APRI, FIB-4, FibroIndex and FibroScan values were significantly higher in the cirrhosis group than in the fibrosis group, with a rising trend within the groups, and FibroScan correlated significantly with the three models (P<0.05); combining FibroScan with the multi-parameter models improved the diagnostic accuracy and AUC. Conclusion: FibroScan combined with multi-parameter models can improve the diagnostic accuracy for hepatic fibrosis and cirrhosis.
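
    The serum models mentioned above follow standard published formulas; a minimal sketch (the patient values are invented, and the FibroIndex is omitted because its coefficients are not given here):

    import math

    def apri(ast, ast_uln, platelets):
        """AST-to-platelet ratio index; platelets in 10^9/L."""
        return (ast / ast_uln) / platelets * 100.0

    def fib4(age, ast, alt, platelets):
        """FIB-4 index; platelets in 10^9/L."""
        return age * ast / (platelets * math.sqrt(alt))

    # Hypothetical patient
    print("APRI:", round(apri(ast=80, ast_uln=40, platelets=120), 2))
    print("FIB-4:", round(fib4(age=55, ast=80, alt=60, platelets=120), 2))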

  4. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;

    2014-01-01

    . Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene...

  5. Seven Benchmarks for Information Technology Investment.

    Science.gov (United States)

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  6. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
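
    A minimal sketch of the complete-information idea behind such a nonparametric benchmark: each simulated patient carries a latent uniform tolerance, so their toxicity outcome is known at every dose, and the benchmark picks the dose whose empirical toxicity rate is closest to the target. The true dose-toxicity curve is invented:

    import random

    TRUE_P = [0.05, 0.12, 0.25, 0.40, 0.55]   # assumed true toxicity probabilities
    TARGET, N_PATIENTS, N_TRIALS = 0.25, 30, 10000
    best = min(range(len(TRUE_P)), key=lambda d: abs(TRUE_P[d] - TARGET))

    hits = 0
    for _ in range(N_TRIALS):
        tol = [random.random() for _ in range(N_PATIENTS)]            # latent tolerances
        phat = [sum(u <= p for u in tol) / N_PATIENTS for p in TRUE_P]
        hits += min(range(len(TRUE_P)), key=lambda d: abs(phat[d] - TARGET)) == best
    print("benchmark (upper-bound) accuracy:", hits / N_TRIALS)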

  7. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance. Employed professionals will further be more open to consider bureaucratic benchmarking information provided by the administration if they are aware that their professional performance is low. To test our hypotheses, we rely on a sample of archival public professional performance data for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  8. Benchmarking Peer Production Mechanisms, Processes & Practices

    Science.gov (United States)

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  9. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
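
    The system-level metrics named here are simple ratios; a minimal sketch with illustrative numbers (not LBNL benchmark data):

    ```python
    def air_change_rate(airflow_cfm: float, room_volume_ft3: float) -> float:
        """Air changes per hour: cubic feet moved per hour over room volume."""
        return airflow_cfm * 60.0 / room_volume_ft3

    def watts_per_cfm(fan_power_w: float, airflow_cfm: float) -> float:
        """Air-handling efficiency: fan power per unit of supplied airflow."""
        return fan_power_w / airflow_cfm

    # Illustrative system: 20,000 cfm of recirculation air for a 10,000 ft^3 cleanroom
    print(air_change_rate(20_000, 10_000))  # 120.0 air changes per hour
    print(watts_per_cfm(14_000, 20_000))    # 0.7 W/cfm
    ```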

  10. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  11. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

    In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm×10 cm×11 cm) were measured at 61° and 121° using the time-of-flight...

  12. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  13. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  14. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  15. Thermodynamic benchmark study using Biacore technology

    NARCIS (Netherlands)

    Navratilova, I.; Papalia, G.A.; Rich, R.L.; Bedinger, D.; Brophy, S.; Condon, B.; Deng, T.; Emerick, A.W.; Guan, H.W.; Hayden, T.; Heutmekers, T.; Hoorelbeke, B.; McCroskey, M.C.; Murphy, M.M.; Nakagawa, T.; Parmeggiani, F.; Xiaochun, Q.; Rebe, S.; Nenad, T.; Tsang, T.; Waddell, M.B.; Zhang, F.F.; Leavitt, S.; Myszka, D.G.

    2007-01-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and

  16. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  17. Alberta K-12 ESL Proficiency Benchmarks

    Science.gov (United States)

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  18. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  19. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. The main challenge in...

  20. Benchmarking Year Five Students' Reading Abilities

    Science.gov (United States)

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  1. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  2. Scan BIST with biased scan test signals

    Institute of Scientific and Technical Information of China (English)

    XIANG Dong; CHEN MingJing; SUN JiaGuang

    2008-01-01

    The conventional test-per-scan built-in self-test (BIST) scheme needs a number of shift cycles followed by one capture cycle. Fault effects received by the scan flip-flops are shifted out while shifting in the next test vector, as in scan testing. Unlike deterministic testing, it is unnecessary to apply a complete test vector to the scan chains. A new scan-based BIST scheme is proposed by properly controlling the test signals of the scan chains. Different biased random values are assigned to the test signals of scan flip-flops in separate scan chains. Capture cycles can be inserted at any clock cycle if necessary. A new testability estimation procedure according to the proposed testing scheme is presented. A greedy procedure is proposed to select a weight for each scan chain. Experimental results show that the proposed method can greatly improve the test effectiveness of scan-based BIST, and most circuits can obtain complete fault coverage or come very close to it.
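
    A minimal sketch of the per-chain biasing idea, assuming each scan chain is simply given a probability of shifting in a '1'; the testability estimation and greedy weight-selection procedures from the paper are not reproduced here.

    ```python
    import random

    def biased_scan_patterns(chain_weights, chain_length, n_patterns, seed=0):
        """Generate per-chain biased pseudo-random scan stimuli.

        chain_weights[i] is the probability of shifting a '1' into chain i,
        mimicking the biased test signals assigned to separate scan chains."""
        rng = random.Random(seed)
        patterns = []
        for _ in range(n_patterns):
            pattern = [[1 if rng.random() < w else 0 for _ in range(chain_length)]
                       for w in chain_weights]
            patterns.append(pattern)
        return patterns

    # Example: three scan chains biased toward 0, balanced, and biased toward 1
    pats = biased_scan_patterns([0.25, 0.5, 0.75], chain_length=8, n_patterns=2)
    print(pats[0])
    ```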

  3. Higgs-Boson Benchmarks in Agreement with CDM, EWPO and BPO

    CERN Document Server

    Heinemeyer, S

    2007-01-01

    We explore `benchmark planes' in the Minimal Supersymmetric Standard Model (MSSM) that are in agreement with the measured cold dark matter (CDM) density, electroweak precision observables (EWPO) and B physics observables (BPO). The M_A-tan_beta planes are specified assuming that gaugino masses m_{1/2}, soft trilinear supersymmetry-breaking parameters A_0 and the soft supersymmetry-breaking contributions m_0 to the squark and slepton masses are universal, but not those associated with the Higgs multiplets (the NUHM framework). We discuss the prospects for probing experimentally these benchmark surfaces at the Tevatron collider, the LHC and the ILC.

  4. Gaia Benchmark stars and their twins in the Gaia-ESO Survey

    CERN Document Server

    Jofre, Paula

    2015-01-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent work on these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  5. Gaia benchmark stars and their twins in the Gaia-ESO Survey

    Science.gov (United States)

    Jofré, P.

    2016-09-01

    The Gaia benchmark stars are stars with very precise stellar parameters that cover a wide range in the HR diagram at various metallicities. They are meant to be good representatives of typical FGK stars in the Milky Way. Currently, they are used by several spectroscopic surveys to validate and calibrate the methods that analyse the data. I review our recent work on these stars. Additionally, by applying our new method to find stellar twins in the Gaia-ESO Survey, I discuss how representative of Milky Way stars the benchmark stars are and how they are distributed in space.

  6. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  7. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    Science.gov (United States)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  8. The ACRV Picking Benchmark (APB): A Robotic Shelf Picking Benchmark to Foster Reproducible Research

    OpenAIRE

    Leitner, Jürgen; Tow, Adam W.; Dean, Jake E.; Suenderhauf, Niko; Durham, Joseph W.; Cooper, Matthew; Eich, Markus; Lehnert, Christopher; Mangels, Ruben; McCool, Christopher; Kujala, Peter; Nicholson, Lachlan; Van Pham, Trung; Sergeant, James; Wu, Liao

    2016-01-01

    Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic pi...

  9. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  10. Correlation between the parameters quantified by chest low-dose CT scan and airflow limitation examined by spirometry

    Institute of Scientific and Technical Information of China (English)

    陈淑靖; 白春学; 顾宇彤; 张静; 余勇夫; 计海婴; 王桂芳; 李丽; 龚颖; 陈刚

    2012-01-01

    Objective: To investigate the correlation between quantitative chest low-dose CT (LDCT) indices and the presence and severity of airflow limitation, and to establish a preliminary correlation model. Methods: Forty-eight patients above 40 years of age with a smoking history, who underwent LDCT and spirometry on the same day at our hospital between July 2008 and February 2012, were included. Correlations between quantitative LDCT indices and spirometric indices were analysed, and regression models were built incorporating age, gender and other factors. Receiver operating characteristic (ROC) curves were used to identify the quantitative LDCT index that best discriminates airflow limitation. Results: After adjustment for age, gender and body mass index, EV and EI correlated negatively with FEV1, FEV1% predicted, FEV1/FVC and TLC% predicted (P<0.05), but showed no significant correlation with RV/TLC% predicted (P>0.05). The best regression models were FEV1/FVC% = 94.17 + 25.31×gender - 0.58×age - 10.84×ln(EI(%)) and FEV1% predicted = 141.76 - 0.78×age - 14.07×ln(EI(%)). By ROC analysis, EI was the quantitative LDCT index that best identified airflow limitation. Conclusions: EI may be used to determine whether airflow limitation is present, and the regression equations allow estimation of the severity of airflow limitation and emphysema. LDCT is a promising tool for the early diagnosis of COPD.
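
    The reported regression equations can be evaluated directly. A small sketch follows; the 0/1 gender coding is an assumption, since the record does not specify it.

    ```python
    import math

    def fev1_fvc_percent(gender: int, age: float, ei_percent: float) -> float:
        """FEV1/FVC% regression from the abstract; gender coding (0/1) assumed."""
        return 94.17 + 25.31 * gender - 0.58 * age - 10.84 * math.log(ei_percent)

    def fev1_percent_predicted(age: float, ei_percent: float) -> float:
        """FEV1% predicted regression from the abstract."""
        return 141.76 - 0.78 * age - 14.07 * math.log(ei_percent)

    # Example: a 60-year-old with an emphysema index (EI) of 5%
    print(fev1_fvc_percent(gender=1, age=60, ei_percent=5.0))
    print(fev1_percent_predicted(age=60, ei_percent=5.0))
    ```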

  11. Automatic generation of reaction energy databases from highly accurate atomization energy benchmark sets.

    Science.gov (United States)

    Margraf, Johannes T; Ranasinghe, Duminda S; Bartlett, Rodney J

    2017-03-31

    In this contribution, we discuss how reaction energy benchmark sets can automatically be created from arbitrary atomization energy databases. As an example, over 11 000 reaction energies derived from the W4-11 database, as well as some relevant subsets are reported. Importantly, there is only very modest computational overhead involved in computing >11 000 reaction energies compared to 140 atomization energies, since the rate-determining step for either benchmark is performing the same 140 quantum chemical calculations. The performance of commonly used electronic structure methods for the new database is analyzed. This allows investigating the relationship between the performances for atomization and reaction energy benchmarks based on an identical set of molecules. The atomization energy is found to be a weak predictor for the overall usefulness of a method. The performance of density functional approximations in light of the number of empirically optimized parameters used in their design is also discussed.
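
    The construction works because atoms are conserved in a balanced reaction, so the atomic reference energies cancel and the reaction energy follows from atomization energies (AE) alone. A sketch with placeholder values, not W4-11 data:

    ```python
    def reaction_energy(atomization, reactants, products):
        """Reaction energy from atomization energies.

        With atoms conserved, dE_rxn = sum(AE of reactants) - sum(AE of products);
        a positive value indicates an endothermic reaction."""
        side_ae = lambda side: sum(n * atomization[m] for m, n in side.items())
        return side_ae(reactants) - side_ae(products)

    # Hypothetical AEs in kcal/mol for the isomerization HCN -> HNC
    atomization = {"HCN": 313.0, "HNC": 298.0}
    print(reaction_energy(atomization, {"HCN": 1}, {"HNC": 1}))  # ~ +15 kcal/mol
    ```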

  12. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose......Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose...

  13. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    The article analyses the evolution and possible applications of benchmarking in the telecommunication sphere. It examines the essence of benchmarking by generalising the approaches of different scientists to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying it out. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Generalising the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by level of conduct (branch, inter-branch and international); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  14. Influence of some experimental parameters on the results of differential scanning calorimetry - DSC.

    Directory of Open Access Journals (Sweden)

    Cláudia Bernal

    2002-09-01

    A series of experiments were performed in order to demonstrate to undergraduate students or users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  15. Analysis of parameters of pyramid polygon mirror for wide-band laser scanning heat treatment

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The service properties of materials can be effectively improved by laser heat treatment. Wide-band scanning makes the temperature distribution more uniform, reduces deformation, and lessens the effect of each scan on the previously treated track. This paper presents formulas to calculate the scanning frequency, scanning angle, scanning width and excess focal length of the wide-band laser scanning device. The variation of scanning width with scanning height was also studied experimentally and compared with the analytical results. Heat treatment experiments on SNCM220 and SCM440 steels using wide-band scanning with a 2 kW CO2 laser showed that the average hardened width reaches 15.5 mm. The hardened width of SNCM220 decreased rapidly with increasing travel speed, while that of SCM440 was not sensitive to travel speed. With increasing travel speed, the hardened depth decreased and the hardness increased.
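
    For orientation, the first-order textbook relations for a rotating polygon scanner can be sketched as follows; the paper's own analytic expressions for the pyramid polygon mirror (including the excess focal length) are more detailed and are not reproduced here.

    ```python
    import math

    def scan_parameters(n_facets: int, rot_speed_rps: float, work_dist_mm: float):
        """Simplified first-order relations for a rotating polygon scan mirror."""
        scan_freq_hz = n_facets * rot_speed_rps   # one scan line per facet pass
        facet_angle = 2 * math.pi / n_facets      # mechanical angle per facet
        optical_angle = 2 * facet_angle           # reflection doubles the deflection
        width_mm = 2 * work_dist_mm * math.tan(optical_angle / 2)
        return scan_freq_hz, math.degrees(optical_angle), width_mm

    # Example: 8 facets at 50 rev/s, workpiece 100 mm from the mirror
    print(scan_parameters(8, 50.0, 100.0))  # (400.0 Hz, 90.0 deg, 200.0 mm)
    ```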

  16. Fault Detection of Wind Turbines with Uncertain Parameters

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas;

    2012-01-01

    In this paper a set-membership approach for fault detection of a benchmark wind turbine is proposed. The benchmark represents relevant fault scenarios in the control system, including sensor, actuator and system faults. In addition we also consider parameter uncertainties and uncertainties on the...

  17. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    OpenAIRE

    Zaharchenko Lolita A.; Kolesnyk Oksana A.

    2013-01-01

    The article analyses evolution of development and possibilities of application of benchmarking in the telecommunication sphere. It studies essence of benchmarking on the basis of generalisation of approaches of different scientists to definition of this notion. In order to improve activity of telecommunication operators, the article identifies the benchmarking technology and main factors, that determine success of the operator in the modern market economy, and the mechanism of benchmarking an...

  18. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  19. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  20. Conventional cerebrospinal fluid scanning

    Energy Technology Data Exchange (ETDEWEB)

    Schicha, H.

    1985-06-01

    Conventional cerebrospinal fluid scanning (CSF scanning) today is mainly carried out in addition to computerized tomography to obtain information about liquor flow kinetics. Especially in patients with communicating obstructive hydrocephalus, CSF scanning is clinically useful for the decision for shunt surgery. In patients with intracranial cysts, CSF scanning can provide information about liquor circulation. Further indications for CSF scanning include the assessment of shunt patency especially in children, as well as the detection and localization of cerebrospinal fluid leaks.

  1. 47 CFR 69.108 - Transport rate benchmark.

    Science.gov (United States)

    2010-10-01

    ... with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone company... benchmark ratio of 9.6 to 1 or higher. (c) If a telephone company's initial transport rates are based on...

  2. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  3. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  4. Revaluing benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually keeping researchers and practitioners from studying... organizational relations, behaviors and actions. In closing, it is briefly considered how to study the calculative practices of benchmarking.

  5. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  10. 29 CFR 1952.203 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  16. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. Estimation of the Kinetic Parameters and the Critical Rate of Temperature Rise in the Thermal Explosion from the Exothermic Autocatalytic Decomposition of 3,4-Bis(4′-nitrofurazan-3′-yl)-2-oxofurazan (BNFOF) Using Non-isothermal Differential Scanning Calorimetry

    Institute of Scientific and Technical Information of China (English)

    ZHAO Feng-Qi; ZHOU Yan-Shui; ZHAO Hong-An; GAO Sheng-Li; SHI Qi-Zhen; LU Gui-E; JIANG Jin-Yong; GUO Peng-Jiang; HU Rong-Zu; ZHANG Hai; XIA Zhi-Ming; GAO Hong-Xu; CHEN Pei; LUO Yang; ZHANG Zhi-Zhong

    2006-01-01

    A method of estimating the kinetic parameters and the critical rate of temperature rise in the thermal explosion for the autocatalytic decomposition of 3,4-bis(4′-nitrofurazan-3′-yl)-2-oxofurazan (BNFOF) with non-isothermal differential scanning calorimetry (DSC) was presented. The rate equation for the decomposition of BNFOF was established, and information was obtained on the rate of temperature increase in BNFOF when the empiric-order autocatalytic decomposition was converted into thermal explosion.
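
    The record does not reproduce the authors' autocatalytic rate equation. As a generic illustration of extracting kinetic parameters from non-isothermal DSC peak data, a Kissinger-type fit could look like the sketch below (illustrative data, not BNFOF measurements).

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def kissinger(beta_k_per_min, tp_kelvin):
        """Kissinger analysis: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp).

        Fits peak temperatures Tp measured at several heating rates beta;
        returns activation energy Ea (J/mol) and pre-exponential A (1/min)."""
        beta = np.asarray(beta_k_per_min, dtype=float)
        tp = np.asarray(tp_kelvin, dtype=float)
        slope, intercept = np.polyfit(1.0 / tp, np.log(beta / tp**2), 1)
        ea = -slope * R                   # slope is -Ea/R
        a = np.exp(intercept) * ea / R    # intercept is ln(A*R/Ea)
        return ea, a

    # Illustrative heating rates (K/min) and exothermic peak temperatures (K)
    print(kissinger([2, 5, 10, 20], [430.0, 438.5, 445.2, 452.1]))
    ```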

  18. Characterization of addressability by simultaneous randomized benchmarking

    CERN Document Server

    Gambetta, Jay M; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-01-01

    The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present the results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
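
    A minimal sketch of the "difference of average gate fidelities" idea from the abstract; the protocol's exact estimator differs in detail, and the example numbers are invented.

    ```python
    def addressability_error(fid_individual: float, fid_simultaneous: float) -> float:
        """Shift in average gate error between individual and simultaneous
        randomized benchmarking of the same qubit; a positive value suggests
        cross-talk or unwanted interactions with the neighbouring qubits."""
        r_individual = 1.0 - fid_individual      # error per gate, qubit alone
        r_simultaneous = 1.0 - fid_simultaneous  # error per gate, all qubits driven
        return r_simultaneous - r_individual

    # Example: 99.8% average gate fidelity alone vs 99.5% when benchmarked together
    print(addressability_error(0.998, 0.995))  # 0.003 extra error per gate
    ```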

  19. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  20. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    The present nonlinear model reduction methods are unsuited to nonlinear benchmark buildings, as their vibration equations belong to a non-affine system. Meanwhile, controllers designed directly by nonlinear control strategies are of high order and difficult to apply in practice. Therefore, a new active vibration control approach that fits nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is more practical because the built model can be reduced by the balanced reduction method based on the empirical Grammian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  1. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  2. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
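
    The triad kernel at the heart of STREAM is tiny; the numpy sketch below shows what it measures. It is only indicative: the real benchmark is C with OpenMP, and exposing NUMA effects additionally requires pinning threads and memory to specific packages (e.g. with numactl), which plain Python does not do.

    ```python
    import time
    import numpy as np

    def triad_bandwidth_gbs(n=20_000_000, repeats=5):
        """Time the STREAM 'triad' kernel a = b + s*c and report GB/s.
        A rough sketch only; numpy creates temporaries the C benchmark avoids."""
        b, c, s = np.random.rand(n), np.random.rand(n), 3.0
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            a = b + s * c
            best = min(best, time.perf_counter() - t0)
        return 3 * n * 8 / best / 1e9  # read b, read c, write a (8-byte doubles)

    print(f"{triad_bandwidth_gbs():.1f} GB/s")
    ```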

  3. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  4. Argonne Code Center: benchmark problem book

    Energy Technology Data Exchange (ETDEWEB)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR)

  5. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  6. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform code-to-code comparisons on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  7. Experiences in Benchmarking of Autonomic Systems

    Science.gov (United States)

    Etchevers, Xavier; Coupaye, Thierry; Vachet, Guy

    Autonomic computing promises improvements of systems quality of service in terms of availability, reliability, performance, security, etc. However, little research and experimental results have so far demonstrated this assertion, nor provided proof of the return on investment stemming from the efforts that introducing autonomic features requires. Existing works in the area of benchmarking of autonomic systems can be characterized by their qualitative and fragmented approaches. Still a crucial need is to provide generic (i.e. independent from business, technology, architecture and implementation choices) autonomic computing benchmarking tools for evaluating and/or comparing autonomic systems from a technical and, ultimately, an economical point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

  8. Healthy Foodservice Benchmarking and Leading Practices

    Science.gov (United States)

    2012-07-01

    ...reduction in weight status, the prevalence of high waist circumference values, and fasting insulin as compared to the control group. The high-risk portion... whole experienced a slight reduction in weight status (BMI z-score), the prevalence of high waist circumference values, and fasting insulin as... risk for contamination with pesticide residues was 30% lower for organically grown...

  9. Benchmarking polish basic metal manufacturing companies

    Directory of Open Access Journals (Sweden)

    P. Pomykalski

    2014-01-01

    Basic metal manufacturing companies are undergoing substantial strategic changes resulting from global changes in demand. During such periods, managers should closely monitor and benchmark the financial results of companies operating in their sector. Proper and timely identification of the consequences of changes in these areas may be crucial as managers seek to exploit opportunities and avoid threats. The paper examines changes in the financial ratios of basic metal manufacturing companies operating in Poland in the period 2006-2011.

  10. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user specified SQL queries into functions and automatical...

  11. Self-interacting Dark Matter Benchmarks

    OpenAIRE

    Kaplinghat, M.; Tulin, S.; Yu, H-B

    2017-01-01

    Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. We present benchmark models that illustrate characteristic features of dark matter that is self-interacting through a new light mediator. These models have self-interactions large enough to change dark matter densities in the centers of galaxies in accord with observations, while remaining compatible with large-scale structur...

  12. BENCHMARK AS INSTRUMENT OF CRISIS MANAGEMENT

    OpenAIRE

    Haievskyi, Vladyslav

    2017-01-01

    The article determines the essence of benchmarking by synthesising the concepts of “benchmark” and “crisis management”. Benchmarking is presented as an instrument of crisis management: a powerful tool with which an entity carries out comparative analysis of its processes and activities, allowing it to reduce production costs under resource constraints, to raise profit, and to succeed in optimising its strategy.

  13. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based destinations for their holidays. Finding ways to promote Repovesi national park is therefore significant, in a bid to reinforce its competitiveness. The topic of this thesis is both to identify good marketing strategies used by the Zhangjiajie national park, via benchmarking, and to provide some suggestions to Repovesi national park. The method used in t...

  14. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D1 as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the...

  15. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services-related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.

  16. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  17. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
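
    A sketch of the kind of benchmark-normalized accounting such a classification enables, assuming an HS06-like per-core performance score; the function name and numbers are illustrative, not CERN's actual scheme.

    ```python
    def normalized_cpu_usage(wall_seconds: float, n_cores: int,
                             per_core_score: float) -> float:
        """Scale raw core-seconds by a per-core benchmark score so that usage
        on heterogeneous worker nodes becomes directly comparable; the node
        classification supplies the score without re-benchmarking every VM."""
        return wall_seconds * n_cores * per_core_score

    # Example: 1 hour on 4 cores of a node scoring 10 benchmark units per core
    print(normalized_cpu_usage(3600, 4, 10.0))  # 144000.0 normalized unit-seconds
    ```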

  18. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
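
    To illustrate what a "simple, scalar diagnostic" means in practice, the sketch below evaluates the volume-averaged kinetic energy of a Taylor-Green initial condition, the quantity whose decay is typically tracked when comparing codes on this problem; this is a generic illustration, not MUSIC source code.

        import numpy as np

        def mean_kinetic_energy(rho, u, v, w):
            """Volume-averaged kinetic energy: a single number per snapshot
            that can be compared across codes without interpolating fields."""
            return 0.5 * np.mean(rho * (u**2 + v**2 + w**2))

        # Taylor-Green vortex initial condition on a [0, 2*pi)^3 grid.
        n = 64
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
        u = np.cos(X) * np.sin(Y) * np.sin(Z)
        v = -np.sin(X) * np.cos(Y) * np.sin(Z)
        w = np.zeros_like(u)
        rho = np.ones_like(u)

        print(mean_kinetic_energy(rho, u, v, w))  # 0.125 for this initial state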

  19. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section]

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems: the steady, two-dimensional flow over a backward-facing step; the low Reynolds number flow around a circular cylinder; and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems: the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  20. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model was carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed a complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed by using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal corrections for recovering shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of the amplitude of isotropic S-wave velocity variations, and the difficulty of retrieving the magnitude of azimuthal anisotropy ...

  1. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Full Text Available Benchmarking microprocessors provides a way for consumers to evaluate the performance of the processors. This is done by using either synthetic or real-world applications. There are a number of benchmarks that exist today to assist consumers in evaluating the vast number of microprocessors that are available in the market. In this paper we investigate the various benchmarks available for evaluating microprocessors for embedded automotive applications. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench and MiBench. A comparison of existing benchmarks is given based on relevant characteristics of automotive applications, supporting a proper recommendation when benchmarking processors for automotive applications.

  2. RBC nuclear scan

    Science.gov (United States)

    An RBC nuclear scan uses small amounts of radioactive material to ...

  3. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices (see Fig. 1). The dimensions and abilities of the platform have been chosen in order to reflect typical manual ...

  4. Information-Theoretic Benchmarking of Land Surface Models

    Science.gov (United States)

    Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong

    2016-04-01

    Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the last of which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
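
    The information measures underlying this kind of benchmark can be estimated nonparametrically; the sketch below shows a plain histogram-based mutual information estimate between two series, a generic illustration rather than the authors' implementation.

        import numpy as np

        def mutual_information(x, y, bins=20):
            """Histogram estimate of I(X;Y) in bits; no parametric model
            (and hence no prior) links the two variables."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        # Example: how much information a noisy prediction carries about obs.
        rng = np.random.default_rng(0)
        obs = rng.normal(size=10_000)
        pred = obs + 0.5 * rng.normal(size=10_000)
        print(mutual_information(obs, pred))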

  5. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available What is a Thyroid Scan and Uptake? A thyroid scan is ...

  6. Multi-Site Model Benchmarking: Do Land Surface Models Leak Information?

    Science.gov (United States)

    Mocko, D. M.; Nearing, G. S.; Kumar, S.

    2014-12-01

    It is widely reported that land surface models (LSMs) are unable to use all of the information available from boundary conditions [1-4]. Evidence for this is that statistical models typically out-perform physically based LSMs given the same forcing data. We demonstrate that this conclusion is not necessarily correct. The statistical models don't consider parameters, and the experiments cannot distinguish between information loss and bad information (disinformation). Recent work has outlined a rigorous interpretation of model benchmarking that allows us to measure the amount of information provided by model physics and the amount of information lost due to model error [5]. Recognizing that a complete understanding of model adequacy requires treatment across multiple locations [6] allows us to expand benchmarking theory to segregate bad and missing information. The result is a benchmarking method that can distinguish error due to parameters, forcing data, and model structure - and, unlike other approaches, does not rely on parameter estimation, which can only provide estimates of parameter uncertainty conditional on model physics. Our new benchmarking methodology was compared with the standard methodology to measure information loss in several LSMs included in the current and developmental generations of the North American Land Data Assimilation System. The classical experiments implied that each of these models loses a significant amount of information from the forcing data; however, the new methodology shows clearly that this information did not actually exist in the boundary conditions in the first place. Almost all model bias can be attributed to incorrect parameters, and most of the LSMs actually add information (via model physics) to what is available in the boundary conditions. 1 Abramowitz, G., Geophys Res Let 32, (2005). 2 Gupta, H. V., et al., Water Resour Res 48, (2012). 3 Luo, Y. Q. et al., Biogeosciences 9, (2012). 4 Han, E., et al., J Hydromet (2014). 5

  7. Pulmonary ventilation/perfusion scan

    Science.gov (United States)

    V/Q scan; Ventilation/perfusion scan; Lung ventilation/perfusion scan ... A pulmonary ventilation/perfusion scan is actually two tests. They may be done separately or together. During the perfusion scan, a health care provider injects ...

  8. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, the acetate loading periods and rates, and the microbially mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. The major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark lies in the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
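
    As an illustration of the rate formulation mentioned above, a dual-Monod rate law couples the microbial reduction rate to the availability of both the electron donor (acetate) and a terminal electron acceptor such as U(VI); the functional form is standard, but all parameter values below are placeholders rather than the benchmark specification.

        def dual_monod_rate(k_max, biomass, donor, K_donor, acceptor, K_acceptor):
            """Monod-type rate law: reduction rate limited by electron donor
            (acetate) and terminal electron acceptor (e.g. U(VI)) availability."""
            return (k_max * biomass
                    * donor / (K_donor + donor)
                    * acceptor / (K_acceptor + acceptor))

        # Illustrative U(VI) bioreduction rate at a given acetate level [mol/L].
        rate = dual_monod_rate(k_max=1e-5, biomass=1e-4,
                               donor=3e-3, K_donor=1e-4,
                               acceptor=1e-6, K_acceptor=5e-7)
        print(rate)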

  9. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  10. MODEL BENCHMARK WITH EXPERIMENT AT THE SNS LINAC

    Energy Technology Data Exchange (ETDEWEB)

    Shishlo, Andrei P. [ORNL]; Aleksandrov, Alexander V. [ORNL]; Liu, Yun [ORNL]; Plum, Michael A. [ORNL]

    2016-01-01

    The history of attempts to perform transverse matching in the Spallation Neutron Source (SNS) superconducting linac (SCL) is discussed. The SCL has 9 laser wire (LW) stations to perform non-destructive measurements of the transverse beam profiles. Any matching starts with the measurement of the initial Twiss parameters, which in the SNS case was done by using the first four LW stations at the beginning of the superconducting linac. For years, consistency between data from all LW stations could not be achieved. This problem was resolved only after significant improvements in the accuracy of the phase scans of the SCL cavities, more precise analysis of all available scan data, better optics planning, and initial longitudinal Twiss parameter measurements. This paper discusses these developed procedures in detail.

  11. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species, leading to a process called "electrochemical migration" or "electromigration." Electromigration is not commonly accounted for in reactive transport simulations, and benchmark problems demonstrating it have not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes
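
    The coupling described above can be made concrete with the Nernst-Planck flux under a null-current constraint: the electric potential gradient adjusts so that the net charge flux vanishes. The 1-D sketch below is a generic illustration of that constraint (schematic units, with the F/RT factor absorbed into the potential term), not one of the three benchmark problems.

        import numpy as np

        def nernst_planck_fluxes(c, D, z, dx):
            """Fluxes J_i = -D_i dc_i/dx - z_i D_i c_i dphi/dx, with dphi/dx
            chosen so that sum_i z_i J_i = 0 (no net electric current)."""
            dcdx = np.diff(c, axis=1) / dx         # gradients at cell interfaces
            cmid = 0.5 * (c[:, 1:] + c[:, :-1])    # interface concentrations
            zD = z[:, None] * D[:, None]
            grad_phi = (-np.sum(zD * dcdx, axis=0)
                        / np.sum(z[:, None] * zD * cmid, axis=0))
            return -D[:, None] * dcdx - zD * cmid * grad_phi

        # Fast H+ diffusing with slower Cl-: electromigration retards the H+.
        c = np.array([[1.0, 0.5, 0.1],      # H+  concentrations at 3 nodes
                      [1.0, 0.5, 0.1]])     # Cl-
        D = np.array([9.3e-9, 2.0e-9])      # free-solution diffusivities [m^2/s]
        z = np.array([1, -1])
        print(nernst_planck_fluxes(c, D, z, dx=0.01))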

  12. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation)]; Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant {alpha}{sub s} at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n{sub f}=3,4,5 and uses the MS scheme for {alpha}{sub s} and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

  13. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  14. Benchmarking East Tennessee`s economic capacity

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  15. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler-generated code is measured by comparing different OpenMP constructs and compilers. Where possible, we also compare with the hand-coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  16. Robust randomized benchmarking of quantum processes

    CERN Document Server

    Magesan, Easwar; Emerson, Joseph

    2010-01-01

    We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay as a function of a perturbative expansion of the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.

  17. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course...... experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  18. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different approaches to this problem have been proposed. For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios

  19. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Full Text Available Steelworks are currently in a period of permanent change, marked by ever stronger competitive pressure. Managers must therefore work out how to decrease production costs, how to overcome the competition and how to survive in the world market. More attention should be paid to modern managerial methods of market research and comparison with the competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare chosen steelworks and to indicate new directions for their development, with the possibility of increasing the productivity of steel production.

  20. Benchmarking Naval Shipbuilding with 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    Science.gov (United States)

    2015-09-20

    Only fragments of the source document survive in this record: a fleet-composition chart (ballistic missile submarines; 11 nuclear-powered aircraft carriers; 48 nuclear-powered attack submarines; 0–4 nuclear-powered cruise missile submarines; notes: the colored parts of the chart reflect the Navy's old counting rules; SSBNs = ballistic missile submarines; SSGNs = guided missile submarines) and a list of 3D laser scanning applications, including crime scene documentation, forensics, and accident reconstruction, and, in architectural and civil engineering, the capture of as-built documentation.

  1. SDSSJ14584479+3720215: A Benchmark JHK Blazar Light Curve from the 2MASS Calibration Scans

    CERN Document Server

    Davenport, James R A; Becker, Andrew C; Macleod, Chelsea L; Cutri, Roc M

    2015-01-01

    Active galactic nuclei (AGNs) are well known to exhibit flux variability across a wide range of wavelength regimes, but the precise origin of the variability at different wavelengths remains unclear. To investigate the relatively unexplored near-IR variability of the most luminous AGNs, we conduct a search for variability using well-sampled JHKs-band light curves from the 2MASS survey calibration fields. Our sample includes 27 known quasars with an average of 924 epochs of observation over three years, as well as one spectroscopically confirmed blazar (SDSSJ14584479+3720215) with 1972 epochs of data. This is the best-sampled NIR photometric blazar light curve to date, and it exhibits correlated, stochastic variability that we characterize with continuous auto-regressive moving average (CARMA) models. None of the other 26 known quasars had detectable variability in the 2MASS bands above the photometric uncertainty. A blind search of the 2MASS calibration field light curves for AGN candidates based on fitting CARMA models ...

  2. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Full Text Available Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. We analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis and, finally, a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive; the vast range
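
    For illustration, the internal-consistency step can be reproduced in a few lines with the standard Cronbach's alpha formula; this is the textbook computation, not the authors' SPSS/EQS scripts, and the example scores are made up.

        import numpy as np

        def cronbach_alpha(items):
            """alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
            for an (observations x items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Five respondents scoring a four-item integration scale.
        scores = [[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4], [2, 2, 3, 2], [4, 4, 4, 5]]
        print(round(cronbach_alpha(scores), 3))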

  3. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    Science.gov (United States)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  4. Benchmark duration of work hours for development of fatigue symptoms in Japanese workers with adjustment for job-related stress.

    Science.gov (United States)

    Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi

    2008-12-01

    The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
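
    A sketch of the benchmark dose approach as applied here: given a fitted multiple logistic model for symptom prevalence, the benchmark duration is the daily working time at which the added risk over a baseline reaches the benchmark response. The coefficients, the stress term, and the 8-hour baseline below are illustrative assumptions, not the study's fitted values.

        import math
        from scipy.optimize import brentq

        def prevalence(hours, b0, b_hours, stress):
            """Multiple logistic model: P = 1/(1 + exp(-(b0 + b_hours*h + stress)))."""
            return 1.0 / (1.0 + math.exp(-(b0 + b_hours * hours + stress)))

        def benchmark_duration(bmr, b0, b_hours, stress, baseline_hours=8.0):
            """Daily hours at which added risk over the baseline equals the BMR."""
            p0 = prevalence(baseline_hours, b0, b_hours, stress)
            return brentq(lambda h: prevalence(h, b0, b_hours, stress) - p0 - bmr,
                          baseline_hours, 24.0)

        # BMR of 5% with made-up coefficients and a "worst stress" offset:
        print(benchmark_duration(bmr=0.05, b0=-4.0, b_hours=0.25, stress=1.0))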

  5. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  6. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)]

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.
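
    A minimal sketch of the kind of point-kinetics calculation with adiabatic heating described above, using one effective delayed-neutron group and an energy-based reactivity feedback; all parameter values are made up for illustration, and TREKIN itself is not reproduced here.

        # Point kinetics: dP/dt = ((rho - beta)/Lambda) P + lambda C,
        #                 dC/dt = (beta/Lambda) P - lambda C,
        # with rho = rod worth + feedback(total energy) under adiabatic heating.
        beta, lam, Lam = 0.0073, 0.08, 9e-4  # delayed fraction, decay const, gen. time
        alpha = -1e-4                        # energy feedback coefficient (illustrative)
        rho_rod = 0.015                      # step rod-bank insertion [dk/k]

        def transient(dt=1e-5, t_end=2.0):
            P, C, E = 1.0, beta / (lam * Lam), 0.0   # power, precursors, energy
            for _ in range(int(t_end / dt)):
                rho = rho_rod + alpha * E            # adiabatic feedback from energy
                dP = ((rho - beta) / Lam) * P + lam * C
                dC = (beta / Lam) * P - lam * C
                P, C, E = P + dt * dP, C + dt * dC, E + dt * P
            return P, E

        print(transient())  # self-limited power and total deposited energy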

  7. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  8. Baseline and benchmark model development for hotels

    Science.gov (United States)

    Hooks, Edward T., Jr.

    The hotel industry currently faces rising energy costs and requires the tools to maximize energy efficiency. In order to achieve this goal, a clear definition of the current methods used to measure and monitor energy consumption is made. The main purpose is to uncover the limitations of the most commonly practiced analysis strategies and to present methods that can potentially overcome those limitations. The techniques presented can be used for measurement and verification of energy efficiency plans and retrofits. Modern energy modeling tools are also introduced to demonstrate how they can be utilized for benchmark and baseline models, providing the ability to obtain energy saving recommendations and parametric analysis to explore energy savings potential. These same energy models can be used in design decisions for new construction. An energy model is created of a resort-style hotel that covers over one million square feet and has over one thousand rooms, and a simulation and detailed analysis is performed on a hotel room. The planning process for creating the model and acquiring data from the hotel room to calibrate and verify the simulation is explained, along with how this type of modeling can benefit future baseline and benchmarking strategies for the hotel industry. Ultimately, the conclusion addresses some common obstacles the hotel industry faces in reaching its full potential of energy efficiency and how these techniques can best serve it.

  9. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, which together account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and, more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
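
    The law in question is, in the authors' related publications, a power-law "progress curve" for the timing of successive attacks by a given perpetrator or pair of opponents; a toy illustration follows, with made-up parameter values and the form quoted rather than derived here.

        def interval_before_attack(n, tau1=100.0, b=0.5):
            """Progress-curve benchmark: interval before the n-th event,
            tau_n = tau_1 * n**(-b); escalation corresponds to b > 0."""
            return tau1 * n ** (-b)

        print([round(interval_before_attack(n), 1) for n in range(1, 6)])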

  10. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  11. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with the cohesin subunit STAG1 (SA1).
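
    The core of any such region-mapping query is the classic interval-overlap predicate: two regions overlap iff each starts before the other ends. A self-contained SQLite illustration follows; it shows the predicate only, not the RegMap algorithm or the PostgreSQL/MySQL setups benchmarked in the paper, and the table and column names are invented.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE tfbs  (chrom TEXT, start INT, end_ INT, name TEXT);
            CREATE TABLE marks (chrom TEXT, start INT, end_ INT, name TEXT);
            CREATE INDEX i_tfbs  ON tfbs  (chrom, start, end_);
            CREATE INDEX i_marks ON marks (chrom, start, end_);
        """)
        con.executemany("INSERT INTO tfbs VALUES (?,?,?,?)",
                        [("chr1", 100, 200, "HNF4G"), ("chr1", 500, 600, "HNF4G")])
        con.executemany("INSERT INTO marks VALUES (?,?,?,?)",
                        [("chr1", 150, 250, "STAG1"), ("chr1", 700, 800, "STAG1")])

        # Overlap predicate: a.start <= b.end AND b.start <= a.end (same chrom).
        rows = con.execute("""
            SELECT a.name, b.name, MAX(a.start, b.start), MIN(a.end_, b.end_)
            FROM tfbs a JOIN marks b
              ON a.chrom = b.chrom AND a.start <= b.end_ AND b.start <= a.end_
        """).fetchall()
        print(rows)  # one overlapping pair: (100-200) x (150-250)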

  12. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation for Bachelor of Business Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  13. Transparency benchmarking on audio watermarks and steganography

    Science.gov (United States)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack-based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in time, frequency and wavelet domain) as well as an attack-based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency-enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms under the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.

  14. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, once algorithm development has finished and marketing aspects emerge, compliance with specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  15. Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges

    Science.gov (United States)

    Khorrami, Mehdi R.

    2010-01-01

    The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessarily comprehensive aeroacoustic data base are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed - the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.

  16. An approach to radiation safety department benchmarking in academic and medical facilities.

    Science.gov (United States)

    Harvey, Richard P

    2015-02-01

    Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines, and they can likewise be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are just that, and they can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased-controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.

  17. Simulation benchmark based on THAI-experiment on dissolution of a steam stratification by natural convection

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, M., E-mail: freitag@becker-technologies.com; Schmidt, E.; Gupta, S.; Poss, G.

    2016-04-01

    Highlights: • We studied the generation and dissolution of steam stratification under natural convection. • We performed a computer code benchmark including blind and open phases. • The dissolution of the stratification was predicted only qualitatively by LP and CFD models during the blind simulation phase. - Abstract: Locally enriched hydrogen, such as in a stratification, may contribute to early containment failure in the course of severe nuclear reactor accidents. During accident sequences steam might likewise accumulate in stratifications, which can directly influence the distribution and ignitability of hydrogen mixtures in containments. An international code benchmark including Computational Fluid Dynamics (CFD) and Lumped Parameter (LP) codes was conducted in the frame of the German THAI program. The basis for the benchmark was experiment TH24.3, which investigates the dissolution of a steam layer subject to natural convection in the steam-air atmosphere of the THAI vessel. The test provides validation data for the development of CFD and LP models to simulate the atmosphere in the containment of a nuclear reactor installation. In test TH24.3 saturated steam is injected into the upper third of the vessel, forming a stratification layer which is then mixed by a superposed thermal convection. In this paper the simulation benchmark is evaluated, in addition to a general discussion of the experimental transient of test TH24.3. Concerning the steam stratification build-up and its dilution, the numerical programs showed very different results during the blind evaluation phase but improved noticeably during the open simulation phase.

  18. Accuracy and Uncertainty Analysis of PSBT Benchmark Exercises Using a Subchannel Code MATRA

    Directory of Open Access Journals (Sweden)

    Dae-Hyun Hwang

    2012-01-01

    Full Text Available In the framework of the OECD/NRC PSBT benchmark, the subchannel grade void distribution data and DNB data were assessed by a subchannel code, MATRA. The prediction accuracy and uncertainty of the zone-averaged void fraction at the central region of the 5 × 5 test bundle were evaluated for the steady-state and transient benchmark data. Optimum values of the turbulent mixing parameter were evaluated for the subchannel exit temperature distribution benchmark. The influence of the mixing vanes on the subchannel flow distribution was investigated through a CFD analysis. In addition, a regionwise turbulent mixing model was examined to account for the nonhomogeneous mixing characteristics caused by the vane effect. The steady-state DNB benchmark data with uniform and nonuniform axial power shapes were evaluated by employing various DNB prediction models: EPRI bundle CHF correlation, AECL-IPPE 1995 CHF lookup table, and representative mechanistic DNB models such as a sublayer dryout model and a bubble crowding model. The DNBR prediction uncertainties for various DNB models were evaluated from a Monte-Carlo simulation for a selected steady-state condition.
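
    A schematic of the Monte-Carlo step used for the DNBR prediction uncertainty: perturb the model inputs with assumed uncertainties and collect output statistics. The closed-form "model" and the standard deviations below are placeholders standing in for a MATRA evaluation, not the PSBT values.

        import numpy as np

        rng = np.random.default_rng(42)

        def dnbr_model(pressure, mass_flux, power):
            """Placeholder for a subchannel-code DNBR evaluation (fixed geometry)."""
            return 2.0 * (pressure / 15.0) ** 0.3 * (mass_flux / 3000.0) ** 0.8 / power

        nominal = {"pressure": 15.0, "mass_flux": 3000.0, "power": 1.0}
        sigma = {"pressure": 0.10, "mass_flux": 30.0, "power": 0.01}  # assumed 1-sigma

        samples = [dnbr_model(*(rng.normal(nominal[k], sigma[k]) for k in nominal))
                   for _ in range(10_000)]
        print(f"DNBR = {np.mean(samples):.3f} +/- {np.std(samples):.3f}")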

  19. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    Science.gov (United States)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
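
    The failure-index step can be illustrated with the Benzeggagh-Kenane (B-K) mixed-mode criterion, a common choice for graphite/epoxy; the criterion actually used in the report is not restated here, and the toughness values and exponent below are illustrative.

        def bk_failure_index(G_I, G_II, G_Ic=0.17, G_IIc=0.49, eta=1.62):
            """Failure index = G_total / G_c, with the B-K interpolation
            G_c = G_Ic + (G_IIc - G_Ic) * (G_II/G_total)**eta.
            Delamination onset is predicted where the index reaches 1."""
            G_total = G_I + G_II
            if G_total == 0.0:
                return 0.0
            G_c = G_Ic + (G_IIc - G_Ic) * (G_II / G_total) ** eta
            return G_total / G_c

        # A DCB specimen is essentially pure mode I, so G_II stays near zero:
        print(bk_failure_index(G_I=0.12, G_II=0.005))  # toughnesses in kJ/m^2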

  20. Low Cost Scan Test by Test Correlation Utilization

    Institute of Scientific and Technical Information of China (English)

    Ozgur Sinanoglu

    2007-01-01

    Scan-based testing methodologies remedy the testability problem of sequential circuits; yet they suffer from prolonged test time and excessive test power due to numerous shift operations. The correlation among test data, along with the high density of unspecified bits in test data, enables the utilization of the existing test data in the scan chain for the generation of the subsequent test stimulus, thus reducing both test time and test data volume. We propose a pair of scan approaches in this paper; in the first approach, a test stimulus partially consists of the preceding stimulus, while in the second approach, a test stimulus partially consists of the preceding test response bits. Both proposed scan-based test schemes access only a subset of scan cells for loading the subsequent test stimulus while freezing the remaining scan cells with the preceding test data, thus decreasing scan chain transitions during shift operations. The proposed scan architecture is coupled with test data manipulation techniques, which include test stimuli ordering and partitioning algorithms, boosting test time reductions. The experimental results confirm that test time reductions exceeding 97%, and test power reductions exceeding 99%, can be achieved by the proposed scan-based testing methodologies on the larger ISCAS89 benchmark circuits.
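
    The idea behind both schemes can be sketched in a few lines: with don't-care ('X') bits dominating the next stimulus, only the scan cells whose required values conflict with the data already sitting in the chain need to be loaded. This is an illustration of the principle only, not the paper's scan architecture.

        def cells_to_update(chain, next_stimulus):
            """Positions where the next stimulus (with 'X' don't-cares) cannot
            reuse the value already in the scan chain."""
            return [i for i, (have, want) in enumerate(zip(chain, next_stimulus))
                    if want != "X" and want != have]

        chain = "1010110"      # chain contents after the previous test
        stimulus = "1XX0X11"   # next stimulus, largely unspecified
        print(cells_to_update(chain, stimulus))  # only cell 6 must be rewritten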

  1. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. In our study, parallel simulated annealing combined with the simplex method is employed to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified against the Kuhn-Tucker conditions.
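
    As an illustration of checking optimality with the Kuhn-Tucker (KKT) conditions, the sketch below verifies stationarity, primal and dual feasibility, and complementary slackness for a toy inequality-constrained problem; it is a stand-in, far simpler than the benchmark problem itself.

    ```python
    import numpy as np

    # Toy problem: min f = x1^2 + x2^2  subject to  g = 1 - x1 - x2 <= 0.
    # Candidate optimum: x* = (0.5, 0.5) with multiplier lambda* = 1.
    def grad_f(x):  return np.array([2.0 * x[0], 2.0 * x[1]])
    def g(x):       return 1.0 - x[0] - x[1]
    def grad_g(x):  return np.array([-1.0, -1.0])

    def kkt_check(x, lam, tol=1e-8):
        stationarity = np.linalg.norm(grad_f(x) + lam * grad_g(x)) < tol
        feasibility = g(x) <= tol
        dual_feasibility = lam >= -tol
        compl_slackness = abs(lam * g(x)) < tol
        return stationarity and feasibility and dual_feasibility and compl_slackness

    x_star, lam_star = np.array([0.5, 0.5]), 1.0
    print(kkt_check(x_star, lam_star))  # True -> x* satisfies the KKT conditions
    ```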

  2. Taking the Battle Upstream: Towards a Benchmarking Role for NATO

    Science.gov (United States)

    2012-09-01

    various versions of the ‘Balanced Scorecard’ methodology) still use widely differing categories, performance indicators and metrics. Because of the...questionable questionnaires, or “benchmarking tourism.” On the upside, the survey also found an upward trend in the quantity of explicit defense...we) against participating in any “benchmarking activity that is nothing more than industrial tourism and/or copying. The first step in benchmarking

  3. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use to evaluate the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  4. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...

  5. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered...... include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes....

  6. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  7. Laser Scanning in Forests

    OpenAIRE

    Håkan Olsson; Juha Hyyppä; Markus Holopainen

    2012-01-01

    The introduction of Airborne Laser Scanning (ALS) to forests has been revolutionary during the last decade. This development was facilitated by combining earlier ranging lidar discoveries [1–5], with experience obtained from full-waveform ranging radar [6,7] to new airborne laser scanning systems which had components such as a GNSS receiver (Global Navigation Satellite System), IMU (Inertial Measurement Unit) and a scanning mechanism. Since the first commercial ALS in 1994, new ALS-based fore...

  8. Radionucleotide scanning in osteomyelitis

    Energy Technology Data Exchange (ETDEWEB)

    Sachs, W.; Kanat, I.O.

    1986-07-01

    Radionuclide bone scanning can be an excellent adjunct to the standard radiograph and clinical findings in the diagnosis of osteomyelitis. Bone scans have the ability to detect osteomyelitis far in advance of the standard radiograph. The sequential use of technetium and gallium has been useful in differentiating cellulitis and osteomyelitis. Serial scanning with technetium and gallium may be used to monitor the response of osteomyelitis to antibiotic therapy.

  9. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    Science.gov (United States)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  10. Parameter Extraction of Highway Route Based on 3-D Laser Scanning Technology

    Institute of Scientific and Technical Information of China (English)

    王鑫森; 孔立; 郑德华

    2013-01-01

    A 3-D laser scanner was applied to highway surveying. The point cloud density was optimized by controlling the point cloud acquisition parameters, the registration accuracy was improved by using a quaternion-based ICP algorithm, and noise points were removed with a local outlier algorithm. This yields a high-quality point cloud of the highway pavement, which provides a reliable data source for the subsequent extraction of design parameters. The road boundary lines and center lines are then extracted with an edge detection algorithm, and finally the route design parameters are computed. Experiments show that the results fully meet the needs of the subsequent construction design.
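
    The quaternion-based registration step can be sketched with Horn's closed-form solution, which computes the optimal rotation for one ICP iteration once point correspondences are fixed. This is a generic NumPy illustration, not the authors' implementation; the synthetic test data are assumptions.

    ```python
    import numpy as np

    def horn_quaternion_align(p, q):
        """One ICP alignment step via Horn's closed-form quaternion solution:
        find R, t minimizing ||R p_i + t - q_i|| for paired points (n, 3)."""
        pc, qc = p - p.mean(0), q - q.mean(0)
        s = pc.T @ qc                                 # 3x3 cross-covariance
        tr = np.trace(s)
        delta = np.array([s[1, 2] - s[2, 1],
                          s[2, 0] - s[0, 2],
                          s[0, 1] - s[1, 0]])
        n = np.empty((4, 4))
        n[0, 0] = tr
        n[0, 1:] = n[1:, 0] = delta
        n[1:, 1:] = s + s.T - tr * np.eye(3)
        w, v = np.linalg.eigh(n)
        qw, qx, qy, qz = v[:, -1]                     # quaternion of max eigenvalue
        r = np.array([
            [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
            [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
            [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
        ])
        t = q.mean(0) - r @ p.mean(0)
        return r, t

    # Quick self-check with a synthetic rigid motion (assumed test data)
    rng = np.random.default_rng(1)
    p = rng.normal(size=(200, 3))
    ang = np.radians(30.0)
    r_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0,          0.0,         1.0]])
    q = p @ r_true.T + np.array([1.0, -2.0, 0.5])
    r, t = horn_quaternion_align(p, q)
    print(np.abs(r - r_true).max(), np.abs(p @ r.T + t - q).max())  # both ~0
    ```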

  11. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  12. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  13. Bone scanning in otolaryngology.

    Science.gov (United States)

    Noyek, A M

    1979-09-01

    Modern radionuclide bone scanning has introduced a new concept in physiologic and anatomic diagnostic imaging to general medicine. As otolaryngologists must diagnose and treat disease in relation to the bony and/or cartilaginous supporting structures of the neurocranium and upper airway, this modality should be included in the otolaryngologist's diagnostic armamentarium. It is the purpose of this manuscript to study the specific applications of bone scanning to our specialty at this time, based on clinical experience over the past three years. This thesis describes the development of bone scanning in general (history of nuclear medicine and nuclear physics; history of bone scanning in particular). General concepts in nuclear medicine are then presented; these include a discussion of nuclear semantics, principles of radioactive emissions, the properties of 99mTc as a radionuclide, and the tracer principle. On the basis of these general concepts, specific concepts in bone scanning are then brought forth. The physiology of bone and the action of the bone scan agents are presented. Further discussion considers the availability and production of the bone scan agent, patient factors, the gamma camera, the triphasic bone scan and the ultimate diagnostic principle of the bone scan. Clinical applications of bone scanning in otolaryngology are then presented in three sections. Proven areas of application include the evaluation of malignant tumors of the head and neck, the diagnosis of temporomandibular joint disorders, the diagnosis of facial fractures, the evaluation of osteomyelitis, nuclear medicine imaging of the larynx, and the assessment of systemic disease. Areas of adjunctive or supplementary value are also noted, such as diagnostic imaging of meningioma. Finally, areas of marginal value in the application of bone scanning are described.

  14. MIMIC: An Innovative Methodology for Determining Mobile Laser Scanning System Point Density

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2014-08-01

    Full Text Available Understanding how various Mobile Mapping System (MMS) laser hardware configurations and operating parameters exert different influences on point density is important for assessing system performance, which in turn facilitates system design and MMS benchmarking. Point density also influences data processing, as objects that can be recognised using automated algorithms generally require a minimum point density. Although obtaining the necessary point density impacts on hardware costs, survey time and data storage requirements, a method for accurately and rapidly assessing MMS performance is lacking for generic MMSs. We have developed a method for quantifying point clouds collected by an MMS with respect to known objects at specified distances using 3D surface normals, 2D geometric formulae and line drawing algorithms. These algorithms were combined in a system called the Mobile Mapping Point Density Calculator (MIMIC) and were validated using point clouds captured by both a single scanner and a dual scanner MMS. Results from MIMIC were promising: when considering the number of scan profiles striking the target, the average error equated to less than 1 point per scan profile. These tests highlight that MIMIC is capable of accurately calculating point density for both single and dual scanner MMSs.
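
    A back-of-the-envelope version of the profile-based point-density reasoning might look as follows. The scanner rates, target geometry and the small-angle, beam-perpendicular idealization are all assumptions; MIMIC's actual computation uses 3D surface normals and line-drawing algorithms and is considerably more involved.

    ```python
    import math

    def points_per_profile(target_width, range_m, angular_step_deg):
        """Points from one scan profile falling on a flat target of given width,
        perpendicular to the beam at distance range_m (small-angle idealization)."""
        subtended = 2.0 * math.degrees(math.atan(target_width / (2.0 * range_m)))
        return subtended / angular_step_deg

    def profiles_on_target(target_length, speed_mps, profile_rate_hz):
        """Number of profiles striking the target as the MMS drives past it."""
        return profile_rate_hz * target_length / speed_mps

    # Hypothetical scanner: 100 profiles/s, 0.05 deg angular step, van at 10 m/s
    print(f"{points_per_profile(1.0, 20.0, 0.05):.1f} points per profile")
    print(f"{profiles_on_target(2.0, 10.0, 100.0):.0f} profiles on target")
    ```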

  15. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  16. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  17. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  18. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  19. Benchmarking analogue models of brittle thrust wedges

    Science.gov (United States)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing

  20. Ground truth and benchmarks for performance evaluation

    Science.gov (United States)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  1. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    Science.gov (United States)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from war-time Europe. What remained was to convert the wartime machinery into peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components of our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has "flattened" the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  2. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
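
    Two of the performance metrics named above, the centered root mean square error and the error in linear trend estimates, are compact enough to state in code. The sketch below is a generic rendering, not the HOME project's implementation.

    ```python
    import numpy as np

    def centered_rmse(homogenized, truth):
        """Centered RMSE: anomalies are taken relative to each series' own
        mean, so a constant offset does not count as error."""
        h = homogenized - homogenized.mean()
        t = truth - truth.mean()
        return np.sqrt(np.mean((h - t) ** 2))

    def trend_error(homogenized, truth):
        """Difference in fitted linear trend (per time step)."""
        x = np.arange(len(truth))
        return np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0]
    ```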

  3. Scanning laser Doppler vibrometry

    DEFF Research Database (Denmark)

    Brøns, Marie; Thomsen, Jon Juel

    With a Scanning Laser Doppler Vibrometer (SLDV) a vibrating surface is automatically scanned over predefined grid points, and data processed for displaying vibration properties like mode shapes, natural frequencies, damping ratios, and operational deflection shapes. Our SLDV – a PSV-500H from...

  4. Frequency scanning microstrip antennas

    DEFF Research Database (Denmark)

    Danielsen, Magnus; Jørgensen, Rolf

    1979-01-01

    The principles of using radiating microstrip resonators as elements in a frequency scanning antenna array are described. The resonators are cascade-coupled. This gives a scan of the main lobe due to the phase-shift in the resonator in addition to that created by the transmission line phase...

  5. Optical Scanning Applications.

    Science.gov (United States)

    Wagner, Hans

    The successful use of optical scanning at the University of the Pacific (UOP) indicates that such techniques can simplify a number of administrative data processing tasks. Optical scanning is regularly used at UOP to assist with data processing in the areas of admissions, registration and grade reporting and also has applications for other tasks…

  6. Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.

    Science.gov (United States)

    Belloir, C; Stanford, C; Soares, A

    2015-01-01

    Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs), helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical and mechanical consumptions, all of which can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked. Both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes; Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 for Site 2. Mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3), and chemical input was significant for Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
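
    The aggregation of the different energy types into a single kWh/m3 benchmark can be sketched as below. Only the aeration and mechanical figures come from the study; the "other" component for Site 1 is an assumed residual chosen so the illustrative components sum to the reported 2.32 kWh/m3.

    ```python
    def energy_intensity(components_kwh_per_m3):
        """Sum per-m3 energy inputs of all types into one benchmark figure."""
        return sum(components_kwh_per_m3.values())

    # Site 1: aeration and mechanical from the study; "other" is an
    # assumed residual so the example total matches 2.32 kWh/m3.
    site1 = {"aeration": 2.08, "mechanical": 0.212, "other": 0.028}
    print(f"Site 1: {energy_intensity(site1):.2f} kWh/m3")
    ```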

  7. Science Driven Supercomputing Architectures: Analyzing Architectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, and to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) A suite of representative applications; (2) A set of application kernels; and (3) Benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be outside the interest of non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer


  9. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of the neutron transport calculations and may help in designing high performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of the available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which allows a very precise representation of the complex VHTRC geometry to be obtained, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff discrepancies show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.
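
    The 1 σ agreement check described above is straightforward to express; the keff values and experimental uncertainty in the sketch are placeholders, not the VHTRC results.

    ```python
    def within_one_sigma(k_calc, k_exp, sigma_exp):
        """True if a calculated keff agrees with experiment within 1 sigma."""
        return abs(k_calc - k_exp) <= sigma_exp

    # Hypothetical values for illustration only
    k_exp, sigma = 1.0000, 0.0015
    for name, k_calc in [("ENDF/B-VII.1", 1.00052), ("JEFF-3.2", 1.00121)]:
        print(name, k_calc, within_one_sigma(k_calc, k_exp, sigma))
    ```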

  10. LIDAR COMBINED SCANNING UNIT

    Directory of Open Access Journals (Sweden)

    V. V. Elizarov

    2016-11-01

    Full Text Available Subject of Research. The results of the development of a lidar combined scanning unit for locating hydrocarbon leaks are presented. The unit enables high-speed scanning of the investigated space in wide and narrow angular fields. Method. Scanning in the wide angular field is performed along a single-line scanning path by means of a movable aluminum mirror with a frequency of 20 Hz and a swing amplitude of 20 degrees. Narrow-field scanning is performed along a spiral path by the deflector. The beam is deflected by rotating the optical wedges that form part of the deflector at an angle of ±50. The scanning unit is controlled by a specialized software product written in the C# programming language. Main Results. This scanning unit allows the investigated area to be scanned at a distance of 50-100 m with a spatial resolution at the level of 3 cm. The positioning accuracy of the laser beam in space is 15'. The developed scanning unit makes it possible to sweep the entire investigated area in no more than 1 ms at a rotation frequency of each wedge from 50 to 200 Hz. The problem of unambiguously determining the geographical coordinates of the beam in space is solved at the software level from the rotation angles of the mirror and the optical wedges. The lidar system coordinates are determined by means of GPS. Practical Relevance. The development results open the possibility of increasing the spatial resolution of the scanning systems of a wide range of lidars and can provide high positioning accuracy of the laser beam in space.
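
    The two-wedge deflector is essentially a Risley prism pair. In the first-order thin-prism approximation each wedge contributes a fixed angular deviation in the direction set by its rotation angle, so two wedges spinning at different rates trace a spiral (rosette) pattern. The deviation angle and rotation rates below are assumptions for illustration.

    ```python
    import numpy as np

    def risley_direction(theta1, theta2, delta=np.radians(5.0)):
        """First-order thin-prism model: each wedge deflects the beam by a
        fixed angle delta in the direction of its current rotation angle."""
        dx = delta * (np.cos(theta1) + np.cos(theta2))
        dy = delta * (np.sin(theta1) + np.sin(theta2))
        return dx, dy  # small-angle horizontal/vertical offsets, rad

    # Wedges spinning at 50 Hz and 200 Hz (the stated range limits)
    t = np.linspace(0.0, 0.02, 2000)
    x, y = risley_direction(2 * np.pi * 50 * t, 2 * np.pi * 200 * t)
    print(f"max deviation: {np.degrees(np.hypot(x, y).max()):.1f} deg")
    ```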

  11. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, the practices carried out in the waste management field at the Spanish nuclear power plants were evaluated following the benchmarking methodology. This process has allowed the identification of aspects for improving the waste treatment processes, reducing the volume of waste, reducing management costs, and establishing management routes for those waste streams that do not yet have one. (Author)

  12. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and ‘s

  13. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

    Benchmark results of the Dutch PINK working group on the PWR and BWR pin cell calculational benchmarks as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: part of the results is satisfactory, while some other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs.

  14. Benchmarking ~(232)Th Evaluations With KBR and Thor Experiments

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The n+232Th evaluations from CENDL-3.1, ENDF/B-VII.0, JENDL-3.3 and JENDL-4.0 were tested against the KBR series and the THOR benchmark from the ICSBEP Handbook. THOR is a Plutonium-Metal-Fast (PMF) criticality benchmark reflected with metallic thorium.

  15. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    Science.gov (United States)

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  16. 42 CFR 457.420 - Benchmark health benefits coverage.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 457.420 Section 457.420 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... State Plan Requirements: Coverage and Benefits § 457.420 Benchmark health benefits coverage....

  17. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in the use of the benchmark strategy when comparing fractions…

  18. Benchmark en Beleidstoets voor de Drinkwatersector. Indicatoren Waterkwaliteit en Milieu

    NARCIS (Netherlands)

    Versteegh JFM; Tangena BH; Mulschlegel JHC; IMD

    2004-01-01

    The study was prompted by the intention of the Minister of VROM to include the benchmark in the Water Supply Act (Waterleidingwet). This mandatory benchmark will consist of four components: water quality, service, environment and finances. Since 1999 the drinking water sector has carried out, on a voluntary basis, a bench

  19. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  20. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Science.gov (United States)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  1. A Competitive Benchmarking Study of Noncredit Program Administration.

    Science.gov (United States)

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  2. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David;

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  3. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Determination of energy cost benchmarks. 1709.5... SERVICE, DEPARTMENT OF AGRICULTURE ASSISTANCE TO HIGH ENERGY COST COMMUNITIES General Requirements § 1709.5 Determination of energy cost benchmarks. (a) The Administrator shall establish, using the...

  4. A Protein Classification Benchmark collection for machine learning

    NARCIS (Netherlands)

    Sonego, P.; Pacurar, M.; Dhir, S.; Kertész-Farkas, A.; Kocsor, A.; Gáspári, Z.; Leunissen, J.A.M.; Pongor, S.

    2007-01-01

    Protein classification by machine learning algorithms is now widely used in structural and functional annotation of proteins. The Protein Classification Benchmark collection (http://hydra.icgeb.trieste.it/benchmark) was created in order to provide standard datasets on which the performance of machin

  5. What Are the ACT College Readiness Benchmarks? Information Brief

    Science.gov (United States)

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  6. Selecting indicators for international benchmarking of radiotherapy centres

    NARCIS (Netherlands)

    Lent, van W.A.M.; Beer, de R. D.; Triest, van B.; Harten, van W.H.

    2013-01-01

    Introduction: Benchmarking can be used to improve hospital performance. It is, however, not easy to develop a concise and meaningful set of indicators on aspects related to operations management. We developed an indicator set for managers and evaluated its use in an international benchmark of radiothe

  7. 29 CFR 1952.103 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.103 Section 1952.103... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  8. Developing Benchmarks to Measure Teacher Candidates' Performance

    Science.gov (United States)

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  9. 29 CFR 1952.263 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.263 Section 1952.263... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  10. 29 CFR 1952.363 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.363 Section 1952.363... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were required for...

  11. 29 CFR 1952.153 - Compliance staffing benchmarks.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.153 Section 1952.153....153 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (“benchmarks”) necessary for a “fully effective” enforcement program were...

  12. Benchmarking Mentoring Practices: A Case Study in Turkey

    Science.gov (United States)

    Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse

    2010-01-01

    Throughout the world standards have been developed for teaching in particular key learning areas. These standards also present benchmarks that can assist to measure and compare results from one year to the next. There appear to be no benchmarks for mentoring. An instrument devised to measure mentees' perceptions of their mentoring in primary…

  13. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different ty

  14. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected...

  15. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
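
    The elementary measurement underneath such a benchmark is a timed ping-pong exchange. The sketch below uses mpi4py rather than SKaMPI's own C implementation; the payload size and repetition count are arbitrary choices.

    ```python
    # Run with two ranks, e.g.: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    msg = np.zeros(1 << 16, dtype='b')   # 64 KiB payload (assumed size)
    reps = 100

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(msg, dest=1)
            comm.Recv(msg, source=1)
        elif rank == 1:
            comm.Recv(msg, source=0)
            comm.Send(msg, dest=0)
    t1 = MPI.Wtime()
    if rank == 0:
        print(f"mean round trip: {(t1 - t0) / reps * 1e6:.1f} us")
    ```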

  16. Intra and inter-organizational learning from benchmarking IS services

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Kræmmergaard, Pernille; Hansen, Bettina

    2016-01-01

    in benchmarking their IS services and functions since 2006. Particularly, this research tackled existing IS benchmarking approaches and methods by turning to a learning-oriented perspective and by empirically exploring the dynamic process of intra and inter-organizational learning from benchmarking IS/IT services......This paper reports a case study of benchmarking IS services in Danish municipalities. Drawing on Holmqvist’s (2004) organizational learning model of exploration and exploitation, the paper explores intra and inter-organizational learning dynamics among Danish municipalities that are involved....... The paper also makes a contribution by emphasizing the importance of informal cross-municipality consortiums to facilitate learning and experience sharing across municipalities. The findings of the case study demonstrated that the IS benchmarking scheme is relatively successful in sharing good practices...

  17. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    Science.gov (United States)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities were demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  18. Laser Scanning in Forests

    Directory of Open Access Journals (Sweden)

    Håkan Olsson

    2012-09-01

    Full Text Available The introduction of Airborne Laser Scanning (ALS) to forests has been revolutionary during the last decade. This development was facilitated by combining earlier ranging lidar discoveries [1–5], with experience obtained from full-waveform ranging radar [6,7] to new airborne laser scanning systems which had components such as a GNSS receiver (Global Navigation Satellite System), IMU (Inertial Measurement Unit) and a scanning mechanism. Since the first commercial ALS in 1994, new ALS-based forest inventory approaches have been reported feasible for operational activities [8–12]. ALS is currently operationally applied for stand level forest inventories, for example, in Nordic countries. In Finland alone, the adoption of ALS for forest data collection has led to an annual savings of around 20 M€/year, and the work is mainly done by companies instead of governmental organizations. In spite of the long implementation times and there being a limited tradition of making changes in the forest sector, laser scanning was commercially and operationally applied after about only one decade of research. When analyzing high-ranked journal papers from ISI Web of Science, the topic of laser scanning of forests has been the driving force for the whole laser scanning research society over the last decade. Thus, the topic “laser scanning in forests” has provided a significant industrial, societal and scientific impact. [...

  19. Resonant scanning mechanism

    Science.gov (United States)

    Wallace, John; Newman, Mike; Gutierrez, Homero; Hoffman, Charlie; Quakenbush, Tim; Waldeck, Dan; Leone, Christopher; Ostaszewski, Miro

    2014-10-01

    Ball Aerospace & Technologies Corp. developed a Resonant Scanning Mechanism (RSM) capable of combining a 250- Hz resonant scan about one axis with a two-hertz triangular scan about the orthogonal axis. The RSM enables a rapid, high-density scan over a significant field of regard (FOR) while minimizing size, weight, and power requirements. The azimuth scan axis is bearing mounted allowing for 30° of mechanical travel, while the resonant elevation axis is flexure and spring mounted with five degrees of mechanical travel. Pointing-knowledge error during quiescent static pointing at room temperature across the full range is better than 100 μrad RMS per axis. The compact design of the RSM, roughly the size of a soda can, makes it an ideal mechanism for use on low-altitude aircraft and unmanned aerial vehicles. Unique aspects of the opto-mechanical design include i) resonant springs which allow for a high-frequency scan axis with low power consumption; and ii) an independent lower-frequency scan axis allowing for a wide FOR. The pointing control system operates each axis independently and employs i) a position loop for the azimuth axis; and ii) a unique combination of parallel frequency and amplitude control loops for the elevation axis. All control and pointing algorithms are hosted on a 200-MHz microcontroller with 516 KB of RAM on a compact 3"×4" digital controller, also of Ball design.
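
    The combined scan pattern, a 250 Hz resonant sinusoid on the elevation axis against a 2 Hz triangular sweep in azimuth, is easy to visualize numerically. The ±15° and ±2.5° amplitudes below are inferred from the stated mechanical travel and are assumptions; the actual commanded amplitudes may differ.

    ```python
    import numpy as np

    t = np.linspace(0.0, 0.5, 50_000)   # half a second of scanning

    # 2 Hz triangular azimuth sweep over an assumed +/-15 deg field of regard
    phase = (2.0 * t) % 1.0
    azimuth = 15.0 * (4.0 * np.abs(phase - 0.5) - 1.0)

    # 250 Hz resonant elevation scan over an assumed +/-2.5 deg of travel
    elevation = 2.5 * np.sin(2.0 * np.pi * 250.0 * t)

    # 125 resonant cycles per azimuth period -> dense raster-like coverage
    print(f"{250.0 / 2.0:.0f} elevation cycles per azimuth period")
    ```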

  20. The New Weather Radar for America's Space Program in Florida: A Temperature Profile Adaptive Scan Strategy

    Science.gov (United States)

    Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.

    2009-01-01

    A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and evaluation of Lightning Launch Commit Criteria (LCC), which are the weather rules to avoid lightning strikes to in-flight rockets. The 0 °C to -20 °C layer is vital since most generation of electric charge occurs within it, so it is critical in evaluating Lightning LCC and in forecasting lightning. These are two of the most important duties of 45 WS. While the fixed scan strategy that covers most of the climatological variation of the 0 °C to -20 °C levels with high resolution ensures that these critical temperatures are well covered most of the time, it also means that on any particular day the radar is spending precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly Interactive Data Language (IDL) computer program that will automatically generate optimized radar scan strategies that adapt to user input of the temperature profile and other important parameters. By using only the required scan angles output by the temperature profile adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality are possible, while maintaining high resolution at the critical temperature levels. The temperature profile adaptive technique will also take into account earth curvature and refraction
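
    The geometric core of such an adaptive strategy is the standard 4/3-earth beam-height equation and its inversion for elevation angle. In the sketch below the antenna height and the altitude of the -20 °C level are assumed values, and simple bisection stands in for whatever root-finding the actual IDL program uses.

    ```python
    import math

    K = 4.0 / 3.0                      # standard atmospheric refraction model
    RE = K * 6.371e6                   # effective earth radius, m

    def beam_height(range_m, elev_deg, antenna_height=20.0):
        """Height of the beam center above the radar site (4/3-earth model)."""
        th = math.radians(elev_deg)
        return (math.sqrt(range_m ** 2 + RE ** 2
                          + 2.0 * range_m * RE * math.sin(th))
                - RE + antenna_height)

    def elevation_for_height(target_height, range_m):
        """Elevation angle that puts the beam center at target_height (m) at
        the given range; solved by bisection since beam_height is monotonic."""
        lo, hi = -1.0, 45.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if beam_height(range_m, mid) < target_height:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # e.g. aim the beam at an assumed -20 C level of 7.5 km at 40 km range
    print(f"{elevation_for_height(7500.0, 40000.0):.2f} deg")
    ```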