WorldWideScience

Sample records for benchmark parameter scan

  1. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so that they have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  2. Benchmarking Naval Shipbuilding with 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    Science.gov (United States)

    2015-09-20

    can be developed to retrofit projects. • Asset and Facility Management/Documentation. With 3D documentation of complex factory and plant ...

  3. Development of a benchmark parameter scan for Higgs bosons in the NMSSM Model and a study of the sensitivity for H→AA→4τ in vector boson fusion with the ATLAS detector

    Energy Technology Data Exchange (ETDEWEB)

    Rottlaender, Iris

    2008-08-15

    An evaluation of the discovery potential for NMSSM Higgs bosons of the ATLAS experiment at the LHC is presented. For this purpose, seven two-dimensional benchmark planes in the six-dimensional parameter space of the NMSSM Higgs sector are defined. These planes include different types of phenomenology for which the discovery of NMSSM Higgs bosons is especially challenging and which are considered typical for the NMSSM. They are subsequently used to give a detailed evaluation of the Higgs boson discovery potential based on Monte Carlo studies from the ATLAS collaboration. Afterwards, the possibility of discovering NMSSM Higgs bosons via the H₁→A₁A₁→4τ→4μ+8ν decay chain and with the vector boson fusion production mode is investigated. A particular emphasis is put on the mass reconstruction from the complex final state. Furthermore, a study of the jet reconstruction performance at the ATLAS experiment, which is of crucial relevance for vector boson fusion searches, is presented. A good detectability of the so-called tagging jets that originate from the scattered partons in the vector boson fusion process is of critical importance for an early Higgs boson discovery in many models and also within the framework of the NMSSM. (orig.)

  4. Multi-parameters scanning in HTI media

    KAUST Repository

    Masmoudi, Nabil

    2014-08-05

    Building credible anisotropy models is crucial in imaging. One way to estimate anisotropy parameters is to relate them analytically to traveltime, which is challenging in inhomogeneous media. Using perturbation theory, we develop traveltime approximations for transversely isotropic media with horizontal symmetry axis (HTI) as explicit functions of the anellipticity parameter η and the symmetry axis azimuth ϕ in inhomogeneous background media. Specifically, our expansion assumes an inhomogeneous elliptically anisotropic background medium, which may be obtained from well information and stacking velocity analysis in HTI media. This formulation has advantages on two fronts: on one hand, it alleviates the computational complexity associated with solving the HTI eikonal equation, and on the other hand, it provides a mechanism to scan for the best fitting parameters η and ϕ without the need for repetitive modeling of traveltimes, because the traveltime coefficients of the expansion are independent of the perturbed parameters η and ϕ. The accuracy of our expansion is further enhanced by the use of the Shanks transform. We show the effectiveness of our scheme with tests on a 3D model and we propose an approach for multi-parameter scanning in TI media.

  5. Scanning anisotropy parameters in complex media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-03-21

    Parameter estimation in an inhomogeneous anisotropic medium offers many challenges; chief among them is the trade-off between inhomogeneity and anisotropy. It is especially hard to estimate the anisotropy anellipticity parameter η in complex media. Using perturbation theory and Taylor’s series, I have expanded the solutions of the anisotropic eikonal equation for transversely isotropic (TI) media with a vertical symmetry axis (VTI) in terms of the independent parameter η from a generally inhomogeneous elliptically anisotropic medium background. This new VTI traveltime solution is based on a set of precomputed perturbations extracted from solving linear partial differential equations. The traveltimes obtained from these equations serve as the coefficients of a Taylor-type expansion of the total traveltime in terms of η. Shanks transform is used to predict the transient behavior of the expansion and improve its accuracy using fewer terms. A homogeneous medium simplification of the expansion provides classical nonhyperbolic moveout descriptions of the traveltime that are more accurate than other recently derived approximations. In addition, this formulation provides a tool to scan for anisotropic parameters in a generally inhomogeneous medium background. A Marmousi test demonstrates the accuracy of this approximation. For a tilted axis of symmetry, the equations are still applicable with a slightly more complicated framework because the vertical velocity and δ are not readily available from the data.
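
    As a side note for this and the preceding record, the following is a generic sketch (not copied from the papers) of the machinery both abstracts describe: a Taylor-type expansion of the traveltime τ in the anellipticity parameter η around the elliptic background, accelerated by the standard first-order Shanks transform applied to the partial sums A_n of the series:

        \tau(\eta) \approx \tau_0 + \tau_1\,\eta + \tau_2\,\eta^2,
        \qquad
        S(A_n) = \frac{A_{n+1} A_{n-1} - A_n^2}{A_{n+1} + A_{n-1} - 2 A_n}

    Because the coefficient fields τ_i are precomputed and independent of η, evaluating the series for many trial η values is cheap, which is what makes scanning for the best-fitting parameters feasible without repeated eikonal solves.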

  6. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  7. Parameter Scan for the CLIC Damping Rings

    CERN Document Server

    Papaphilippou, Y; Korostelev, M

    2008-01-01

    Triggered by the RF frequency reduction of the CLIC main linac cavities, the damping ring parameters had to be reevaluated and the rings' performance adapted to the new luminosity requirements. In view of a staged approach for reaching the ultimate energy of the collider, the dependence of the rings' output emittances on Intrabeam Scattering is evaluated with respect to different beam characteristics such as bunch population, beam energy, coupling and longitudinal parameters.

  8. New insights on crystallization in a benchmark organic photovoltaic system by fast scanning chip calorimetry

    Science.gov (United States)

    van den Brande, Niko; van Mele, Bruno; Ediger, Mark

    Using the advanced thermal analysis technique of Fast Scanning Chip Calorimetry, which relies on thin membrane chips, a methodology was developed that allows a true isothermal study, i.e. avoiding non-isothermal effects that may alter metastable structures, by employing scanning rates of 30,000 K s⁻¹. Isothermally formed structures, which were not observable before, were now conserved and analyzed in the subsequent heating. This methodology was used to investigate the P3HT/PC61BM (poly(3-hexylthiophene)/[6,6]-phenyl-C61-butyric acid methyl ester) benchmark system used in organic photovoltaics, as well as its pure components. By applying the methodology to P3HT, the bell-shaped curve of isothermal crystallization rate was constructed for a P3HT layer with a thickness of ca. 550. Surprisingly, the PC61BM acceptor is capable of crystallizing significantly below its glass transition, a type of behavior seen before for several non-polymeric organic glasses.

  9. Benchmarking Parameter-free AMaLGaM on Functions With and Without Noise

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J. Grahl; D. Thierens (Dirk)

    2013-01-01

    We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black

  10. On material modelling, identification of material parameters and application to two benchmark exercises

    International Nuclear Information System (INIS)

    Laemmer, H.; Diegele, E.

    2000-01-01

    The model of finite deformation thermoviscoplasticity presented in 1997, together with the identification of material parameters given in 1998, was applied to two benchmark exercises within the REVISA (Reactor Vessel Integrity in Severe Accidents) project in 1999. Starting from a simplified version of the theory, which includes only the kinematic hardening assumption, new sets of parameters were identified for 16MND5 reactor pressure vessel steel from simple tensile and creep tests. The model, implemented in the ABAQUS finite element code, was applied to two exercises. The first was a benchmark exercise which follows the loading conditions of the RUPTURE experiment number 15 as performed at CEA. The numerical analysis was compared to the experimental data. The second example was a scenario of a small hot spot with external cooling by radiation. (orig.)

  11. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results for the pressure increase, whereas the temperature results show a wider dispersion. Concerning the axial and radial flame velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  12. Key performance indicators for stroke from the Ministry of Health of Brazil: benchmarking and indicator parameters.

    Science.gov (United States)

    Lange, Marcos C; Braga, Gabriel Pereira; Nóvak, Edison M; Harger, Rodrigo; Felippe, Maria Justina Dalla Bernardina; Canever, Mariana; Dall'Asta, Isabella; Rauen, Jordana; Bazan, Rodrigo; Zetola, Viviane

    2017-06-01

    All 16 KPIs were analyzed, including the percentage of patients admitted to the stroke unit, venous thromboembolism prophylaxis in the first 48 hours after admission, pneumonia and hospital mortality due to stroke, and hospital discharge on antithrombotic therapy in patients without cardioembolic mechanism. Both centers admitted over 80% of the patients in their stroke unit. The incidence of venous thromboembolism prophylaxis was > 85%, that of in-hospital pneumonia was 70%. Our results suggest using the parameters of all of the 16 KPIs required by the Ministry of Health of Brazil, and the present results for the two stroke units for future benchmarking.

  13. Multipinhole SPECT helical scan parameters and imaging volume.

    Science.gov (United States)

    Yao, Rutao; Deng, Xiao; Wei, Qingyang; Dai, Tiantian; Ma, Tianyu; Lecomte, Roger

    2015-11-01

    The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half of, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
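
    A minimal sketch of the step-size arithmetic described above, assuming the Nyquist sampling interval is half the system resolution and converting the transverse interval at the edge of the field of view into an angular step. The function name and the numeric inputs are invented for illustration and are not taken from the paper.

        import math

        def nyquist_step_sizes(resolution_mm, fov_radius_mm):
            """Nyquist-derived axial and angular steps for a helical SPECT scan."""
            nyquist_mm = resolution_mm / 2.0                  # Nyquist sampling interval
            axial_step_mm = nyquist_mm                        # axial translation per step
            angular_step_deg = math.degrees(nyquist_mm / fov_radius_mm)
            return axial_step_mm, angular_step_deg

        ax, ang = nyquist_step_sizes(resolution_mm=1.6, fov_radius_mm=15.0)
        print(f"Nyquist-derived: axial {ax:.2f} mm, angular {ang:.2f} deg")
        # The paper's empirical optimum: half the Nyquist-derived axial step,
        # about twice the Nyquist-derived angular step.
        print(f"Reported optimum: axial {ax/2:.2f} mm, angular {2*ang:.2f} deg")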

  14. Scanning anisotropy parameters in horizontal transversely isotropic media

    KAUST Repository

    Masmoudi, Nabil

    2016-10-12

    The horizontal transversely isotropic model, with arbitrary symmetry axis orientation, is the simplest effective representative that explains the azimuthal behaviour of seismic data. Estimating the anisotropy parameters of this model is important in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous elliptic background model (which might be obtained from velocity analysis and well velocities). This is accomplished through a Taylor's series expansion of the travel-time solution (of the eikonal equation) as a function of parameter η and azimuth angle ϕ. The accuracy of the travel time expansion is enhanced by the use of the Shanks transform. This results in an accurate approximation of the solution of the non-linear eikonal equation and provides a mechanism to scan simultaneously for the best fitting effective parameters η and ϕ, without the need for repetitive modelling of travel times. The analysis of the travel time sensitivity to parameters η and ϕ reveals that travel times are more sensitive to η than to the symmetry axis azimuth ϕ. Thus, η is better constrained from travel times than the azimuth. Moreover, the two-parameter scan in the homogeneous case shows that errors in the background model affect the estimation of η and ϕ differently. While a gradual increase in errors in the background model leads to increasing errors in η, inaccuracies in ϕ, on the other hand, depend on the background model errors. We also propose a layer-stripping method, valid for a stack of arbitrarily oriented symmetry axis horizontal transversely isotropic layers, to convert the effective parameters to the interval layer values.

  15. Benchmarking Naval Shipbuilding With 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    Science.gov (United States)

    2016-04-30

    laser scanning services were used for shipcheck of a three-story hangar bay on the USS Abraham Lincoln (CVN 72). Scanning the HVAC, piping, fuel ... referred to as 3D printing. In the automotive industry, Ford Motor Co. uses 3D printing in several areas, including the tooling used to create production

  16. A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography B-scans.

    Science.gov (United States)

    Girish, G N; Anima, V A; Kothari, Abhishek R; Sudeep, P V; Roychowdhury, Sohini; Rajan, Jeny

    2018-01-01

    Retinal cysts are formed by accumulation of fluid in the retina, caused by leakages from inflammation or vitreous fractures. Analysis of the retinal cystic spaces holds significance in the detection and treatment of several ocular diseases like age-related macular degeneration and diabetic macular edema. Thus, segmentation of intra-retinal cysts and quantification of cystic spaces are vital for retinal pathology and severity detection. In recent years, automated segmentation of intra-retinal cysts using optical coherence tomography B-scans has gained significant importance in the field of retinal image analysis. The objective of this paper is to compare different intra-retinal cyst segmentation algorithms for comparative analysis and benchmarking purposes. In this work, we employ a modular approach for standardizing the different segmentation algorithms. Further, we analyze the variations in automated cyst segmentation performance and method scalability across image acquisition systems by using the publicly available cyst segmentation challenge dataset (OPTIMA cyst segmentation challenge). Several key automated methods are comparatively analyzed using quantitative and qualitative experiments. Our analysis demonstrates the significance of variations in signal-to-noise ratio (SNR), retinal layer morphology and post-processing steps on the automated cyst segmentation processes. This benchmarking study provides insights into the scalability of automated processes across vendor-specific imaging modalities, to provide guidance for retinal pathology diagnostics and treatment processes. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander

    2016-01-01

    This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed, and recommended options are given … in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated a comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher …

  18. Calibration of Mounted Parameter for Ship-borne 3D Laser Scanning System

    Directory of Open Access Journals (Sweden)

    XU Wenxue

    2018-02-01

    Ship-borne 3D laser scanning technology has vital theoretical significance and practical value in marine surveying and mapping. As one of the key steps of ship-borne 3D laser scanning, calibration of the mounted parameters is an urgent problem to be solved. This paper proposes a calibration method for the mounted parameters of a ship-borne 3D laser scanning system that requires no control points. Based on corresponding points in the overlapping area, a mounted-parameter calibration model of the scanning system is built from the time and spatial registration model of the ship-borne 3D laser scanning data. Finally, differential least squares are applied to obtain the optimum mounted parameters. Experiments demonstrate the reasonableness and effectiveness of this method; the quality of the scanning data can be significantly improved.

  19. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    DEFF Research Database (Denmark)

    Eiríksson, Eyþór Rúnar; Wilm, Jakob; Pedersen, David Bue

    2016-01-01

    … measure is the established VDI/VDE 2634 (Part 2) guideline using precision-made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure …

  20. Success and complication parameters for laparoscopic surgery: a benchmark for natural orifice transluminal endoscopic surgery.

    Science.gov (United States)

    Gutt, C N; Müller-Stich, B P; Reiter, M A

    2009-01-01

    Natural orifice transluminal endoscopic surgery (NOTES) currently represents an issue of particular interest among surgeons and gastroenterologists. The principle of NOTES is the reduction of the operative trauma by using natural orifices of the human body to access the abdominal cavity. Regarding the tendency to further minimization of the surgical trauma, NOTES may be considered as a logical step in the evolution of minimally invasive surgery. Pioneers of this technique regard NOTES as the successor to laparoscopic surgery in enabling surgeons and gastroenterologists to conduct scarless surgery. This might not only lead to better cosmetic results but also enhance the prospect of decreases in wound infections and incisional hernias, as well as reducing operative stress, postoperative immobility, and pain. In this article the authors collect and review the existing literature concerning NOTES and establish a benchmark for the assessment of this new technique by stating results from conventional minimally invasive surgery as the gold standard. It is shown that publications investigating possible advantages or long-term results of NOTES are scarce. However, the investigation and verification of potential advantages and disadvantages represent the most important step in the development of a new technique. Only proven advantages would justify the broad implementation of a new technique in relation to its specific risks. Conventional laparoscopic surgery as the current standard of minimally invasive surgery will be the benchmark for NOTES with regard to most issues. Superiority of NOTES in at least several issues would be the best argument for its further implementation into clinical practice.

  1. Benchmarking environmental and operational parameters through eco-efficiency criteria for dairy farms.

    Science.gov (United States)

    Iribarren, Diego; Hospido, Almudena; Moreira, María Teresa; Feijoo, Gumersindo

    2011-04-15

    Life Cycle Assessment (LCA) is often used for the environmental evaluation of agri-food systems due to its holistic perspective. In particular, the assessment of milk production at farm level requires the evaluation of multiple dairy farms to guarantee the representativeness of the study when a regional perspective is adopted. This article shows the joint implementation of LCA and Data Envelopment Analysis (DEA) in order to avoid the formulation of an average farm, therefore preventing the standard deviations associated with the use of average inventory data, while attaining the characterization and benchmarking of the operational and environmental performance of dairy farms. Within this framework, 72 farms located in Galicia (NW Spain) were subject to an LCA+DEA study, which led to the identification of those farms with an efficient operation. Furthermore, target input consumption levels were benchmarked for each inefficient farm, and the corresponding target environmental impacts were calculated so that eco-efficiency criteria were verified. Thus, average reductions of up to 38% were found for input consumption levels, leading to impact reductions above 20% for every environmental impact category. Finally, the economic savings arising from efficient farming practices were also estimated. Economic savings of up to €0.13 per liter of raw milk were calculated, which means extra profits of up to 40% of the final raw milk price. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths … method based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model …

  3. Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes

    Science.gov (United States)

    Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.

    2018-03-01

    Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is in demand. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Square (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models in the literature, Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
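
    As an illustration of the kind of online identifier the review discusses, here is a minimal recursive least squares (RLS) sketch with exponential forgetting for a linear-in-parameters voltage model. The regressor choice (a crude Tafel-style form), the forgetting factor and the sample data are assumptions for the example, not the models benchmarked in the paper.

        import numpy as np

        class RLS:
            """Recursive least squares with exponential forgetting."""
            def __init__(self, n_params, lam=0.99):
                self.theta = np.zeros(n_params)      # parameter estimates
                self.P = np.eye(n_params) * 1e3      # covariance (large = uninformed)
                self.lam = lam                       # forgetting factor

            def update(self, phi, y):
                phi = np.asarray(phi, dtype=float)
                k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
                self.theta = self.theta + k * (y - phi @ self.theta) # correction
                self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
                return self.theta

        # Toy usage: V = E0 - R*i - b*log(i), with made-up (current, voltage) samples.
        rls = RLS(3)
        for i, v in [(10.0, 0.78), (20.0, 0.74), (40.0, 0.69)]:
            theta = rls.update([1.0, -i, -np.log(i)], v)
        print(theta)   # running estimates of [E0, R, b]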

  4. A simulation study on proton computed tomography (CT) stopping power accuracy using dual energy CT scans as benchmark

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild

    2015-01-01

    Dual energy CT and proton CT (currently under development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods: A CT calibration phantom and an abdomen cross-section phantom containing inserts were scanned with dual energy and single energy CT with a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans, and compared with the ground truth stopping power from the phantoms. Results: Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors …

  5. Efficient cascaded parameter scan approach for studying top-off safety in storage rings

    Directory of Open Access Journals (Sweden)

    Yongjun Li

    2011-03-01

    We introduce a new algorithm, which we call the cascaded parameter scan method, to efficiently carry out the scan over magnet parameters in the safety analysis for top-off injection in synchrotron radiation storage rings. In top-off safety analysis, one must track particles populating phase space through a beam line containing magnets and apertures and clearly demonstrate that, for all possible magnet settings and errors, all particles are lost on scrapers within the properly shielded region. In the usual approach, if one considers m magnets and scans each magnet through n setpoints, then one must carry out n^m tracking runs. In the cascaded parameter scan method, the number of tracking runs is reduced to n×m. This reduction from n^m to n×m tracking runs greatly reduces the required computation time and allows one to populate phase space more densely and to increase the number n of setpoints scanned for each magnet.
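
    The combinatorial point can be made concrete with a toy sketch. The track() stand-in below and the pooling of survivors between stages are a plausible reading of the cascade, not the authors' exact implementation; the conservative idea is that every particle surviving any setpoint of magnet i is carried forward into the scan of magnet i+1.

        def track(particles, magnet, setpoint):
            """Toy stand-in for one tracking run: keep 'surviving' particles."""
            return {p for p in particles if (p * (magnet + 1) + setpoint) % 7 != 0}

        particles = set(range(1000))
        m, n = 4, 10                                  # magnets, setpoints per magnet

        exhaustive_runs = n ** m                      # usual approach: every combination

        survivors, cascaded_runs = particles, 0
        for magnet in range(m):                       # cascaded approach: n*m runs
            pooled = set()
            for setpoint in range(n):
                pooled |= track(survivors, magnet, setpoint)
                cascaded_runs += 1
            survivors = pooled                        # feed the pooled set downstream

        print(exhaustive_runs, cascaded_runs)         # 10000 vs 40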

  6. Automatic measuring method of catenary geometric parameters based on laser scanning and imaging

    Science.gov (United States)

    Fu, Luhua; Chang, Songhong; Liu, Changjie

    2018-01-01

    The catenary geometric parameters are important factors that affect the safe operation of the railway. Among them, conductor height and stagger value are two key parameters. At present, the two parameters are mainly measured by a laser distance sensor and an angle measuring device with a manual aiming method, with low measuring speed and poor efficiency. In order to improve the speed and accuracy of catenary geometric parameter detection, a new automatic measuring method for the contact wire's parameters, based on laser scanning and imaging, is proposed. The DLT method is used to calibrate the parameters of the linear-array CCD camera. The direction of the scanning laser beam and the spatial coordinates of the starting point of the beam are calculated by a geometric method. Finally, an equation is established using the calibrated parameters and the image coordinates of the imaging point, to solve for the spatial coordinates of the measured point on the contact wire and so calculate conductor height and stagger value. Unlike the traditional hand-held laser phase measuring method, the new method can measure the catenary geometric parameters automatically, without manual aiming. Measurement results show that the accuracy can reach 2 mm.

  7. Reliability of capturing foot parameters using digital scanning and the neutral suspension casting technique

    Directory of Open Access Journals (Sweden)

    Rome Keith

    2011-03-01

    Background: A clinical study was conducted to determine the intra- and inter-rater reliability of digital scanning and the neutral suspension casting technique for measuring six foot parameters. The neutral suspension casting technique is a commonly utilised method for obtaining a negative impression of the foot prior to orthotic fabrication. Digital scanning offers an alternative to the traditional plaster of Paris techniques. Methods: Twenty-one healthy participants volunteered to take part in the study. Six casts and six digital scans were obtained from each participant by two raters of differing clinical experience. The foot parameters chosen for investigation were cast length (mm), forefoot width (mm), rearfoot width (mm), medial arch height (mm), lateral arch height (mm) and forefoot to rearfoot alignment (degrees). Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were calculated to determine the intra- and inter-rater reliability. Measurement error was assessed through the calculation of the standard error of the measurement (SEM) and smallest real difference (SRD). Results: ICC values for all foot parameters using digital scanning ranged between 0.81-0.99 for both intra- and inter-rater reliability. For the neutral suspension casting technique, inter-rater reliability values ranged from 0.57-0.99, with intra-rater reliability values ranging from 0.36-0.99 for rater 1 and 0.49-0.99 for rater 2. Conclusions: The findings of this study indicate that digital scanning is a reliable technique, irrespective of clinical experience, with reduced measurement variability in all foot parameters investigated when compared to neutral suspension casting.

  8. Scanning Electrochemical Microscopy: A Comprehensive Review of Experimental Parameters from 1989 to 2015.

    Science.gov (United States)

    Polcari, David; Dauphin-Ducharme, Philippe; Mauzeroll, Janine

    2016-11-23

    Scanning electrochemical microscopy (SECM) is an electroanalytical scanning probe technique capable of imaging substrate topography and local reactivity with high resolution. Since its inception in 1989, it has expanded into a wide variety of research areas including biology, corrosion, energy, kinetics, instrumental development, and surface modification. In the past 25 years, over 1800 peer-reviewed publications have focused on SECM, including several topical reviews. However, these reviews often omit key details, forcing readers to search the literature. In this review, we provide a comprehensive summary of the experimental parameters (e.g., solvents, probes, and mediators) used in all SECM publications since 1989, irrespective of the application. It can be used to rapidly assess experimental possibilities and make an informed decision about experimental design. In other words, it is a practical guide to SECM.

  9. Benchmarks on effective delayed neutron parameters and reactivity: a Brazilian IPEN/MB-01 contribution to the IRPhE project

    Energy Technology Data Exchange (ETDEWEB)

    Dos Santos, Adimir; Yoichi Ribeiro Kuramoto, Renato; Diniz, Ricardo; Jereza Graciete Simoes de Andrade e Silva, Rogerio; Yamaguchi, Mitsuo [Instituto de Pesquisas Energeticas e Nucleares, IPEN - CNEN/SP, Sao Paulo (Brazil)

    2008-07-01

    The purpose of this work is to present the experimental results of the in-pile experiments performed at the IPEN/MB-01 Reactor for the determination of the effective delayed neutron parameters and reactivity. The methodologies employed were the macroscopic noise in the frequency domain, where the very low frequency range (< 1.0 Hz) was also exploited and analyzed, and the microscopic noise, which is based on the measurement of Rossi-alpha and Feynman-alpha distributions at several subcritical levels. In this last case, a Two-Region Model was developed. The main advantage of these methodologies is to obtain the effective delayed neutron parameters in a purely experimental way, eliminating all parameters that are difficult to measure or calculate. Consequently, the uncertainties associated with these parameters are eliminated and the accuracy of the effective delayed neutron parameters is improved. Both techniques are claimed to be well defined and produce experimental data of very high quality. Finally, it is proposed to assign benchmark values to β_eff (the effective delayed neutron fraction), to Λ (the prompt neutron generation time), to their ratio (β_eff/Λ), and also, for the first time, to the reactivity by means of the in-hour equation. It is concluded that the experiments are acceptable benchmarks. (authors)
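
    For context, the in-hour equation mentioned above relates the asymptotic reactor period T to the reactivity ρ; in its standard six-group textbook form (not copied from the benchmark report) it reads

        \rho = \frac{\Lambda}{T} + \sum_{i=1}^{6} \frac{\beta_{i,\mathrm{eff}}}{1 + \lambda_i T}

    where λ_i and β_i,eff are the decay constants and effective fractions of the delayed neutron groups. Measuring the period T therefore yields the reactivity once β_eff and Λ are known, which is why benchmark-quality values of β_eff and Λ also pin down benchmark reactivities.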

  10. Parameter scans and accuracy estimates of the dynamical aperture of the CERN LHC

    CERN Document Server

    Giovannozzi, Massimo

    2006-01-01

    Techniques to make use of large distributed computing facilities allow for denser parameter scans of the dynamic aperture, i.e., the domain in phase space where bounded single-particle motion prevails. Moreover, one can also increase the number of 'seeds' each of which represents a possible realization of multipolar components around the machine. In this paper the dependence of the dynamic aperture on the step size of the grid of initial conditions and on the number of seeds is studied. Estimates on the accuracy of the dynamic aperture are derived and the definition of an improved protocol for numerical simulations is presented.

  11. Investigation of scanning parameters for thyroid fine needle aspiration cytology specimens: A pilot study

    Directory of Open Access Journals (Sweden)

    Maheswari S Mukherjee

    2015-01-01

    Background: Interest in developing more feasible and affordable applications of virtual microscopy in the field of cytology continues to grow. Aims: The aim of this study was to investigate the scanning parameters for thyroid fine needle aspiration (FNA) cytology specimens. Subjects and Methods: A total of twelve glass slides from thyroid FNA cytology specimens were digitized at ×40 with a 1 micron (μ) interval using seven focal plane (FP) levels (Group 1), five FP levels (Group 2), and three FP levels (Group 3) on an iScan Coreo Au scanner (Ventana, AZ, USA), producing 36 virtual images (VI). With an average washout period of 2 days, three participants diagnosed the preannotated cells of Groups 1, 2, and 3 using BioImagene's Image Viewer (version 3.1; Ventana, Inc., Tucson, AZ, USA), and the corresponding 12 glass slides (Group 4) using conventional light microscopy. Results: All three raters correctly identified and showed complete agreement on the glass and VI for 86% of the cases at FP level 3 and 83% of the cases at both FP levels 5 and 7. The intra-observer concordance between the glass slides and VI was highest (97%) for level 3 and glass, and the same (94%) for level 5 and glass and for level 7 and glass. The inter-rater reliability was found to be highest for the glass slides and three FP levels (77%), followed by five FP levels (69.5%) and seven FP levels (69.1%). Conclusions: This pilot study found that, among the three different FP levels, the VI digitized using three FP levels had slightly higher concordance, intra-observer concordance, and inter-rater reliability. Scanning additional levels above three FP levels did not improve concordance. We believe that there is no added benefit in acquiring five FP levels or more, especially when considering file size and storage costs. Hence, this study reports that three FP levels and a 1 μ interval could be the potential scanning parameters for thyroid FNA cytology specimens.

  12. Parameter Estimation of Actuators for Benchmark Active Control Technology (BACT) Wind Tunnel Model with Analysis of Wear and Aerodynamic Loading Effects

    Science.gov (United States)

    Waszak, Martin R.; Fung, Jimmy

    1998-01-01

    This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
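
    A minimal sketch of this kind of frequency-domain fit, assuming a second-order actuator model and synthetic frequency response data; the model structure, weights and numbers are illustrative, not the BACT values.

        import numpy as np
        from scipy.optimize import minimize

        w = np.logspace(0, 2, 40)                     # rad/s measurement grid

        def model(p, w):
            k, wn, zeta = p                           # gain, natural frequency, damping
            s = 1j * w
            return k * wn**2 / (s**2 + 2*zeta*wn*s + wn**2)

        rng = np.random.default_rng(0)
        meas = model([1.0, 30.0, 0.6], w) * (1 + 0.02*rng.standard_normal(w.size))

        weight = 1.0 / np.abs(meas)                   # weight toward relative error

        def cost(p):                                  # weighted frequency response error
            return np.sum((weight * np.abs(model(p, w) - meas))**2)

        fit = minimize(cost, x0=[0.5, 20.0, 0.3], method="BFGS")   # quasi-Newton
        print(fit.x)                                  # ≈ [1.0, 30.0, 0.6]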

  13. Whole-heart 3D late gadolinium-enhanced MR imaging. Investigation of optimal scan parameters and clinical usefulness

    International Nuclear Information System (INIS)

    Yorimitsu, Misako; Yokoyama, Kenichi; Nitatori, Toshiaki; Yoshino, Hideaki; Isono, Sachiko; Kuhara, Shigehide

    2012-01-01

    Whole-heart 3-dimensional (3D) late gadolinium-enhanced magnetic resonance (MR) imaging (WH-LGE) uses respiratory gating combined with acquisition of 3D data for the entire heart in a single scan, which permits reconstruction of any plane with high resolution. We investigated the optimal scan parameters and compared WH-LGE with the conventional scanning method. We employed inversion recovery 3D fast field echo using a 1.5-tesla system and the following scan parameters: repetition time (TR), 6.6 ms; echo time (TE), 2.5 ms; number of segments, 2; parallel imaging factor, 1.8; matrix size, 128 x 256; field of view (FOV), 320 x 320 mm; and acquisition slice thickness, 3 mm (reconstruction slice thickness, 1.5 mm). Five healthy volunteers underwent scanning during free breathing with real-time motion correction, from which we determined the optimal scan parameters. We then used those parameters to scan 25 patients with myocardial infarction to compare scan time and image quality between the WH-LGE and conventional 3D breath-holding methods (slice thickness, 10 mm; matrix size, 128 x 256). Results in volunteers showed optimal scan parameters of a 12° flip angle, fat suppression turned off, and interleaved ordering. In clinical cases, scan times did not differ significantly. The sharpness of the margins of normal myocardium at the apex of the heart and the contrast between enhanced and non-enhanced myocardium improved significantly with WH-LGE. WH-LGE yields high-resolution images during free breathing and is considered useful for accurately estimating the area and transmural extent of myocardial infarction. (author)

  14. Full parameter scan of the Zee model: exploring Higgs lepton flavor violation

    Energy Technology Data Exchange (ETDEWEB)

    Herrero-García, Juan [ARC Center of Excellence for Particle Physics at the Terascale, University of Adelaide,Adelaide, SA 5005 (Australia); Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden); Ohlsson, Tommy; Riad, Stella; Wirén, Jens [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden)

    2017-04-21

    We study the general Zee model, which includes an extra Higgs scalar doublet and a new singly-charged scalar singlet. Neutrino masses are generated at one-loop level, and in order to describe leptonic mixing, both the Standard Model and the extra Higgs scalar doublets need to couple to leptons (in a type-III two-Higgs doublet model), which necessarily generates large lepton flavor violating signals, also in Higgs decays. Imposing all relevant phenomenological constraints and performing a full numerical scan of the parameter space, we find that both normal and inverted neutrino mass orderings can be fitted, although the latter is disfavored with respect to the former. In fact, inverted ordering can only be accommodated if θ₂₃ turns out to be in the first octant. A branching ratio for h→τμ of up to 10⁻² is allowed, but it could be as low as 10⁻⁶. In addition, if future expected sensitivities of τ→μγ are achieved, normal ordering can be almost completely tested. Also, μe conversion is expected to probe large parts of the parameter space, excluding completely inverted ordering if no signal is observed. Furthermore, non-standard neutrino interactions are found to be smaller than 10⁻⁶, which is well below future experimental sensitivity. Finally, the results of our scan indicate that the masses of the additional scalars have to be below 2.5 TeV, and typically they are lower than that and therefore within the reach of the LHC and future colliders.

  15. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  16. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas allow one to determine orchard structures and geometric parameters of individual trees. In this research we propose an automatic method to determine geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For points located inside each polygon, we select the maximum height and the minimum height, and then we estimate the tree height and the crown base height. We use the first two components of a Principal Component Analysis (PCA) as the estimators for crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m⁻². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
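
    A condensed sketch of the per-tree geometry extraction described above, assuming height-normalized points for a single tree whose crown polygon has already been delineated (a convex hull stands in for the α-shape here, which would require a third-party package); the array layout is an assumption for the example.

        import numpy as np
        from scipy.spatial import ConvexHull

        def tree_parameters(pts):
            """pts: (N, 3) height-normalized ALS points of one detected tree."""
            crown = ConvexHull(pts[:, :2])   # stand-in for the α-shape crown polygon
            z = pts[:, 2]
            tree_height = z.max()            # highest return
            crown_base = z.min()             # lowest return inside the crown polygon

            xy = pts[:, :2] - pts[:, :2].mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(xy.T))   # PCA of horizontal coords
            proj = xy @ evecs                # eigh sorts eigenvalues ascending, so
            d_long = np.ptp(proj[:, 1])      # column 1 = 1st principal axis (longer)
            d_short = np.ptp(proj[:, 0])     # column 0 = 2nd principal axis (shorter)
            return tree_height, crown_base, d_long, d_short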

  17. Parametric modeling and optimization of laser scanning parameters during laser assisted machining of Inconel 718

    Science.gov (United States)

    Venkatesan, K.; Ramanujam, R.; Kuppan, P.

    2016-04-01

    This paper presents the parametric effects, microstructure, micro-hardness and optimization of laser scanning parameters (LSP) in heating experiments during laser assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. The parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA) and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, with quantitative measurement of the heat-affected depth analysed by the Vickers hardness test. The results indicate that laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression model is found to be in good agreement with experimental values, with R² values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.
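
    Generically, a second-order RSM regression model is a full quadratic response surface. A minimal least-squares fit of such a surface is sketched below with made-up data for two of the four factors (laser power and approach angle); the coefficients and noise level are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform([1.25, 60.0], [1.75, 90.0], size=(31, 2))  # power (kW), angle (deg)
        y = (700 + 400*X[:, 0] - 3*X[:, 1] + 50*X[:, 0]**2
             + 0.01*X[:, 1]**2 - 1.5*X[:, 0]*X[:, 1]
             + rng.normal(0, 5, 31))                               # synthetic temperature

        # Second-order RSM design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
        A = np.column_stack([np.ones(31), X[:, 0], X[:, 1],
                             X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        resid = y - A @ coef
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        print(coef.round(2), round(r2, 3))             # fitted terms and R^2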

  18. Validation study of SRAC2006 code system based on evaluated nuclear data libraries for TRIGA calculations by benchmarking integral parameters of TRX and BAPL lattices of thermal reactors

    International Nuclear Information System (INIS)

    Khan, M.J.H.; Sarker, M.M.; Islam, S.M.A.

    2013-01-01

    Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experimental as well as MCNP values. ► The study demonstrates a good agreement with the experimental and MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation of the SRAC2006 code system, based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3, for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. This is achieved through the analysis of the integral parameters of the TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating and testing the SRAC2006 code system as well as the nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at a room temperature of 20 °C, based on the above libraries. The calculated integral parameters are compared to the measured values as well as to MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases the values of the integral parameters demonstrate good agreement with the experimental and MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices in the fast and thermal energy ranges, respectively, are compared between the above libraries, and it was found that the group constants are nearly identical, with very insignificant differences. Therefore, this analysis reflects the validation of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0, and can also be essential for implementing further neutronics calculations

  19. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added … in order to obtain a unique selection …

  20. Josephson scanning tunneling microscopy -- a local and direct probe of the superconducting order parameter

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Hikari; Dynes, Robert; Barber Jr., Richard. P.; Ono, S.; Ando, Y.

    2009-09-01

    Direct measurements of the superconducting superfluid on the surface of vacuum-cleaved Bi2Sr2CaCu2O8+δ (BSCCO) samples are reported. These measurements are accomplished via Josephson tunneling into the sample using a novel scanning tunneling microscope (STM) equipped with a superconducting tip. The spatial resolution of the STM, at lateral distances less than the superconducting coherence length, allows it to reveal local inhomogeneities in the pair wavefunction of the BSCCO. Instrument performance is demonstrated first with Josephson measurements of Pb films, followed by the layered superconductor NbSe2. The relevant measurement parameter, the Josephson I_C R_N product, is discussed within the context of both BCS superconductors and the high transition temperature superconductors. The local relationship between the I_C R_N product and the quasiparticle density of states (DOS) gap is presented within the context of phase diagrams for BSCCO. Excessive current densities can be produced with these measurements and have been found to alter the local DOS in the BSCCO. Systematic studies of this effect were performed to determine the practical measurement limits for these experiments. Alternative methods for preparation of the BSCCO surface are also discussed.

  1. Improvement of the fringe analysis algorithm for wavelength scanning interferometry based on filter parameter optimization.

    Science.gov (United States)

    Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian

    2018-03-20

    The phase slope method, which estimates height through the fringe pattern frequency, and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), optimizing the parameters to acquire the best results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically. This determines the position of the filter pass-band. The width of the filter window is then optimized in simulation to balance the elimination of noise against the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential of improving immunity to environmental noise by adapting the filter to the signal: once the signal SNR can be estimated accurately, an adaptive filter can be designed to acquire the optimal results.
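
    A compact sketch of the pipeline whose filter the paper optimizes: Fourier-transform the scanned interference signal, keep only a window around the theoretically calculated central frequency, and recover the phase (and hence the phase slope) from the complex inverse transform. The Hann window, the half-width of 20 bins and all signal parameters are assumptions for illustration, not the optimized values from the paper.

        import numpy as np

        n = 1024
        k = np.arange(n)                              # frame index along the scan
        f0, phi = 0.07, 1.2                           # toy fringe frequency and phase
        rng = np.random.default_rng(2)
        signal = 1 + 0.8*np.cos(2*np.pi*f0*k + phi) + 0.05*rng.standard_normal(n)

        spec = np.fft.fft(signal)
        window = np.zeros(n)
        center, half = round(f0 * n), 20              # pass-band position from theory
        window[center-half:center+half+1] = np.hanning(2*half + 1)  # tapered pass-band

        analytic = np.fft.ifft(spec * window)         # complex: negative freqs removed
        phase = np.unwrap(np.angle(analytic))
        slope = np.polyfit(k[100:-100], phase[100:-100], 1)[0]      # avoid edge ringing
        print(slope / (2*np.pi))                      # ≈ f0; scales with surface height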

  2. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...

  3. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...

  4. Optimized treatment parameters to account for interfractional variability in scanned ion beam therapy of lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Brevet, Romain

    2015-02-04

    Scanned ion beam therapy of lung tumors is severely limited in its clinical applicability by intrafractional organ motion, interference effects between beam and tumor motion (interplay), as well as interfractional anatomic changes. To compensate for dose deterioration from intrafractional motion, motion mitigation techniques, such as gating, have been developed. The latter confines the irradiation to a predetermined breathing state, usually the stable end-exhale phase. However, optimization of the treatment parameters is needed to further improve target dose coverage and normal tissue sparing. The aim of the study presented in this dissertation was to determine treatment planning parameters that permit good target coverage and homogeneity to be recovered during a full course of lung tumor treatments. For 9 lung tumor patients from the MD Anderson Cancer Center (MDACC), a total of 70 weekly time-resolved computed tomography (4DCT) datasets were available, which depict the evolution of the patient anatomy over the several fractions of the treatment. Using the GSI in-house treatment planning system (TPS) TRiP4D, 4D simulations were performed on each weekly 4DCT for each patient using gating and optimization of a single treatment plan based on a planning CT acquired prior to treatment. It was found that using a large beam spot size, a short gating window (GW), additional margins and multiple fields permitted the best results to be obtained, yielding an average target coverage (V95) of 96.5%. Two motion mitigation techniques, one approximating the rescanning process (multiple irradiations of the target with a fraction of the planned dose) and one combining the latter with gating, were then compared to gating. Neither showed an improvement in target dose coverage or in normal tissue sparing. Finally, the total dose delivered to each patient in a simulation of a fractionated treatment was calculated and clinical requirements in terms of target coverage and normal tissue sparing were

  5. Key performance indicators score (KPIs-score) based on clinical and laboratorial parameters can establish benchmarks for internal quality control in an ART program.

    Science.gov (United States)

    Franco, José G; Petersen, Claudia G; Mauri, Ana L; Vagnini, Laura D; Renzi, Adriana; Petersen, Bruna; Mattila, M C; Comar, Vanessa A; Ricci, Juliana; Dieamant, Felipe; Oliveira, João Batista A; Baruffi, Ricardo L R

    2017-06-01

    KPIs have been employed for internal quality control (IQC) in ART. However, clinical KPIs (C-KPIs) such as age, AMH and number of oocytes collected are never combined with laboratory KPIs (L-KPIs), such as fertilization rate and morphological quality of the embryos, for analysis, even though the final endpoint is the evaluation of clinical pregnancy rates. This paper analyzed whether a KPIs-score strategy with clinical and laboratorial parameters could be used to establish benchmarks for IQC in ART cycles. In this prospective cohort study, 280 patients (36.4 ± 4.3 years) underwent ART. The total KPIs-score was obtained by the analysis of age, AMH (AMH Gen II ELISA/pre-mixing modified, Beckman Coulter Inc.), number of metaphase-II oocytes, fertilization rates and morphological quality of the embryonic lot. The total KPIs-score (C-KPIs + L-KPIs) was correlated with the presence or absence of clinical pregnancy. The relationship between the C-KPIs and L-KPIs scores was analyzed to establish quality standards and to increase the performance of clinical and laboratorial processes in ART. The logistic regression model (LRM), with respect to pregnancy and total KPIs-score (280 patients/102 clinical pregnancies), yielded an odds ratio of 1.24 (95% CI = 1.16-1.32). There was also a significant difference between the C-KPIs and L-KPIs scores, and an analysis was performed to assess quality standards. This total KPIs-score could establish benchmarks for clinical pregnancy. Moreover, IQC can use C-KPIs and L-KPIs scores to detect problems in the clinical-laboratorial interface.
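
    To make the score-to-odds-ratio step concrete, here is a hedged sketch of fitting a logistic regression of pregnancy outcome on a total score (requires scikit-learn); the data and the coefficient, chosen so that exp(coefficient) is roughly 1.24, are fabricated stand-ins for illustration only:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fabricated stand-in data: a total KPIs-score per patient and the pregnancy
    # outcome, generated so the true odds ratio per score point is about 1.24.
    rng = np.random.default_rng(0)
    score = rng.normal(20.0, 5.0, size=280)
    p = 1.0 / (1.0 + np.exp(-(0.215 * score - 5.0)))   # ln(1.24) ~= 0.215
    pregnant = rng.binomial(1, p)

    # Odds ratio per one-point score increase = exp(logistic coefficient);
    # a large C disables regularization so the estimate is nearly unpenalized.
    model = LogisticRegression(C=1e6).fit(score.reshape(-1, 1), pregnant)
    print(f"odds ratio per KPIs-score point: {np.exp(model.coef_[0, 0]):.2f}")
    ```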

  6. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Science.gov (United States)

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

    In the automotive industry as well as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press-hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process, offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is the huge footprint needed to realize the required stability and work envelope. Alternatively, a robot-based system might be of advantage if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM ("beam in motion") system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot-based laser processing machine which meets all OEM specifications for processing press-hardened steel parts. A benchmark of the BIM versus a gantry system was performed regarding all parameters required to fulfil OEM specifications for press-hardened steel parts. As a result, a highly productive, accurate and efficient system can be described, based on one or multiple robot modules working simultaneously together. The paper presents the improvements on the robot machine concept BIM introduced in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and the parameters for 3D cutting applications of the BIM system versus a gantry system using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at a small footprint is given.

  7. A benchmark study for different numerical parameters and their impact on the calculated strain levels for a model part door outer

    Science.gov (United States)

    Berger, E.; Brenne, T.; Heath, A.; Hochholdinger, B.; Kassem-Manthey, K.; Keßler, L.; Koch, N.; Kortmann, G.; Kröff, A.; Otto, T.; Steinbeck, G.; Till, E.; Verhoeven, H.; Vu, T.-C.; Wiegand, K.

    2005-08-01

    To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in 2000. The main objective of this group was the ongoing study and discussion of numerical and material effects in simulation jobs, and the development of possible solutions. The first topic selected was the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions. The part itself is a fictitious outer body panel into which the original door handle of the VW Golf A4 has been incorporated, a typical position of possible material necking or rupture in the press shop. All conditions for a successful simulation were taken care of in advance: material data, boundary conditions, friction, FLC and others were determined for the two materials under investigation, a mild steel and a dual-phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations with different programs as well as with different parameters showed that some parameters have a negligible impact on the result while others have a strong one, and thereby a different impact on a possible material failure prediction.

  8. Roughness parameters as the elements of surface condition and deformation assessment based on the results of TLS scanning

    Directory of Open Access Journals (Sweden)

    Kowalska Maria E.

    2017-03-01

    Roughness is an attribute of a surface that can be defined as a collection of small surface unevennesses, identified optically or detected mechanically, which do not result from the surface's shape; their size depends on the material type as well as on the processing undergone. The most often utilised roughness parameters are Ra, the mean distance of the points on the observed profile from the average line over the sampling length, and Rz, the difference between the arithmetic mean height of the five highest peaks and the arithmetic mean depth of the five deepest valleys with respect to the average line over the length of the measured fragment. In practice, roughness parameters are most often defined for surface elements that require relevant manufacturing or processing through grinding, founding or polishing in order to provide the expected surface roughness; profilometers are used to measure those parameters on the produced elements. In this paper the authors present an alternative approach to determining and utilising such parameters. Instead of methods based on sampling-length measurement, roughness parameters are determined on the basis of point clouds, representing a surface of rough concrete, obtained through terrestrial laser scanning. The authors suggest using the surface roughness parameter data acquired in this way as supplementary data in the condition assessment (erosion rate) of concrete surfaces of engineering structures.
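
    Both parameters can be computed directly from scan-derived deviations; below is a hedged sketch in which synthetic residuals stand in for the distances of TLS points from a fitted reference surface (the normal noise model and its magnitude are assumptions for illustration):

    ```python
    import numpy as np

    def roughness_ra_rz(heights):
        """Ra and Rz from surface heights (e.g., TLS point-cloud residuals
        after removing the fitted surface shape)."""
        dev = heights - heights.mean()            # deviations from the mean line
        ra = np.abs(dev).mean()                   # Ra: mean absolute deviation
        peaks = np.sort(dev)[-5:]                 # five highest peaks
        valleys = np.sort(dev)[:5]                # five deepest valleys
        rz = peaks.mean() - valleys.mean()        # Rz: mean peak-to-valley height
        return ra, rz

    # Hypothetical rough-concrete residuals (metres): distances of scanned
    # points from a best-fit plane.
    rng = np.random.default_rng(1)
    residuals = rng.normal(0.0, 0.0008, size=5000)
    ra, rz = roughness_ra_rz(residuals)
    print(f"Ra = {ra * 1000:.3f} mm, Rz = {rz * 1000:.3f} mm")
    ```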

  9. Effect of CT scanning parameters on volumetric measurements of pulmonary nodules by 3D active contour segmentation: a phantom study

    International Nuclear Information System (INIS)

    Way, Ted W; Chan, H-P; Goodsitt, Mitchell M; Sahiner, Berkman; Hadjiiski, Lubomir M; Zhou Chuan; Chughtai, Aamer

    2008-01-01

    The purpose of this study is to investigate the effects of CT scanning and reconstruction parameters on automated segmentation and volumetric measurements of nodules in CT images. Phantom nodules of known sizes were used so that segmentation accuracy could be quantified in comparison to ground-truth volumes. Spherical nodules having 4.8, 9.5 and 16 mm diameters and 50 and 100 mg/cc calcium contents were embedded in lung-tissue-simulating foam which was inserted in the thoracic cavity of a chest section phantom. CT scans of the phantom were acquired with a 16-slice scanner at various tube currents, pitches, fields of view and slice thicknesses. Scans were also taken using identical techniques either within the same day or five months apart for the study of reproducibility. The phantom nodules were segmented with a three-dimensional active contour (3DAC) model that we previously developed for use on patient nodules. The percentage volume errors relative to the ground-truth volumes were estimated under the various imaging conditions. There was no statistically significant difference in volume error for repeated CT scans or scans taken with techniques where only pitch, field of view, or tube current (mA) were changed. However, the slice thickness significantly (p < 0.05) affected the volume error. Therefore, to evaluate nodule growth, consistent imaging conditions and high resolution should be used for acquisition of the serial CT scans, especially for smaller nodules. Understanding the effects of scanning and reconstruction parameters on volume measurements by 3DAC allows better interpretation of data and assessment of growth. Tracking nodule growth with computerized segmentation methods would reduce inter- and intraobserver variabilities.
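
    The error metric used in such phantom studies is simple to reproduce; the sketch below computes percentage volume error against the analytic sphere volume, with made-up segmentation outputs standing in for the 3DAC results:

    ```python
    import numpy as np

    def percent_volume_error(measured_mm3, true_diameter_mm):
        # Ground truth: volume of a sphere, V = (pi/6) * d^3.
        true = (np.pi / 6.0) * true_diameter_mm ** 3
        return 100.0 * (measured_mm3 - true) / true

    # Hypothetical segmented volumes for the 4.8, 9.5 and 16 mm phantom nodules.
    for d, v in [(4.8, 62.0), (9.5, 430.0), (16.0, 2200.0)]:
        print(f"{d} mm nodule: {percent_volume_error(v, d):+.1f}% volume error")
    ```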

  10. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    Energy Technology Data Exchange (ETDEWEB)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdom and St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)

    2014-11-01

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  11. Optimization of imaging parameters for SPECT scans of [99mTc]TRODAT-1 using Taguchi analysis.

    Directory of Open Access Journals (Sweden)

    Cheng-Kai Huang

    Parkinson's disease (PD) is a neurodegenerative disease characterized by progressive loss of dopaminergic neurons in the basal ganglia. Single photon emission computed tomography (SPECT) scans using [99mTc]TRODAT-1 can image dopamine transporters and provide valuable diagnostic information on PD. In this study, we optimized the scanning parameters for [99mTc]TRODAT-1/SPECT using the Taguchi analysis to improve image quality. SPECT scans were performed on forty-five healthy volunteers according to an L9 orthogonal array. Three parameters were considered: the injection activity, the uptake duration, and the acquisition time per projection. The signal-to-noise ratio (SNR) was calculated from the striatum/occipital activity ratio as an image quality index. Ten healthy subjects and fifteen PD patients were used to verify the optimal parameters. The estimated optimal parameters were 962 MBq for the [99mTc]TRODAT-1 injection, 260 min for the uptake duration, and 60 s/projection for data acquisition. The uptake duration and time per projection were the two dominant factors, with F-values of 18.638 (38%) and 25.933 (53%), respectively. Strong cross interactions existed between injection activity/uptake duration and injection activity/time per projection. Therefore, under the consideration of as low as reasonably achievable (ALARA) radiation protection, the injection activity can be decreased to 740 MBq while the image quality remains almost the same for clinical applications.
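
    A hedged sketch of the Taguchi main-effects analysis described above: a standard L9(3^3) array, a larger-is-better S/N ratio, and per-factor level means. The image-quality values are invented, and with one observation per run the S/N ratio reduces to 20*log10(y):

    ```python
    import numpy as np

    # Taguchi L9 orthogonal array: 9 runs x 3 factors, each at 3 levels (indices).
    L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                   [1, 0, 1], [1, 1, 2], [1, 2, 0],
                   [2, 0, 2], [2, 1, 0], [2, 2, 1]])
    factors = ["injection activity", "uptake duration", "time per projection"]

    # Hypothetical image-quality index (striatum/occipital ratio) per run.
    y = np.array([2.1, 2.4, 2.6, 2.2, 2.7, 2.3, 2.5, 2.2, 2.8])

    # Larger-is-better S/N ratio: -10*log10(mean(1/y^2)); n = 1 observation here.
    snr = -10.0 * np.log10(1.0 / y ** 2)

    # Main effects: mean S/N at each level of each factor; the optimal setting
    # is the level with the highest mean, and the factor with the largest spread
    # dominates (cf. the F-values reported in the study).
    for j, name in enumerate(factors):
        means = np.array([snr[L9[:, j] == lvl].mean() for lvl in range(3)])
        print(f"{name}: level means {np.round(means, 2)}, best level {means.argmax()}")
    ```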

  12. Directional radiative transfer by SCOPE, SLC and DART using laser scan derived structural forest parameters

    Science.gov (United States)

    Timmermans, Joris; Gastellu-Etchegorry, Jean Philippe; van der Tol, Christiaan; Verhoef, Wout; Vekerdy, Zoltan; Su, Zhongbo

    2017-04-01

    Accurate estimation of the radiative transfer (RT) over vegetation is the cornerstone of agricultural and hydrological remote sensing applications. Present remote sensing sensors mostly use traditional optical, thermal and microwave observations; with these, however, the characterization of light-use efficiency and photosynthetic rate can only be accomplished indirectly. A promising new method of observing these processes is by using the fluorescent emitted radiation. This approach was recently highlighted by the selection of the FLEX sensor as a future Earth Explorer by the European Space Agency (ESA). Several modelling activities have been undertaken to better understand the technical feasibility of this sensor. Within these studies, the SCOPE model has been chosen as the baseline algorithm. This model combines a detailed RT description of the canopy, using a discrete version of the SAIL model, with a description of photosynthetic processes (by use of the Farquhar/Ball-Berry model). Consequently, this model is capable of simulating the biophysical processes and the fluorescent, optical and thermal RT simultaneously. The SAIL model, however, is a 1D RT model and consequently yields higher uncertainties for increasingly structured vegetation. The main objective of this research is to investigate the limitations of the RT component of the SCOPE model over complex canopies, in particular to evaluate its validity, for increasingly structurally complex canopies, in terms of their bidirectional reflectance distribution functions (BRDF). This was accomplished by evaluating the simulated outgoing radiation from SCOPE/SAIL against simulations of the DART 3D RT model. In total nine different scenarios were simulated with the DART RTM with increasing structural complexity, ranging from the simple 'Plot' scenario to the highly complex 'Multiple Crown' scenario. The canopy parameters are retrieved from a

  13. Stripping chronopotentiometry at scanned deposition potential (SSCP). Part 2. Determination of metal ion speciation parameters

    NARCIS (Netherlands)

    Leeuwen, van H.P.; Town, R.M.

    2003-01-01

    Stripping chronopotentiometry at scanned deposition potential (SSCP) generates curves that are fundamentally different in form from classical polarographic waves. Still, despite their steeper slope and non-linear log plot, the shift in the SSCP half-wave deposition potential can be interpreted in a

  14. Evaluation and benchmarking of an EC-QCL-based mid-infrared spectrometer for monitoring metabolic blood parameters in critical care units

    Science.gov (United States)

    Grafen, M.; Delbeck, S.; Busch, H.; Heise, H. M.; Ostendorf, A.

    2018-02-01

    Mid-infrared spectroscopy hyphenated with micro-dialysis is an excellent method for monitoring metabolic blood parameters as it enables the concurrent, reagent-free and precise measurement of multiple clinically relevant substances such as glucose, lactate and urea in micro-dialysates of blood or interstitial fluid. For a marketable implementation, quantum cascade lasers (QCLs) seem to represent a favourable technology due to their high degree of miniaturization and potentially low production costs. In this work, an external cavity (EC) QCL-based spectrometer and two Fourier-transform infrared (FTIR) spectrometers were benchmarked with regard to the precision, accuracy and long-term stability needed for the monitoring of critically ill patients. For the tests, ternary aqueous solutions of glucose, lactate and mannitol (the latter for dialysis recovery determination) were measured in custom-made flow-through transmission cells of different pathlengths and analyzed by partial least squares calibration models. It was revealed that the wavenumber tuning speed of the QCL had a severe impact on the EC-mirror trajectory, due to the matching of the digital-analog-converter step frequency with the mechanical resonance frequency of the mirror actuation. By selecting an appropriate tuning speed, the mirror oscillations acted as a hardware smoothing filter for the significant intensity variations caused by mode hopping. Besides the tuning speed, the effects of averaging over multiple spectra and of software smoothing parameters (Savitzky-Golay filters and FT smoothing) were investigated. The final settings led to a performance of the QCL system which was comparable with a research FTIR spectrometer and even surpassed the performance of a small FTIR mini-spectrometer.
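
    The Savitzky-Golay smoothing mentioned above is readily reproduced; in this hedged sketch (requires SciPy) the spectrum, band position and noise level are invented stand-ins, and the window length and polynomial order are exactly the software parameters such a benchmark would tune:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Hypothetical QCL absorbance spectrum: a glucose-like band plus residual
    # intensity noise from mode hopping (all values illustrative).
    rng = np.random.default_rng(2)
    wavenumber = np.linspace(950.0, 1250.0, 600)            # cm^-1
    band = np.exp(-0.5 * ((wavenumber - 1080.0) / 20.0) ** 2)
    noisy = band + 0.05 * rng.standard_normal(wavenumber.size)

    # Savitzky-Golay smoothing: a low-order polynomial fitted in a sliding
    # window; wider windows suppress more noise but distort the band shape.
    smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
    print("residual std after smoothing:", np.std(smoothed - band).round(4))
    ```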

  15. Breast dose reduction for chest CT by modifying the scanning parameters based on the pre-scan size-specific dose estimate (SSDE)

    Energy Technology Data Exchange (ETDEWEB)

    Kidoh, Masafumi; Utsunomiya, Daisuke; Oda, Seitaro; Nakaura, Takeshi; Yuki, Hideaki; Hirata, Kenichiro; Namimoto, Tomohiro; Sakabe, Daisuke; Hatemura, Masahiro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Faculty of Life Sciences, Honjo, Kumamoto (Japan); Funama, Yoshinori [Kumamoto University, Department of Medical Physics, Faculty of Life Sciences, Honjo, Kumamoto (Japan)

    2017-06-15

    To investigate the usefulness of modifying scanning parameters based on the size-specific dose estimate (SSDE) for breast dose reduction in chest CT. We scanned 26 women with a fixed volume CT dose index (CTDIvol) protocol (15 mGy) and another 26 with a fixed SSDE protocol (15 mGy) (protocols 1 and 2, respectively). In protocol 2, the tube current was calculated based on the patient habitus obtained from scout images. We compared the mean breast dose and the inter-patient breast dose variability, and performed linear regression analysis of the breast dose against the body mass index (BMI) for the two protocols. The mean breast dose was about 35% lower under protocol 2 than under protocol 1 (10.9 mGy vs. 16.8 mGy, p < 0.01). The inter-patient breast dose variability was significantly lower under protocol 2 than under protocol 1 (1.2 mGy vs. 2.5 mGy, p < 0.01). We observed a moderate negative correlation between the breast dose and the BMI under protocol 1 (r = -0.43, p < 0.01); there was no significant correlation (r = 0.06, p = 0.35) under protocol 2. The SSDE-based protocol achieved a reduction in breast dose and in inter-patient breast dose variability. (orig.)
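
    A hedged sketch of how a fixed-SSDE protocol can translate into a tube-current prescription. It uses the exponential size-conversion fit for the 32 cm body phantom published in AAPM Report 204 and assumes CTDIvol scales linearly with mA; the reference protocol values are illustrative, not the paper's:

    ```python
    import numpy as np

    def ssde_factor(eff_diameter_cm):
        # Exponential size-conversion fit for the 32 cm body phantom (AAPM 204).
        return 3.704369 * np.exp(-0.03671937 * eff_diameter_cm)

    def tube_current_for_target_ssde(target_ssde_mgy, eff_diameter_cm,
                                     ref_ctdivol_mgy=15.0, ref_ma=300.0):
        """Scale the tube current so that SSDE, not CTDIvol, is held at the
        target; assumes CTDIvol is proportional to mA, other settings fixed."""
        needed_ctdivol = target_ssde_mgy / ssde_factor(eff_diameter_cm)
        return ref_ma * needed_ctdivol / ref_ctdivol_mgy

    # Effective diameters as would be estimated from the scout image.
    for d in (22.0, 28.0, 34.0):
        print(f"Deff = {d:.0f} cm -> {tube_current_for_target_ssde(15.0, d):.0f} mA")
    ```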

  16. Choice of scans and optimization of instrument parameters in neutron diffraction

    International Nuclear Information System (INIS)

    Sequeira, A.

    1975-01-01

    With the neutron intensities available at medium-flux reactors, the study of crystal and molecular structures is currently restricted to molecules having fewer than about 50 atoms per asymmetric unit. This limit could perhaps be extended to structures having up to about 100 atoms in the asymmetric unit if all the experimental parameters associated with the neutron diffractometer could be ideally optimized. In view of the fact that many structures of current biological interest fall in this category, such as mono-, di-, and oligonucleotides as well as small peptides, it is important that all the instrument parameters are chosen so as to stretch the power of a given neutron source to its limit. Some ways of optimizing the various instrument parameters in order to obtain the maximum neutron intensity at a given resolution are discussed. The small effects of vertical divergences on the resolution are ignored.

  17. Scatter radiation breast exposure during head CT: impact of scanning conditions and anthropometric parameters on shielded and unshielded breast dose

    International Nuclear Information System (INIS)

    Klasic, B.; Knezevic, Z.; Vekic, B.; Brnic, Z.; Novacic, K.

    2006-01-01

    Constantly increasing clinical requests for CT scanning of the head at our facility continue to raise concern regarding the radiation exposure of patients, especially of radiosensitive tissues positioned close to the scanning plane. The aim of our prospective study was to estimate scatter radiation doses to the breast from routine head CT scans, both with and without the use of lead shielding, and to establish the influence of various technical and anthropometric factors on doses using statistical data analysis. In 85 patients referred for head CT for objective medical reasons, one breast was covered with a lead apron during CT scanning. Radiation doses were measured at the skin of both breasts and over the apron simultaneously, by the use of thermoluminescent dosimeters. The doses showed a mean reduction of 37% due to lead shielding. After statistical analysis of our data, we observed a significant correlation between the under-the-shield dose and the values of the technical parameters. We used a multiple linear regression model to describe the relationships of the doses to the unshielded and shielded breast, respectively, with anthropometric and technical factors. Our study proved lead shielding of the breast to be effective, easy to use and leading to a significant reduction in scatter dose. (author)

  18. METHODOLOGY FOR DETERMINING OPTIMAL EXPOSURE PARAMETERS OF A HYPERSPECTRAL SCANNING SENSOR

    Directory of Open Access Journals (Sweden)

    P. Walczykowski

    2016-06-01

    The purpose of the presented research was to establish a methodology that would allow the registration of hyperspectral images with a defined spatial resolution on a horizontal plane. The results obtained within this research could then be used to establish the optimum sensor and flight parameters for collecting aerial imagery data using a UAV or other aerial system. The methodology is based on user-selected optimal camera exposure parameters (i.e., exposure time and gain value) and flight parameters (i.e., altitude and velocity). A push-broom hyperspectral imager, the Headwall MicroHyperspec A-series VNIR, was used to conduct this research. The measurement station consisted of the following equipment: the MicroHyperspec A-series VNIR hyperspectral camera, a personal computer with HyperSpec III software, a slider system which guaranteed the stable motion of the sensor system, a white reference panel and a Siemens star, which was used to evaluate the spatial resolution. Hyperspectral images were recorded at different distances between the sensor and the target, from 5 m to 100 m. During the registration of each image, many exposure parameters were changed, such as the aperture value, the exposure time and the speed of the camera's movement on the slider. Based on all of the registered hyperspectral images, dependencies between chosen parameters were developed: the ground sampling distance (GSD) versus the distance between the sensor and the target; the speed of the camera versus the distance between the sensor and the target; the exposure time versus the gain value; and the density number versus the gain value. The developed methodology allowed us to determine the speed and the altitude of an unmanned aerial vehicle on which the sensor would be mounted, ensuring that the registered hyperspectral images have the required spatial resolution.
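
    The GSD-versus-distance and speed-versus-distance dependencies have a simple geometric core, sketched below for a nadir-looking push-broom sensor; the pixel pitch, focal length and line period are assumed illustrative values, not the MicroHyperspec's actual specifications:

    ```python
    def gsd_cross_track(pixel_pitch_m, focal_length_m, altitude_m):
        # Cross-track ground sampling distance of a nadir-looking push-broom line.
        return pixel_pitch_m * altitude_m / focal_length_m

    def max_platform_speed(gsd_m, line_period_s):
        # The platform must not advance more than one GSD per line readout,
        # otherwise along-track sampling is coarser than the required GSD.
        return gsd_m / line_period_s

    # Assumed sensor values for a MicroHyperspec-class imager (not vendor specs).
    pixel_pitch, focal_length, line_period = 7.4e-6, 12e-3, 0.01
    for altitude in (50.0, 100.0, 200.0):
        gsd = gsd_cross_track(pixel_pitch, focal_length, altitude)
        v = max_platform_speed(gsd, line_period)
        print(f"h = {altitude:5.0f} m: GSD = {100 * gsd:.1f} cm, max speed = {v:.1f} m/s")
    ```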

  19. Best ellipse and cylinder parameters estimation from laser profile scan sections

    Science.gov (United States)

    Rahayem, Mohamed; Werghi, Naoufel; Kjellander, Johan

    2012-09-01

    Industrial applications like robot-aided welding, automated inspection, and 3D measurements require 3D points to be captured from the surfaces of objects and processed to calculate the information-of-interest. The lack of research focused on fitting ellipses to 3D laser profile data, and the intrinsic features that distinguish it from 2D digital images, motivated us to conduct a comparative study involving the most popular ellipse-fitting methods. After describing our laser profile scanning system, and a survey of ellipse-fitting methods, we compare, using extensive experiments performed with synthetic and real data, the fitting algorithms in terms of stability and accuracy with respect to a variety of factors. The estimate obtained with the best method is used to initialize a robust non-linear iterative ellipse fitting method. Finally, we describe a novel method for the construction of cylindrical surfaces from estimated elliptical sections.
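
    One of the popular ellipse-fitting methods such a comparative study would cover is the direct least-squares (Fitzgibbon-style) algebraic fit, sketched below; the partial-arc demo data and noise level are invented, and the implementation is a minimal illustration rather than the authors' code:

    ```python
    import numpy as np

    def fit_ellipse_direct(x, y):
        """Direct least-squares ellipse fit (Fitzgibbon-style): minimizes the
        algebraic error subject to the ellipse constraint 4ac - b^2 = 1."""
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        S = D.T @ D                                  # scatter matrix
        C = np.zeros((6, 6))                         # constraint matrix
        C[0, 2] = C[2, 0] = 2.0
        C[1, 1] = -1.0
        eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
        k = int(np.argmax(eigval.real))              # the unique positive eigenvalue
        return eigvec[:, k].real                     # conic coefficients a..f

    # Noisy points on a partial elliptical arc, as a laser profile section yields.
    rng = np.random.default_rng(7)
    t = np.linspace(0.2, 2.4, 80)
    x = 4.0 + 3.0 * np.cos(t) + 0.01 * rng.standard_normal(t.size)
    y = 1.0 + 1.5 * np.sin(t) + 0.01 * rng.standard_normal(t.size)
    a = fit_ellipse_direct(x, y)
    print("conic coefficients (normalized):", np.round(a / a[0], 4))
    ```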

  1. Removal of evidential motion-contaminated and poorly fitted image data improves IVIM diffusion MRI parameter scan-rescan reproducibility.

    Science.gov (United States)

    Chevallier, Olivier; Zhou, Nan; He, Jian; Loffroy, Romaric; Wáng, Yì Xiáng J

    2018-01-01

    Background: It has been reported that intravoxel incoherent motion (IVIM) diffusion magnetic resonance imaging (MRI) scan-rescan reproducibility is unsatisfactory. Purpose: To study IVIM MRI parameter reproducibility for liver parenchyma after the removal of motion-contaminated and/or poorly fitted image data. Material and Methods: Eighteen healthy volunteers had liver scans twice in the same session to assess scan-rescan repeatability, and again in another session after an average interval of 13 days to assess reproducibility. Diffusion-weighted images were acquired with a 3-T scanner using a respiratory-triggered echo-planar sequence and 16 b-values (0-800 s/mm²). Measurement was performed on the right liver with segment-unconstrained least-squares fitting. Image series with evident anatomical mismatch, apparent artifacts, or a poorly fitted signal intensity vs. b-value curve were excluded. A minimum of three slices was deemed necessary for IVIM parameter estimation. Results: Of a total of 54 examinations, six did not satisfy the inclusion criteria, giving a success rate of 89%, and 14 volunteers were finally included in the repeatability/reproducibility study. A total of 3-10 slices per examination (mean = 5.3 slices, median = 5 slices) were utilized for analysis. Using a threshold b-value of 80 s/mm², the coefficient of variation and within-subject coefficient of variation for repeatability were 2.86% and 3.36% for D_slow, 3.81% and 4.24% for the perfusion fraction (PF), and 18.16% and 24.88% for D_fast; those for reproducibility were 2.48% and 3.24% for D_slow, 4.91% and 5.38% for PF, and 21.18% and 30.89% for D_fast. Conclusion: Removal of motion-contaminated and/or poorly fitted image data improves IVIM parameter reproducibility.
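
    A hedged sketch of the bi-exponential IVIM fit with segmented initialization (requires SciPy); the threshold b-value of 80 s/mm² matches the abstract, while the b-value list, tissue parameters and noise level are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ivim(b, s0, pf, d_fast, d_slow):
        # Bi-exponential IVIM model: perfusion (fast) and diffusion (slow) pools.
        return s0 * (pf * np.exp(-b * d_fast) + (1.0 - pf) * np.exp(-b * d_slow))

    b = np.array([0, 10, 20, 40, 60, 80, 100, 150, 200, 300, 400, 500, 600, 700, 800.0])
    rng = np.random.default_rng(5)
    signal = ivim(b, 1.0, 0.15, 0.05, 0.0011) * (1 + 0.01 * rng.standard_normal(b.size))

    # Segmented initialization: for b >= 80 s/mm^2 the fast pool has largely
    # decayed, so a log-linear fit there gives a first estimate of D_slow.
    hi = b >= 80
    slope, _ = np.polyfit(b[hi], np.log(signal[hi]), 1)
    p0 = [signal[0], 0.1, 0.02, -slope]
    popt, _ = curve_fit(ivim, b, signal, p0=p0,
                        bounds=([0, 0, 0, 0], [2, 1, 1, 0.01]))
    print("fitted [S0, PF, D_fast, D_slow]:", np.round(popt, 5))
    ```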

  2. Reduction of MRI acoustic noise achieved by manipulation of scan parameters – A study using veterinary MR sequences

    International Nuclear Information System (INIS)

    Baker, Martin A.

    2013-01-01

    Sound pressure levels were measured within an MR scan room for a range of sequences employed in veterinary brain scanning, using a test phantom in an extremity coil. Variation of TR and TE, and use of a quieter gradient mode (‘whisper’ mode) were evaluated to determine their effect on sound pressure levels (SPLs). Use of a human head coil and a human brain sequence was also evaluated. Significant differences in SPL were achieved for T2, T1, T2* gradient echo and VIBE sequences by varying TR or TE, or by selecting the ‘whisper’ gradient mode. An appreciable reduction was achieved for the FLAIR sequence. Noise levels were not affected when a head coil was used in place of an extremity coil. Due to sequence parameters employed, veterinary patients and anaesthetists may be exposed to higher sound levels than those experienced in human MR examinations. The techniques described are particularly valuable in small animal MR scanning where ear protection is not routinely provided for the patient.

  3. Optimization of Parameters in 16-slice CT-scan Protocols for Reduction of the Absorbed Dose

    Directory of Open Access Journals (Sweden)

    Shahrokh Naseri

    2014-08-01

    Introduction: In computed tomography (CT) technology, an optimal radiation dose for diagnostic images can be achieved by changing radiation parameters such as mA, pitch factor, rotation time and tube voltage (kVp). Materials and Methods: In this study, brain, abdomen and thorax scanning was performed using a Toshiba 16-slice scanner and standard AAPM and CTDI phantoms. The AAPM phantom was used for the measurement of image-related parameters and the CTDI phantom was utilized for the calculation of the absorbed dose to patients. Imaging parameters including mA (50-400 mA), pitch factor (1 and 1.5) and rotation time (0.5, 0.75, 1, 1.5 and 2 seconds) were considered as independent variables. The brain, abdomen and chest imaging was performed in multi-slice and spiral modes. Changes in image quality parameters, including contrast resolution (CR) and spatial resolution (SR), were measured in each condition and determined with MATLAB software. Results: After normalizing the data by plotting the full width at half maximum (FWHM) of the point spread function (PSF) in each condition, it was observed that image quality was not noticeably affected in any case. Therefore, in the brain scan, the lowest patient dose was obtained at 150 mA and a rotation time of 1.5 seconds. Based on the results of scanning the abdomen and chest, the lowest patient dose was obtained at 100 mA and pitch factors of 1 and 1.5. Conclusion: It was found that images with acceptable quality and reliable detection ability could be obtained using smaller doses of radiation, compared to the protocols commonly used by operators.
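
    Since the study varies mA, pitch and rotation time at a fixed tube voltage, the dose ranking of the candidate protocols follows from the effective mAs; a minimal sketch, with reference values chosen for illustration only:

    ```python
    def relative_ctdivol(ma, rotation_time_s, pitch, ref=(150.0, 1.5, 1.0)):
        """Helical CT dose ranking: CTDIvol is proportional to the effective
        mAs, i.e. mA x rotation time / pitch, with tube voltage held fixed."""
        ref_ma, ref_time, ref_pitch = ref
        return (ma * rotation_time_s / pitch) / (ref_ma * ref_time / ref_pitch)

    # The brain setting found above (150 mA, 1.5 s) as reference, compared with
    # an abdomen setting (100 mA, 1 s, pitch 1.5); values are illustrative.
    print(f"abdomen/brain dose ratio: {relative_ctdivol(100, 1.0, 1.5):.2f}")
    ```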

  4. Investigating the minimum scan parameters required to generate free-breathing motion artefact-free fast-helical CT.

    Science.gov (United States)

    Thomas, David H; Tan, Jun; Neylon, Jack; Dou, Tai; O'Connell, Dylan; McNitt-Gray, Michael; Lee, Percy; Lamb, James; Low, Daniel A

    2018-02-01

    A recently proposed "5DCT" protocol uses deformable registration of free-breathing fast-helical CT scans to generate a breathing motion model. In order to allow accurate registration, the free-breathing images are required to be free of doubling artefacts, which arise when tissue motion is greater than the scan speed. Using a unique set of digital phantoms based on patient data and verified with a motion phantom, this work identifies the minimum scanner parameters required to generate free-breathing artefact-free fast-helical scans. A motion phantom and 5 patients were imaged 25 times under free-breathing conditions in alternating directions with a 64-slice CT scanner employing a low-dose fast-helical protocol. A series of high temporal resolution (0.1 s) 5DCT data sets was generated in each case. A simulated CT scanner was used to "image" each free-breathing data set. Various CT scanner detector widths and rotation times were simulated and verified using the motion phantom results. Motion-induced artefacts were quantified in the patient images using structural similarity maps to determine the similarity between axial slices. Increasing amounts of motion-induced artefacts were observed with increasing rotation times >0.2 s for a 16 mm detector configuration. The current generation of 16-slice CT scanners, which are present in the majority of Radiation Oncology departments, is not capable of generating the free-breathing, sorting-artefact-free images required for 5DCT. Advances in knowledge: the minimum detector width and rotation time required to generate motion artefact-free free-breathing fast-helical CT images are established.
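
    The doubling-artefact condition reduces to comparing the helical scan speed with the tissue velocity; a hedged back-of-the-envelope sketch, where the peak breathing velocity of 40 mm/s and the parameter combinations are assumptions for illustration, not the study's criteria:

    ```python
    def helical_scan_speed(beam_width_mm, pitch, rotation_time_s):
        # Table travel per rotation = pitch x total beam (detector) width;
        # scan speed is that travel divided by the rotation time.
        return beam_width_mm * pitch / rotation_time_s

    # Doubling artefacts are expected once tissue moves faster than the scan
    # advances; 40 mm/s is an assumed peak breathing velocity for illustration.
    peak_tissue_speed = 40.0
    for width, pitch, rot in [(16.0, 1.0, 0.2), (16.0, 1.0, 0.5), (40.0, 1.0, 0.5)]:
        v = helical_scan_speed(width, pitch, rot)
        verdict = "OK" if v > peak_tissue_speed else "artefact risk"
        print(f"{width:.0f} mm, pitch {pitch}, {rot} s rotation: {v:.0f} mm/s -> {verdict}")
    ```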

  5. Effect of cataract surgery on retinal nerve fiber layer thickness parameters using scanning laser polarimetry (GDxVCC)

    Science.gov (United States)

    Dada, Tanuj; Behera, Geeta; Agarwal, Anand; Kumar, Sanjeev; Sihota, Ramanjit; Panda, Anita

    2010-01-01

    Purpose: To study the effect of cataract extraction on retinal nerve fiber layer (RNFL) thickness as assessed by scanning laser polarimetry (SLP) with variable corneal compensation (GDx VCC), at the glaucoma service of a tertiary care center in North India. Materials and Methods: Thirty-two eyes of 32 subjects were enrolled in the study. The subjects underwent RNFL analysis by SLP (GDx VCC) before phacoemulsification cataract extraction with intraocular lens (IOL) implantation (Acrysof SA 60 AT) and again four weeks following cataract surgery. The RNFL thickness parameters evaluated both before and after surgery included the temporal-superior-nasal-inferior-temporal (TSNIT) average, superior average, inferior average, and nerve fiber index (NFI). Results: The mean age of subjects was 57.6 ± 11.7 years (18 males, 14 females). Mean TSNIT average thickness (μm) pre- and post-cataract surgery was 49.2 ± 14.1 and 56.5 ± 7.6 (P = 0.001). There was a statistically significant increase in the RNFL thickness parameters (TSNIT average, superior average, and inferior average) and a decrease in NFI post-cataract surgery as compared to the baseline values. Mean NFI pre- and post-cataract surgery was 41.3 ± 15.3 and 21.6 ± 11.8 (P = 0.001). Conclusions: Measurement of RNFL thickness parameters by scanning laser polarimetry is significantly altered following cataract surgery. After cataract surgery, a new baseline needs to be established for the longitudinal follow-up of a glaucoma patient. The presence of cataract may lead to an underestimation of the RNFL thickness, and this should be taken into account when analyzing progression in a glaucoma patient. PMID:20689193

  6. General squark flavour mixing: constraints, phenomenology and benchmarks

    CERN Document Server

    De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja

    2015-11-19

    We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
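
    The Markov Chain Monte Carlo scanning technique mentioned above has a simple core; here is a toy Metropolis-Hastings sketch over two hypothetical flavour-violating parameters, with a fabricated Gaussian likelihood standing in for the actual experimental and theoretical constraints of the MSSM analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def log_likelihood(theta):
        # Toy stand-in for the combined constraints: a Gaussian over two
        # hypothetical flavour-violating deltas (centres/widths invented).
        d23, d13 = theta
        return -0.5 * (((d23 - 0.03) / 0.10) ** 2 + (d13 / 0.05) ** 2)

    # Metropolis-Hastings random walk over the two parameters.
    theta = np.zeros(2)
    chain = []
    for _ in range(20000):
        proposal = theta + rng.normal(0.0, 0.02, size=2)
        if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(theta):
            theta = proposal
        chain.append(theta)
    chain = np.array(chain[5000:])                 # discard burn-in
    print("favoured region:", chain.mean(axis=0).round(3), chain.std(axis=0).round(3))
    ```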

  7. Evaluation of retinal nerve fiber layer thickness parameters in myopic population using scanning laser polarimetry (GDxVCC).

    Science.gov (United States)

    Dada, Tanuj; Aggarwal, A; Bali, S J; Sharma, A; Shah, B M; Angmo, D; Panda, A

    2013-01-01

    Myopia presents a significant challenge to the ophthalmologist, as myopic discs are often large and tilted, with deep cups and a thinner neuroretinal rim, all of which may mimic glaucomatous optic nerve head changes and cause diagnostic error. To evaluate the retinal nerve fiber layer (RNFL) thickness in low, moderate and high myopia using scanning laser polarimetry with variable corneal compensation (GDxVCC), one hundred eyes of 100 emmetropes, 30 eyes with low myopia (0 to -4 D spherical equivalent (SE)), 45 eyes with moderate myopia (-4 to -8 D SE), and 30 eyes with high myopia (-8 to -15 D SE) were subjected to RNFL assessment using scanning laser polarimetry (GDxVCC) with the standard protocol. Subjects with IOP > 21 mm Hg or with optic nerve head or visual field changes suggestive of glaucoma were excluded from the study. The major outcome parameters were the temporal-superior-nasal-inferior-temporal (TSNIT) average, the superior and inferior averages, and the nerve fiber indicator (NFI). The TSNIT average (p = 0.009), superior average (p = 0.001) and inferior average (p = 0.008) were significantly lower, and the NFI higher (p < 0.001), in moderate myopes as compared to emmetropes. In high myopia the RNFL showed supranormal values; the TSNIT average and the superior and inferior averages were significantly higher (p < 0.001) as compared to emmetropes. The RNFL measurements on scanning laser polarimetry are affected by myopic refractive error. Moderate myopes show a significant thinning of the RNFL. In high myopia, due to peripapillary chorioretinal atrophy and the contribution of scleral birefringence, the RNFL values are abnormally high. These findings need to be taken into account while assessing and monitoring glaucoma damage in moderate to high myopes on GDxVCC.

  8. Quantifying non-contact tip-sample thermal exchange parameters for accurate scanning thermal microscopy with heated microprobes

    Science.gov (United States)

    Wilson, Adam A.; Borca-Tasciuc, Theodorian

    2017-07-01

    Simplified heat-transfer models are widely employed by heated-probe scanning thermal microscopy techniques for determining the thermal conductivity of test samples. The thermal exchange parameters in these models have generally been assumed to be independent of sample properties; however, there has been little investigation of this assumption in non-contact mode, and the impact calibration procedures have on sample thermal conductivity results has not been explored. This article establishes conditions under which quantitative, localized, non-contact measurements using scanning thermal microscopy with heated microprobes may be most accurately performed. The work employs a three-dimensional finite element (3DFE) model, validated against experimental results with no fitting parameters, to determine the dependence of the heated microprobe's thermal resistance on sample thermal conductivity at several values of probe-to-sample clearance. The two unknown thermal exchange parameters were determined by fitting the 3DFE-simulated probe thermal resistance to the predictions of a simplified probe heat-transfer model for two samples with different thermal conductivities. This calibration procedure, known experimentally as the intersection method, was simulated for sample thermal conductivities in the range of 0.1-50 W m⁻¹ K⁻¹ and clearance values in the 260-1010 nm range. For a typical Wollaston wire microprobe geometry as simulated here, both the thermal exchange radius and the thermal contact resistance were found to increase with the sample thermal conductivity in the low thermal conductivity range, while they remained approximately constant for thermal conductivities >1 W m⁻¹ K⁻¹, with similar trends reported for all clearance values investigated.

  9. EXTRACTING ROOF PARAMETERS AND HEAT BRIDGES OVER THE CITY OF OLDENBURG FROM HYPERSPECTRAL, THERMAL, AND AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    L. Bannehr

    2012-09-01

    Remote sensing methods are used to obtain different kinds of information about the state of the environment. Within the cooperative research project HiReSens, funded by the German BMBF, a hyperspectral scanner, an airborne laser scanner, a thermal camera, and an RGB camera are employed on a small aircraft to determine roof material parameters and heat bridges of rooftops over the city of Oldenburg, Lower Saxony. HiReSens aims to combine various highly resolved geometrical data in order to obtain relevant evidence about the state of the city's buildings. Thermal data are used to obtain the energy distribution of single buildings. Hyperspectral data yield information about the material composition of roofs. Digital surface models are derived from airborne laser scanning (ALS) data; these form the basis for locating the best orientations for solar panels on the city's buildings. The combination of the different data sets offers the opportunity to exploit synergies between the differently working systems. Central goals are the development of tools for the detection of heat bridges by means of thermal data, the spectral characterization of roof parameters on the basis of hyperspectral data, and the 3D capture of buildings from airborne laser scanner data. Collecting, analyzing and merging the data are not trivial, especially when the targeted resolution and accuracy are in the domain of a few decimetres. The results achieved need to be regarded as preliminary. Further investigations are still required to prove the accuracy in detail.

  10. Influence of Confocal Scanning Laser Microscopy specific acquisition parameters on the detection and matching of Speeded-Up Robust Features

    International Nuclear Information System (INIS)

    Stanciu, Stefan G.; Hristu, Radu; Stanciu, George A.

    2011-01-01

    The robustness and distinctiveness of local features under various object or scene deformations and under modifications of the acquisition parameters play key roles in the design of many computer vision applications. In this paper we present the results of our experiments on the behavior of a recently developed technique for local feature detection and description, Speeded-Up Robust Features (SURF), regarding image modifications specific to Confocal Scanning Laser Microscopy (CSLM). We analyze the repeatability of detected SURF keypoints and the precision-recall of their matching under modifications of three important CSLM parameters: pinhole aperture, photomultiplier (PMT) gain and laser beam power. During any investigation by CSLM these three parameters have to be modified, individually or together, in order to optimize the contrast and the signal-to-noise ratio (SNR); they are also inherently modified when changing the microscope objective. Our experiments show that an important number of SURF features can be detected at the same physical locations in images collected at different values of the pinhole aperture, PMT gain and laser beam power, and can subsequently be successfully matched based on their descriptors. In the final part, we exemplify the potential of SURF in CSLM imaging by presenting a SURF-based computer vision application that deals with the mosaicing of images collected by this technique. Research highlights: influence of pinhole aperture, photomultiplier gain and laser beam power modifications on SURF detection and matching in CSLM images; SURF-based automated mosaicing of CSLM images.
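
    A hedged sketch of the repeatability and matching measurements on a pair of registered acquisitions (requires OpenCV). SURF itself ships in opencv-contrib (cv2.xfeatures2d), so core OpenCV's ORB is used here as a stand-in detector/descriptor; the synthetic images, the gain emulation and the 2-pixel tolerance are all assumptions:

    ```python
    import cv2
    import numpy as np

    # Synthetic stand-in for two CSLM acquisitions of the same field at two
    # PMT gains (a real experiment would load registered microscopy frames).
    rng = np.random.default_rng(0)
    base = cv2.GaussianBlur((rng.random((512, 512)) * 255).astype(np.uint8), (0, 0), 1.5)
    img_low = base
    img_high = cv2.convertScaleAbs(base, alpha=1.4, beta=10)   # emulated gain change

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_low, None)
    kp2, des2 = orb.detectAndCompute(img_high, None)

    # Repeatability: fraction of keypoints re-detected within 2 px (the images
    # are registered, so no geometric transform needs to be accounted for).
    pts1 = np.array([k.pt for k in kp1])
    pts2 = np.array([k.pt for k in kp2])
    dists = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=2)
    repeatability = float(np.mean(dists.min(axis=1) < 2.0))

    # Descriptor matching with cross-check, the basis of precision-recall curves.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    print(f"repeatability: {repeatability:.2f}, cross-checked matches: {len(matches)}")
    ```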

  11. Quantitative assessment of image artifacts from root filling materials on CBCT scans made using several exposure parameters

    Energy Technology Data Exchange (ETDEWEB)

    Rabelo, Katharina Alves; Cavalcanti, Yuri Wanderley; De Oliveira Pinto, Martina Gerlane; De Melo, Daniela Pita [Dept. of Oral Diagnosis, State University of Paraiba, Campina Grande (Brazil); Melo, Saulo Leonardo Sousa [Dept. of Oral Pathology, Radiology and Medicine, University of Iowa, Iowa City (United States); Campos, Paulo Sergio Flores; De Andrade Freitas Oliveira, Luciana Soares [Federal University of Bahia, Salvador (Brazil)

    2017-09-15

    To quantify artifacts from different root filling materials in cone-beam computed tomography (CBCT) images acquired using different exposure parameters. Fifteen single-rooted teeth were scanned using 8 different exposure protocols with 3 different filling materials, and once without filling material as a control group. Artifact quantification was performed by a trained observer who made measurements in the central axial slice of all acquired images in a fixed region of interest using ImageJ. Hyperdense artifacts, hypodense artifacts, and the remaining tooth area were identified, and the percentages of hyperdense and hypodense artifacts, remaining tooth area, and tooth area affected by the artifacts were calculated. Artifacts were analyzed qualitatively by 2 observers using the following scores: absence (0), moderate presence (1), and high presence (2) for hypodense halos, hypodense lines, and hyperdense lines. Two-way ANOVA and the post-hoc Tukey test were used for the quantitative and qualitative artifact analyses. The Dunnett test was also used for the qualitative analysis. The significance level was set at P < .05. There were no significant interactions among the exposure parameters in the quantitative or qualitative analysis. Significant differences were observed among the studied filling materials in all quantitative analyses. In the qualitative analyses, all materials differed from the control group in terms of hypodense and hyperdense lines (P < .05). Fiberglass posts did not differ statistically from the control group in terms of hypodense halos (P > .05). Different exposure parameters did not affect the objective or subjective observations of artifacts in CBCT images; however, the filling materials used in endodontic restorations did affect both types of assessments.
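
    A hedged sketch of ROI-based artifact quantification on an axial slice; the +/- k-sigma thresholding against control-scan statistics is an assumed stand-in for the observer's ImageJ procedure, not the paper's exact rule, and the synthetic data are illustrative:

    ```python
    import numpy as np

    def artifact_percentages(axial_slice, roi_mask, ctrl_mean, ctrl_std, k=3.0):
        """Percent hyperdense / hypodense artifact pixels inside a fixed ROI,
        thresholded against artifact-free control statistics (assumed rule)."""
        roi = axial_slice[roi_mask]
        hyper = 100.0 * np.mean(roi > ctrl_mean + k * ctrl_std)
        hypo = 100.0 * np.mean(roi < ctrl_mean - k * ctrl_std)
        return hyper, hypo

    # Synthetic slice: background noise plus a streak of hyperdense artifact.
    rng = np.random.default_rng(3)
    slice_hu = rng.normal(0.0, 50.0, size=(200, 200))
    slice_hu[90:110, :] += 400.0
    roi = np.ones_like(slice_hu, dtype=bool)
    print(artifact_percentages(slice_hu, roi, ctrl_mean=0.0, ctrl_std=50.0))
    ```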

  12. Effect of Scanning and Reconstruction Parameters on Three Dimensional Volume and CT Value Measurement of Pulmonary Nodules: A Phantom Study

    Directory of Open Access Journals (Sweden)

    Datong SU

    2017-08-01

    Background and objective: The computed tomography (CT) follow-up of indeterminate pulmonary nodules, aiming to evaluate changes in volume and CT value, is a common strategy in the clinic. The CT dose needs to be considered on serial CT scans, in addition to the measurement accuracy. The purpose of this study is to quantify the precision of pulmonary nodule volumetric and CT value measurements with various tube currents and reconstruction algorithms in a phantom study with dual-energy CT. Methods: A chest phantom containing 9 artificial spherical solid nodules with known diameters (D = 2.5 mm, 5 mm, 10 mm) and densities (-100 HU, 60 HU and 100 HU) was scanned using a 64-row detector CT scanner at 120 kV and various tube currents (10 mA, 20 mA, 50 mA, 80 mA, 100 mA, 150 mA and 350 mA). Raw data were reconstructed with filtered back projection (FBP) and three levels of the adaptive statistical iterative reconstruction algorithm (ASIR; 30%, 50% and 80%). Automatic volumetric measurements were performed using commercially available software. The relative volume error (RVE) and the absolute attenuation error (AAE) between the software measurements and the reference standard were calculated. Analyses of variance were performed to evaluate the effect of reconstruction method, scan parameters, nodule size and attenuation on the RVE. Results: The software substantially overestimated the volume of the very small (D = 2.5 mm) nodules [mean RVE: 100.8% ± 28%] and underestimated their attenuation [mean AAE: -756 ± 80 HU]. The mean RVEs of nodules with diameters of 5 mm and 10 mm were small [-0.9% ± 1.1% vs. 0.9% ± 1.4%]; however, the mean AAEs [-243 ± 26 HU vs. -129 ± 7 HU] were large. The ANOVA analysis for repeated measurements showed that tube current and reconstruction algorithm had no significant effect on the volumetric measurements for nodules with diameters of 5 mm and 10 mm (F = 5.60, P = 0.10 vs. F = 11.13, P = 0.08), but significant effects on the measurement of CT

  13. Instrumental fundamental parameters and selected applications of the microfocus X-ray fluorescence analysis at a scanning electron microscope

    International Nuclear Information System (INIS)

    Rackwitz, Vanessa

    2012-01-01

    For a decade, X-ray sources for microfocus X-ray fluorescence analysis (μ-XRF) have been commercially available, offering the possibility of extending the analytics at a scanning electron microscope (SEM) with an attached energy-dispersive X-ray spectrometer (EDS). Using μ-XRF it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free XRF quantification relies on the Sherman equation, which relates the detected X-ray intensity of a fluorescence peak to the content of the element in the sample by means of fundamental parameters. The instrumental fundamental parameters of μ-XRF at a SEM/EDS system are the excitation spectrum (consisting of the X-ray tube spectrum and the transmission of the X-ray optics), the geometry, and the spectrometer efficiency. Based on a calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties. The algorithms known from the literature for the calculation of X-ray tube spectra are evaluated with regard to their deviations in the spectral distribution. Within this work, a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is derived from the detected spectrum, measured at a setup especially developed for the direct measurement of X-ray tube spectra, and is compared to the spectrum calculated with the model of this work. A procedure for the determination of the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular use at
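
    For context, a commonly quoted simplified form of the Sherman equation (monochromatic excitation at energy E0, thick homogeneous sample, primary fluorescence only) makes the role of the fundamental parameters explicit; this is a textbook form reproduced as an illustration, not an equation taken from the thesis itself:

    ```latex
    I_i \;=\; I_0 \,\frac{\Omega}{4\pi}\,\varepsilon(E_i)\,
          \frac{c_i\,\tau_i(E_0)\,\omega_i\,p_i\left(1-\frac{1}{j_i}\right)}
               {\mu(E_0)/\sin\psi_1 \;+\; \mu(E_i)/\sin\psi_2}
    ```

    Here I_0 is the incident intensity, Ω/4π the detection solid angle, ε(E_i) the spectrometer efficiency, c_i the element content, τ_i the photoionization cross section at E0, ω_i the fluorescence yield, p_i the transition probability, j_i the absorption-edge jump ratio, μ the mass attenuation coefficients of the sample, and ψ1, ψ2 the incidence and take-off angles: precisely the instrumental and atomic fundamental parameters whose characterization the thesis addresses.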

  14. Influence of scan duration on the estimation of pharmacokinetic parameters for breast lesions: a study based on CAIPIRINHA-Dixon-TWIST-VIBE technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Wen; Zhao, Bin; Wang, Guangbin; Wang, Cuiyan [Shandong University, Department of MR Imaging, Shandong Medical Imaging Research Institute, Jinan, Shandong (China); Liu, Hui [Siemens Healthcare, MR Collaborations NE Asia, Shanghai (China)

    2015-04-01

    To evaluate the influence of scan duration on pharmacokinetic parameters and their performance in differentiating benign from malignant breast lesions. Dynamic breast imaging was performed on a 3.0-T MR system using a prototype CAIPIRINHA-Dixon-TWIST-VIBE (CDT-VIBE) sequence with a temporal resolution of 11.9 s. Enrolled in the study were 53 women with 55 lesions (26 benign and 29 malignant). Pharmacokinetic parameters (Ktrans, ve, kep and iAUC) were calculated for various scan durations from 1 to 7 min after injection of contrast medium using the Tofts model. Ktrans, kep and ve calculated from the 1-min dataset were significantly different from those calculated from the other datasets. In benign lesions, Ktrans, kep and ve were significantly different only between 1 min and 2 min (corrected P < 0.05), but in malignant lesions there were significant differences for all comparisons up to 6 min vs. 7 min (corrected P < 0.05). There were no significant differences in AUCs for any of the parameters (P > 0.05). In breast dynamic contrast-enhanced MRI the scan duration has a significant impact on pharmacokinetic parameters, but the diagnostic ability may not be significantly affected. A scan duration of 5 min after injection of contrast medium may be sufficient for the calculation of Tofts model pharmacokinetic parameters. (orig.)
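
    A hedged sketch of the standard Tofts model underlying these parameters, evaluated at the CDT-VIBE frame rate; the gamma-variate arterial input function and the tissue parameter values are illustrative assumptions, not the study's population AIF:

    ```python
    import numpy as np

    def tofts_ct(t_min, cp, ktrans, ve):
        """Standard Tofts model: Ct(t) = Ktrans * integral of Cp(tau) *
        exp(-kep * (t - tau)) dtau, with kep = Ktrans / ve."""
        kep = ktrans / ve
        dt = t_min[1] - t_min[0]
        return ktrans * np.convolve(cp, np.exp(-kep * t_min))[: t_min.size] * dt

    dt = 11.9 / 60.0                                  # CDT-VIBE frame time, minutes
    t = np.arange(0.0, 7.0, dt)                       # 7-minute acquisition window
    cp = 5.0 * (t / 0.5) * np.exp(1.0 - t / 0.5)      # assumed gamma-variate AIF
    ct = tofts_ct(t, cp, ktrans=0.2, ve=0.3)          # Ktrans in 1/min

    # Truncating ct emulates shorter scan durations; the initial area under the
    # curve (iAUC) over the first 1.5 min is insensitive to that truncation.
    iauc = np.trapz(ct[t <= 1.5], t[t <= 1.5])
    print(f"iAUC(0-1.5 min) = {iauc:.3f} (arbitrary units)")
    ```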

  16. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  17. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to offer significant advantages for high-precision particle therapy, especially in media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, needs to be evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be defined consistently for all proton therapy applications, since the impact of the parameter settings depends on the proton irradiation technique.
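
    For readers reproducing such range comparisons, a distal range such as R90 (the depth at which the dose distal to the Bragg peak falls to 90 % of its maximum) can be read off a computed PDD by interpolation. A minimal sketch, assuming hypothetical depth and dose arrays; the Gaussian "Bragg peak" below is a crude stand-in:

        # Sketch: extract the distal R90 range from a percentage depth dose curve.
        import numpy as np

        def r90(depth_mm, pdd):
            """Depth where dose falls to 90% of maximum on the distal edge."""
            pdd = np.asarray(pdd, dtype=float)
            i_peak = int(np.argmax(pdd))
            distal_d = depth_mm[i_peak:]
            distal_p = pdd[i_peak:]
            target = 0.9 * pdd[i_peak]
            # np.interp needs increasing xp; the distal edge falls, so negate.
            return float(np.interp(-target, -distal_p, distal_d))

        # Hypothetical example on a 0.5 mm grid (crude stand-in for a Bragg peak).
        depth = np.arange(0, 300, 0.5)
        dose = np.exp(-((depth - 269.0) / 6.0) ** 2)
        print(f"R90 = {r90(depth, dose):.2f} mm")  # compare, e.g., 269.63 mm (FLUKA)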

  18. Visual information transfer. Part 1: Assessment of specific information needs. Part 2: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1985-01-01

    The present study explored eye scan behavior as a function of the level of subject training. Oculometric (eye scan) measures were recorded from each of ten subjects during training trials on a CRT-based flight simulation task. The task developed for the study incorporated subtasks representative of specific activities performed by pilots, but which could be performed at asymptotic levels within relatively short periods of training. Changes in eye scan behavior were examined as initially untrained subjects developed skill in the task. Eye scan predictors of performance on the task were found. Examination of eye scan in proximity to selected task events revealed differences in the distribution of looks at the instruments as a function of level of training.

  19. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.

    2006-01-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code against MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
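
    The figure of merit behind such code-to-code comparisons can be illustrated in a few lines: given normalized channel powers from PUMA and a reference MCNP5 calculation, compute the maximum and RMS relative deviations. The values below are invented placeholders, not results from the paper.

        # Sketch: compare channel powers from two codes (hypothetical values).
        import numpy as np

        puma  = np.array([1.02, 0.97, 1.05, 0.99, 0.96])  # normalized channel powers
        mcnp5 = np.array([1.00, 0.98, 1.04, 1.00, 0.97])  # reference (MCNP5)

        rel_dev = 100.0 * (puma - mcnp5) / mcnp5          # percent deviation
        print("max |dev| = %.2f %%" % np.max(np.abs(rel_dev)))
        print("RMS dev   = %.2f %%" % np.sqrt(np.mean(rel_dev ** 2)))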

  20. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  1. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for points of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  2. Effect of imaging parameters of spiral CT scanning on image quality for the dental implants. Visual evaluation using a semi-anthropomorphic mandible phantom

    International Nuclear Information System (INIS)

    Morita, Yasuhiko; Indou, Hiroko; Honda, Eiichi

    2002-01-01

    The purpose of this study was to evaluate the effect of the parameters of spiral CT scanning on the image quality required for the planning of dental implant operations. A semi-anthropomorphic mandible phantom with artificial mandibular canals and tooth roots was used as a standard object for imaging. Spiral CT scans of the phantom, placed in water phantoms with diameters of 20 and 16 cm, were performed. Visibility of the artificial mandibular canal, made of a Teflon tube, and of the gaps between tooth apex and canal in the mandible phantom was evaluated for various combinations of slice thickness, table speed, angle to the canal, and X-ray tube current. The tooth roots were made of PVC (polyvinyl chloride). The artificial mandibular canal was clearly observed on the images with 1 mm slice thickness. At the same table speed of 2 mm/rotation, the images with thin slices (1 mm) were superior to those with thick slices (2 mm). The gap between tooth apex and canal was erroneously diagnosed on images acquired with table speeds of 3 mm/rotation. Scanning horizontally, parallel to the canal, resulted in poor image quality for observation of the mandibular canals because of the partial volume effect. A relatively high X-ray tube current (125 mA) at thin-slice (1 mm) scanning was required for scanning the mandible phantom in the 20 cm water vessel. Spiral scanning with a slice thickness of 1 mm and table speeds of 1 or 2 mm/rotation seemed to be suitable for dental implants. The results of this study suggested that diagnosis from two independent spiral scans with different angles to the object was more accurate and more efficient than from a single spiral scan. (author)

  3. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  4. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  5. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...

  6. Analysis of improvement in performance and design parameters for enhancing resolution in an atmospheric scanning electron microscope.

    Science.gov (United States)

    Yoon, Yeo Hun; Kim, Seung Jae; Kim, Dong Hwan

    2015-12-01

    The scanning electron microscope is used in various fields to go beyond the diffraction limit of the optical microscope. However, the electron path must be kept in vacuum so that the electrons are not scattered, and samples therefore require pretreatment before use in the vacuum. To directly observe large and fully hydrophilic samples without pretreatment, an atmospheric scanning electron microscope (ASEM) is needed. We developed an electron filter unit and an electron detector unit for the implementation of the ASEM. The key property of the electron filter unit is that it transmits electrons while blocking air molecules. The electron detector unit collects the backscattered electrons. We conducted experiments using selected membrane materials: Havar foil, carbon film and SiN film.

  7. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  8. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a (hypothetical) non-existing hazardous establishment located at a randomly chosen location in

  9. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  10. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. An implementation of both a parametric and a non-parametric model is presented.
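
    The nonparametric (DEA) side of such a benchmarking tool reduces to one small linear program per unit. The sketch below computes input-oriented CCR efficiency scores with scipy.optimize.linprog; the input/output data are hypothetical, and this is a generic textbook formulation, not the paper's implementation.

        # Sketch: input-oriented CCR DEA efficiency scores via linear programming.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs, one row per unit
        Y = np.array([[1.0], [1.0], [1.5]])                 # outputs, one row per unit
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]

        def ccr_efficiency(o):
            # Variables: [theta, lambda_1..lambda_n]; minimize theta.
            c = np.r_[1.0, np.zeros(n)]
            # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
            A_in = np.c_[-X[o], X.T]
            # Outputs: -sum_j lambda_j * y_rj <= -y_ro
            A_out = np.c_[np.zeros(s), -Y.T]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(None, None)] + [(0, None)] * n, method="highs")
            return res.x[0]

        for o in range(n):
            print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")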

  11. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  12. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  13. Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Directory of Open Access Journals (Sweden)

    Alexander Wendel

    2017-10-01

    Line scanning cameras, which capture only a single line of pixels, have been increasingly used on ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera’s 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera’s pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m/1.05° and 0.18 m/2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human readable way.
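
    A much-simplified sketch of the underlying idea, reprojection-based 6D pose estimation, is given below. It uses a generic pinhole camera model rather than the paper's line scanning model, and the intrinsics, points, true pose and noise levels are all invented for illustration.

        # Simplified sketch: estimate a 6D camera offset by minimizing the
        # reprojection error of known calibration points (pinhole model).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.transform import Rotation

        np.random.seed(0)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

        def project(points_world, pose6):
            rvec, t = pose6[:3], pose6[3:]
            pc = Rotation.from_rotvec(rvec).apply(points_world) + t
            uv = (K @ pc.T).T
            return uv[:, :2] / uv[:, 2:3]

        def cost(pose6, points_world, uv_observed):
            return np.sum((project(points_world, pose6) - uv_observed) ** 2)

        # Hypothetical: points triangulated in the navigation frame + their labels.
        pts = np.random.uniform([-1, -1, 3], [1, 1, 6], (20, 3))
        true_pose = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
        uv = project(pts, true_pose) + np.random.normal(0, 0.5, (20, 2))

        res = minimize(cost, x0=np.zeros(6), args=(pts, uv), method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
        print("estimated pose (rotvec, t):", np.round(res.x, 3))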

  14. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes

    Directory of Open Access Journals (Sweden)

    Mirian Ara

    2015-01-01

    Objective. To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. Methods. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. Results. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. Conclusions. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.

  15. Repeatability and Reproducibility of Retinal Nerve Fiber Layer Parameters Measured by Scanning Laser Polarimetry with Enhanced Corneal Compensation in Normal and Glaucomatous Eyes.

    Science.gov (United States)

    Ara, Mirian; Ferreras, Antonio; Pajarin, Ana B; Calvo, Pilar; Figus, Michele; Frezzotti, Paolo

    2015-01-01

    To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.
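
    To make the reported statistics concrete, the following sketch computes a one-way random-effects ICC(1,1) and a mean within-subject COV for repeated scans. The data are simulated, and the ICC variant is an assumption; the record does not state which form was used.

        # Sketch: one-way random-effects ICC(1,1) and COV for repeated measurements.
        import numpy as np

        def icc_1_1(data):
            """data: (subjects, repeats). Returns ICC(1,1)."""
            n, k = data.shape
            grand = data.mean()
            msb = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
            msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        def cov_percent(data):
            """Mean within-subject coefficient of variation, in percent."""
            return 100.0 * np.mean(data.std(axis=1, ddof=1) / data.mean(axis=1))

        rng = np.random.default_rng(0)
        truth = rng.normal(55.0, 6.0, size=(40, 1))         # per-subject RNFL value
        scans = truth + rng.normal(0, 1.2, size=(40, 3))    # three repeated scans
        print(f"ICC = {icc_1_1(scans):.3f}, COV = {cov_percent(scans):.2f} %")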

  16. Workshop: Monte Carlo computational performance benchmark - Contributions

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.

    2013-01-01

    This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations

  17. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments of the Research Committee on Shielding Design of the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in those codes. The present benchmark problems are principally for investigating the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  18. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
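
    The first-tier screening described here amounts to comparing measured media concentrations against the benchmark values; contaminants above their benchmark are retained as COPCs. A minimal sketch with invented benchmark and measurement numbers:

        # Sketch: first-tier screening of contaminants against NOAEL-based benchmarks.
        benchmarks_mg_per_L = {"cadmium": 0.001, "zinc": 0.05, "toluene": 0.2}  # hypothetical
        measured_mg_per_L = {"cadmium": 0.003, "zinc": 0.01, "toluene": 0.5}    # hypothetical

        copcs = [chem for chem, conc in measured_mg_per_L.items()
                 if conc > benchmarks_mg_per_L[chem]]
        print("Contaminants of potential concern:", copcs)  # retained for tier 2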

  19. Optimisation of the CT parameters with evaluation of MDCT double-scan images in the planning of the dental implant treatment

    International Nuclear Information System (INIS)

    Wojciechowski, W.; Urbanik, A.; Kownacki, P.; Kownacki, S.; Sowa, A.

    2009-01-01

    Background: The aim of the present study was optimisation of the examination parameters and evaluation of the reliability of MDCT double-scan images obtained with computer navigation for dental implant treatment. Material/Methods: Using the MDCT scanner SOMATOM Sensation (Siemens), CT images of a phantom were acquired with varying slice collimation (10 × 0.75 mm, 10 × 1.5 mm), slice thickness (0.75, 1, 2, 3, 5 mm) and pitch (0.5, 1, 1.5). Additionally, an analysis of various filters from H20f to H60f was performed. The study used a phantom of a human cadaver head. Qualitative analysis was done using Nobel Guide (Nobel Biocare, Sweden), assessing possible artefacts on the images, with measurements of the bone structure on all filters compared against the real object. Results: The quality of the phantom images was assessed as optimal for slice thicknesses of 0.75 and 1 mm. The use of different pitch values did not have a statistically significant effect on image quality. The application of various filters did not alter the measured parameters of the bone structure; however, the use of lower filters (H30f and H40f) had a beneficial effect on the quality of the 3D reconstruction. The 'window' settings in the CT examination seemed to have a greater influence on the measurement and evaluation of the bone structure. Conclusions: Slice collimation and slice thickness are the most important parameters in the selection of an optimal scan protocol. It is recommended that postprocessing use the above-mentioned parameter combinations, applying different filters (H30f and H60f) at a fixed 'window' setting in the CT examination. (authors)

  20. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings...

  1. Parameter optimization for Ti-47Al-2Cr-2Nb in selective laser melting based on geometric characteristics of single scan tracks

    Science.gov (United States)

    Shi, Xuezhi; Ma, Shuyuan; Liu, Changmeng; Wu, Qianru

    2017-05-01

    A rapid method for parameter optimization for Ti-47Al-2Cr-2Nb powder in selective laser melting (SLM), based on melt pool characterization, is developed. The laser power and scanning speed are selected from the surface morphologies of single tracks in order to fabricate high-density samples. The influences of the parameters on the sectional geometric characteristics (width, height, depth, bulge area, penetration area and dilution) are analyzed by linear regression analysis, and regression models are established to predict the geometric characteristics of single tracks. An inappropriate combination of process parameters can cause material evaporation and melt pool instability, leading to cracking and balling, which reduces the linear correlation. The loss of Al is also an important reference criterion for the optimization of the process parameters, so Al content measurements are conducted to validate evaporation during SLM processing. The Al loss from single tracks varies from 5.73 to 0.32 at%. Applying the optimal process parameters to the fabrication of TiAl parts results in densities ranging from 97.34% to 98.95%. However, more serious losses and an uneven distribution of Al are observed in the fabricated samples.
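
    The regression step can be sketched briefly: fit a linear model of one sectional characteristic (here track width) against laser power and scanning speed, and report R². All measurement values below are invented for illustration and are not the paper's data.

        # Sketch: linear regression of single-track width on power and scan speed.
        import numpy as np

        # Hypothetical measurements: (laser power W, scan speed mm/s, track width um)
        data = np.array([[150, 400, 128], [150, 800, 102], [200, 400, 149],
                         [200, 800, 117], [250, 400, 171], [250, 800, 133]])
        P, v, w = data[:, 0], data[:, 1], data[:, 2]

        A = np.c_[np.ones_like(P), P, v]            # w ~ b0 + b1*P + b2*v
        coef, *_ = np.linalg.lstsq(A, w, rcond=None)
        pred = A @ coef
        r2 = 1 - np.sum((w - pred) ** 2) / np.sum((w - w.mean()) ** 2)
        print("coefficients:", np.round(coef, 4), " R^2 = %.3f" % r2)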

  2. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  3. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and an immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  4. Radiation injury vs. recurrent brain metastasis: combining textural feature radiomics analysis and standard parameters may increase {sup 18}F-FET PET accuracy without dynamic scans

    Energy Technology Data Exchange (ETDEWEB)

    Lohmann, Philipp; Stoffels, Gabriele; Stegmayr, Carina; Neumaier, Bernd [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); Ceccon, Garry [University of Cologne, Department of Neurology, Cologne (Germany); Rapp, Marion; Sabel, Michael; Kamp, Marcel A. [Heinrich Heine University Duesseldorf, Department of Neurosurgery, Duesseldorf (Germany); Filss, Christian P. [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Nuclear Medicine, Aachen (Germany); Shah, Nadim J. [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Neurology, Aachen (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Department of Neurology, Juelich (Germany); Langen, Karl-Josef [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); RWTH Aachen University Hospital, Department of Nuclear Medicine, Aachen (Germany); Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain, Department of Neurology, Juelich (Germany); Galldiks, Norbert [Forschungszentrum Juelich, Institute of Neuroscience and Medicine, Juelich (Germany); University of Cologne, Department of Neurology, Cologne (Germany); University of Cologne, Center of Integrated Oncology (CIO), Cologne (Germany)

    2017-07-15

    We investigated the potential of textural feature analysis of O-(2-[¹⁸F]fluoroethyl)-L-tyrosine (¹⁸F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic ¹⁸F-FET PET. Tumour-to-brain ratios (TBRs) of ¹⁸F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters and combinations thereof to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81 % for TBRmean alone to 85 % when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83 % and increased to 85 % after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70 % for the kinetic pattern alone, which increased to 83 % when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic ¹⁸F-FET PET scans. (orig.)

  5. A Voxel-Based Method for Automated Identification and Morphological Parameters Estimation of Individual Street Trees from Mobile Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Hongxing Liu

    2013-01-01

    As an important component of urban vegetation, street trees play an important role in the maintenance of environmental quality, the aesthetic beauty of the urban landscape, and social service for inhabitants. Acquiring accurate and up-to-date inventory information for street trees is required for urban horticultural planning and municipal urban forest management. This paper presents a new Voxel-based Marked Neighborhood Searching (VMNS) method for efficiently identifying street trees and deriving their morphological parameters from Mobile Laser Scanning (MLS) point cloud data. The VMNS method consists of six technical components: voxelization, calculating values of voxels, searching and marking neighborhoods, extracting potential trees, deriving morphological parameters, and eliminating pole-like objects other than trees. The method is validated and evaluated through two case studies. The evaluation results show that the completeness and correctness of our method for street tree detection are over 98%. The derived morphological parameters, including tree height, crown diameter, diameter at breast height (DBH), and crown base height (CBH), are in good agreement with field measurements. Our method provides an effective tool for extracting various morphological parameters of individual street trees from MLS point cloud data.
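
    The voxelization step on which the VMNS method is built is straightforward to express in code. The sketch below bins a point cloud into per-voxel point counts; the point data and voxel size are hypothetical, and this covers only the first of the six components.

        # Sketch: voxelize a point cloud into per-voxel point counts.
        import numpy as np

        def voxelize(points, voxel_size):
            """points: (N, 3) array of x, y, z. Returns dict {(i, j, k): count}."""
            idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
            keys, counts = np.unique(idx, axis=0, return_counts=True)
            return {tuple(k): int(c) for k, c in zip(keys, counts)}

        pts = np.random.uniform(0, 10, (100000, 3))   # hypothetical MLS points (m)
        grid = voxelize(pts, voxel_size=0.25)
        print(len(grid), "occupied voxels; max count =", max(grid.values()))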

  6. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It means exchanging information, not just with any organization, but with organizations known to be the best, whether within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  7. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions... (Especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  8. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  9. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    International Nuclear Information System (INIS)

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-01-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters of the GATE and PHITS codes and the resulting percentage depth dose (PDD) have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in all three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  10. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  11. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
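
    To make the benchmark's kernel concrete, here is a compact dense-matrix sketch of conjugate gradients with a symmetric Gauss-Seidel preconditioner, the combination HPCG exercises. It is an illustration under simplifying assumptions, not the benchmark's reference implementation, which is sparse and parallel.

        # Sketch: CG with a symmetric Gauss-Seidel preconditioner (dense, illustrative).
        import numpy as np

        def sym_gauss_seidel_apply(A, r):
            """Apply M^-1 r for M = (D+L) D^-1 (D+U): forward then backward sweep."""
            L = np.tril(A)                 # lower triangle incl. diagonal (D+L)
            U = np.triu(A)                 # upper triangle incl. diagonal (D+U)
            y = np.linalg.solve(L, r)      # forward sweep
            return np.linalg.solve(U, np.diag(A) * y)  # backward sweep

        def pcg(A, b, tol=1e-10, maxiter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = sym_gauss_seidel_apply(A, r)
            p = z.copy()
            rz = r @ z
            for _ in range(maxiter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = sym_gauss_seidel_apply(A, r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 200                                  # small SPD test problem (1D Laplacian)
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = pcg(A, b)
        print("residual:", np.linalg.norm(b - A @ x))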

  12. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...

  13. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    Science.gov (United States)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process, was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar-chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography-based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good-quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot-section components of gas turbine systems from nickel-based superalloy powders.

  14. Radiation injury vs. recurrent brain metastasis: combining textural feature radiomics analysis and standard parameters may increase18F-FET PET accuracy without dynamic scans.

    Science.gov (United States)

    Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert

    2017-07-01

    We investigated the potential of textural feature analysis of O-(2-[¹⁸F]fluoroethyl)-L-tyrosine (¹⁸F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic ¹⁸F-FET PET. Tumour-to-brain ratios (TBRs) of ¹⁸F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters and combinations thereof to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81 % for TBRmean alone to 85 % when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83 % and increased to 85 % after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70 % for the kinetic pattern alone, which increased to 83 % when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic ¹⁸F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity • Textural features help improve discrimination between brain metastasis recurrence and radiation injury • Textural features might be helpful to further understand tumour heterogeneity • Analysis does not require a more time consuming dynamic PET acquisition.
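
    The TBR values on which both the standard and the textural analyses build are simple ratio computations over masked image regions. A sketch with a synthetic PET volume and hypothetical tumour and background masks (none of this is the study's data):

        # Sketch: tumour-to-brain ratios (TBRmean, TBRmax) from a summed PET volume.
        import numpy as np

        rng = np.random.default_rng(1)
        pet = rng.gamma(2.0, 1.0, size=(64, 64, 32))   # hypothetical 20-40 min sum
        tumour_mask = np.zeros(pet.shape, dtype=bool)
        tumour_mask[30:36, 30:36, 14:18] = True
        background_mask = np.zeros(pet.shape, dtype=bool)
        background_mask[5:20, 5:20, 10:20] = True      # "healthy brain" region
        pet[tumour_mask] *= 2.5                        # make the lesion hot

        bg_mean = pet[background_mask].mean()          # reference brain uptake
        tbr_mean = pet[tumour_mask].mean() / bg_mean
        tbr_max = pet[tumour_mask].max() / bg_mean
        print(f"TBRmean = {tbr_mean:.2f}, TBRmax = {tbr_max:.2f}")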

  15. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  16. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality or work organization; other factors can also be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  17. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  18. Comparison between FDG Uptake and Clinicopathologic and Immunohistochemical Parameters in Pre-operative PET/CT Scan of Primary Gastric Carcinoma

    International Nuclear Information System (INIS)

    Han, Eun Ji; Choi, Woo Hee; Chung, Yong An; Kim, Ki Jun; Maeng, Lee So; Sohn, Kyung Myung; Jung, Hyun Suk; Sohn, Hyung Sun; Chung, Soo Kyo

    2009-01-01

    The purpose of this study was to find out which clinicopathologic or immunohistochemical parameters may affect the FDG uptake of the primary tumor in PET/CT scans of gastric carcinoma patients. Eighty-nine patients with stomach cancer who underwent pre-operative FDG PET/CT scans were included. In cases with perceptible FDG uptake in the primary tumor, the maximum standardized uptake value (SUVmax) was calculated. Clinicopathologic results such as depth of invasion (T stage), tumor size, lymph node metastasis, tumor differentiation and Lauren's classification, and immunohistochemical markers such as Ki-67 index and expression of p53, EGFR, Cathepsin D, c-erb-B2 and COX-2, were reviewed. Nineteen of the 89 gastric carcinomas showed imperceptible FDG uptake on PET/CT images. In cases with perceptible FDG uptake in the primary tumor, SUVmax was significantly higher in T2, T3 and T4 tumors than in T1 tumors (5.8±3.1 vs. 3.7±2.1, p=0.002). The SUVmax of large tumors (greater than or equal to 3 cm) was also significantly higher than that of small ones (less than 3 cm) (5.7±3.2 vs. 3.7±2.0, p=0.002). Intestinal-type gastric carcinomas according to Lauren showed higher FDG uptake than non-intestinal types (5.4±2.8 vs. 3.7±1.3, p=0.003). SUVmax differed significantly between the p53-positive and p53-negative groups (6.0±2.8 vs. 4.4±3.0, p=0.035). No significant differences were found for presence of LN metastasis, tumor differentiation, Ki-67 index, or expression of EGFR, Cathepsin D, c-erb-B2 and COX-2. The T stage of gastric carcinoma influenced the detectability of gastric cancer on FDG PET/CT scans. When the gastric carcinoma was perceptible on the PET/CT scan, T stage, primary tumor size, Lauren's classification and p53 expression were related to the degree of FDG uptake in the primary tumor.

  19. Comparison between FDG Uptake and Clinicopathologic and Immunohistochemical Parameters in Pre-operative PET/CT Scan of Primary Gastric Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Han, Eun Ji; Choi, Woo Hee; Chung, Yong An; Kim, Ki Jun; Maeng, Lee So; Sohn, Kyung Myung; Jung, Hyun Suk; Sohn, Hyung Sun; Chung, Soo Kyo [College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of)

    2009-02-15

    The purpose of this study was to find out which clinicopathologic or immunohistochemical parameters may affect FDG uptake of the primary tumor in PET/CT scans of gastric carcinoma patients. Eighty-nine patients with stomach cancer who underwent pre-operative FDG PET/CT scans were included. In cases with perceptible FDG uptake in the primary tumor, the maximum standardized uptake value (SUVmax) was calculated. Clinicopathologic results such as depth of invasion (T stage), tumor size, lymph node metastasis, tumor differentiation and Lauren's classification, and immunohistochemical markers such as Ki-67 index and expression of p53, EGFR, Cathepsin D, c-erb-B2 and COX-2, were reviewed. Nineteen out of 89 gastric carcinomas showed imperceptible FDG uptake on PET/CT images. In cases with perceptible FDG uptake in the primary tumor, SUVmax was significantly higher in T2, T3 and T4 tumors than in T1 tumors (5.8±3.1 vs. 3.7±2.1, p=0.002). SUVmax of large tumors (≥3 cm) was also significantly higher than that of small ones (<3 cm) (5.7±3.2 vs. 3.7±2.0, p=0.002). The intestinal types of gastric carcinomas according to Lauren showed higher FDG uptake compared to the non-intestinal types (5.4±2.8 vs. 3.7±1.3, p=0.003). SUVmax differed significantly between the p53-positive and p53-negative groups (6.0±2.8 vs. 4.4±3.0, p=0.035). No significant difference was found for presence of LN metastasis, tumor differentiation, Ki-67 index, or expression of EGFR, Cathepsin D, c-erb-B2 and COX-2. T stage of gastric carcinoma influenced the detectability of gastric cancer on FDG PET/CT scan. When a gastric carcinoma was perceptible on the PET/CT scan, T stage, size of the primary tumor, Lauren's classification and p53 expression were related to the degree of FDG uptake in the primary tumor.

  20. Prediction of therapeutic response in steroid-treated pulmonary sarcoidosis. Evaluation of clinical parameters, bronchoalveolar lavage, gallium-67 lung scanning, and serum angiotensin-converting enzyme levels

    International Nuclear Information System (INIS)

    Hollinger, W.M.; Staton, G.W. Jr.; Fajman, W.A.; Gilman, M.J.; Pine, J.R.; Check, I.J.

    1985-01-01

    To find a pretreatment predictor of steroid responsiveness in pulmonary sarcoidosis, the authors studied 21 patients before and after steroid treatment by clinical evaluation, pulmonary function tests, bronchoalveolar lavage (BAL), gallium-67 lung scan, and serum angiotensin-converting enzyme (SACE) level. Although clinical score, forced vital capacity (FVC), BAL percent lymphocytes (% lymphs), quantitated gallium-67 lung uptake, and SACE levels all improved with therapy, only the pretreatment BAL % lymphs correlated with the improvement in FVC (r = 0.47, p < 0.05). Pretreatment BAL % lymphs of ≥35% predicted improvement in FVC in 10/11 patients, whereas among 10 patients with BAL % lymphs <35%, 5 patients improved and 5 deteriorated. Clinical score, pulmonary function parameters, quantitated gallium-67 lung uptake, and SACE level, used alone, in combination with BAL % lymphs, or in combination with each other, did not improve this predictive value. The authors conclude that steroid therapy improves a number of clinical and laboratory parameters in sarcoidosis, but only the pretreatment BAL % lymphs are useful in predicting therapeutic responsiveness.

  1. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  2. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
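
    The quality metric at the heart of such k-NN benchmarks is recall against an exact brute-force baseline; a minimal sketch follows. The function and data below are illustrative, not part of the ANN-Benchmarks code.

```python
import numpy as np

def recall_at_k(approx_ids: np.ndarray, exact_ids: np.ndarray, k: int) -> float:
    """Fraction of the true k nearest neighbours recovered by the approximate index.

    approx_ids, exact_ids: (n_queries, k) arrays of neighbour indices.
    """
    hits = sum(len(set(a[:k]) & set(e[:k])) for a, e in zip(approx_ids, exact_ids))
    return hits / (len(exact_ids) * k)

# Exact baseline by brute force (feasible for small data only).
rng = np.random.default_rng(0)
data, queries = rng.normal(size=(1000, 32)), rng.normal(size=(10, 32))
dists = np.linalg.norm(queries[:, None, :] - data[None, :, :], axis=-1)
exact = np.argsort(dists, axis=1)[:, :10]
print(recall_at_k(exact, exact, k=10))  # 1.0 by construction
```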

  3. Controlled attenuation parameter using the FibroScan® XL probe for quantification of hepatic steatosis for non-alcoholic fatty liver disease in an Asian population.

    Science.gov (United States)

    Chan, Wah-Kheong; Nik Mustapha, Nik Raihan; Wong, Grace Lai-Hung; Wong, Vincent Wai-Sun; Mahadeva, Sanjiv

    2017-02-01

    The FibroScan® XL probe reduces failures of liver stiffness measurement (LSM) and unreliable results in obese patients. The objective of this article is to evaluate the accuracy of the controlled attenuation parameter (CAP) obtained using the XL probe for the estimation of hepatic steatosis in patients with non-alcoholic fatty liver disease (NAFLD). Adult NAFLD patients with a liver biopsy within six months were included and were examined with the FibroScan® M and XL probes. Histopathological findings were reported according to the Non-Alcoholic Steatohepatitis Clinical Research Network Scoring System. Participants who did not have fatty liver on ultrasonography were recruited as controls. A total of 57 NAFLD patients and 22 controls were included. The mean age of the NAFLD patients and controls was 50.1 ± 10.4 years and 20.2 ± 1.3 years, respectively (p = 0.000). The mean body mass index was 30.2 ± 5.0 kg/m² and 20.5 ± 2.4 kg/m², respectively (p = 0.000). The distribution of steatosis grades was: S0, 29%; S1, 17%; S2, 35%; S3, 19%. The AUROC for estimation of steatosis grade ≥ S1, S2 and S3 was 0.94, 0.80 and 0.69, respectively, using the M probe, and 0.97, 0.81 and 0.67, respectively, using the XL probe. CAP obtained using the XL probe had similar accuracy to the M probe for the estimation of hepatic steatosis in NAFLD patients.
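
    AUROC figures like those above can be reproduced in principle by binarising the histology grade at each threshold and using CAP as the score; a minimal sketch with synthetic data (not the study's) follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
steatosis_grade = rng.integers(0, 4, size=79)             # S0..S3 from biopsy (synthetic)
cap = 200 + 30 * steatosis_grade + rng.normal(0, 25, 79)  # CAP in dB/m (synthetic)

# One AUROC per binarisation threshold, mirroring the >= S1/S2/S3 analysis.
for s in (1, 2, 3):
    print(f"AUROC for grade >= S{s}: {roc_auc_score(steatosis_grade >= s, cap):.2f}")
```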

  4. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large library of linear solvers being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which various successively smaller systems are solved.
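
    A minimal sketch of the kind of problem the benchmark driver builds, here set up with the pyamg library rather than hypre/BoomerAMG; using the gallery's type='FE' option to obtain a 27-point stencil in 3D is an assumption of this sketch.

```python
import numpy as np
import pyamg

# 3D Laplace-type test problem; the FE discretisation gives a 27-point stencil.
A = pyamg.gallery.poisson((40, 40, 40), type='FE', format='csr')
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)   # build the AMG hierarchy
residuals = []
x = ml.solve(b, tol=1e-8, residuals=residuals)
print(f"{len(residuals) - 1} V-cycles, final residual {residuals[-1]:.2e}")
```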

  5. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak... See Figure 23 for the benchmark data set: the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at

  6. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing performance degradation to be identified promptly. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These range from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.

  7. CREDIT BUREAU BENCHMARKING AS A TOOL FOR ESTIMATION OF BANK'S POSITION AT THE MARKET

    Directory of Open Access Journals (Sweden)

    A. Kaminsky

    2015-03-01

    Full Text Available The article presents the concept of benchmarking on the consumer loan market. The essence of such benchmarking is the comparative analysis of a bank's activity parameters against market-average values obtained from a credit history bureau. The use of such benchmarking is considered a tool for estimating a bank's position in the market.

  8. Spatial and optical parameters of contrails in the vortex and dispersion regime determined by means of a ground-based scanning lidar

    Energy Technology Data Exchange (ETDEWEB)

    Freudenthaler, V.; Homburg, F.; Jaeger, H. [Fraunhofer-Inst. fuer Atmosphaerische Umweltforschung (IFU), Garmisch-Partenkirchen (Germany)

    1997-12-31

    The spatial growth of individual condensation trails (contrails) of commercial aircraft in the time range from 15 s to 60 min behind the aircraft is investigated by means of a ground-based scanning backscatter lidar. The growth in width is mainly governed by wind shear and varies between 18 m/min and 140 m/min. The growth of the cross-section varies between 3500 m²/min and 25000 m²/min. These values are in agreement with results of model calculations and former field measurements. The vertical growth is often limited by boundaries of the humid layer at flight level, but values up to 18 m/min were observed. Optical parameters like depolarization, optical depth and lidar ratio, i.e. the extinction-to-backscatter ratio, have been retrieved from the measurements at a wavelength of 532 nm. The linear depolarization rises from values as low as 0.06 for a young contrail (10 s old) to values around 0.5, typical for aged contrails. The latter indicates the transition from non-crystalline to crystalline particles in persistent contrails within a few minutes. The scatter of depolarization values measured in individual contrails is narrow, independent of the contrail's age, and suggests a rather uniform growth of the particles inside a contrail. (author) 18 refs.

  9. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  10. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    This report is based on the survey 'Industrial Companies in Denmark - Today and Tomorrow', section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any... compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  11. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  12. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies by other research groups focusing on the reproducibility of 1D permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.

  13. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. "exact" computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs. IBM) or facility.

  14. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. As for the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  15. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  16. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described.

  17. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    Full Text Available In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analyses, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
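
    A minimal sketch of the restricted nearest-neighbour matching idea used in such evaluations: a detected tree is matched to at most one inventory tree within a search radius, and unmatched entries count as commission or omission errors. The coordinates, radius and greedy strategy below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_trees(detected: np.ndarray, inventory: np.ndarray, radius: float):
    """Greedy one-to-one matching; returns a list of (det_idx, inv_idx) pairs."""
    tree = cKDTree(inventory)
    dists, idx = tree.query(detected, k=1, distance_upper_bound=radius)
    matches, used = [], set()
    for d_i in np.argsort(dists):                  # closest candidates first
        if np.isfinite(dists[d_i]) and idx[d_i] not in used:
            used.add(idx[d_i])
            matches.append((int(d_i), int(idx[d_i])))
    return matches

detected = np.array([[1.0, 1.0], [5.2, 4.9], [9.0, 9.0]])   # tree tops from ALS
inventory = np.array([[1.2, 0.8], [5.0, 5.0]])              # in situ reference
matches = match_trees(detected, inventory, radius=1.0)
commission = len(detected) - len(matches)   # detections with no reference tree
omission = len(inventory) - len(matches)    # reference trees never detected
print(matches, commission, omission)
```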

  18. Nuclear Scans

    Science.gov (United States)

    Nuclear scans use radioactive substances to see structures and functions inside your body. They use a special ... images. Most scans take 20 to 45 minutes. Nuclear scans can help doctors diagnose many conditions, including ...

  19. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  20. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
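
    A minimal sketch of the benchmark dose concept (not EPA's BMDS implementation): fit a dose-response model and invert it at a chosen benchmark response. The model form and the data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def logistic(d, background, slope, ed50):
    """Simple logistic dose-response with a background incidence term."""
    return background + (1 - background) / (1 + np.exp(-slope * (d - ed50)))

dose = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
risk = np.array([0.02, 0.05, 0.20, 0.48, 0.85])   # synthetic incidence data

(p0, s, ed50), _ = curve_fit(logistic, dose, risk, p0=[0.02, 0.05, 100.0])

bmr = 0.10                                  # benchmark response: 10% extra risk
target = p0 + bmr * (1 - p0)
bmd = brentq(lambda d: logistic(d, p0, s, ed50) - target, 0.0, 200.0)
print(f"BMD for {bmr:.0%} extra risk: {bmd:.1f}")
```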

  1. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  2. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. © 2014 Elsevier Inc. All rights reserved.

  3. Assessing and benchmarking multiphoton microscopes for biologists

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  4. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Translated from Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen and driving on petrol or diesel were also included. Research and growing insight show ever more often that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric driving. The fuels' effects were examined against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  5. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if they are to be met in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
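
    A minimal sketch of such a micro-benchmark: time the same memory-bandwidth and floating-point kernels on bare metal and inside the virtual machine, then compare. Sizes and repeat counts are illustrative.

```python
import time
import numpy as np

def bench(label, fn, repeats=5):
    """Run fn several times and report the best wall-clock time."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    print(f"{label}: best of {repeats} = {min(times):.4f} s")

a = np.random.rand(50_000_000)   # ~400 MB working set, larger than any cache
b = np.empty_like(a)

bench("memory copy", lambda: np.copyto(b, a))                  # streams through RAM
bench("float multiply-add", lambda: np.add(a * 1.0001, 0.5, out=b))
```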

  6. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
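
    A minimal sketch of the simplest benchmarking adjustment (pro-rata scaling), in which monthly values are rescaled so the year sums to the annual benchmark; the staged first-differencing approach recommended in the report goes beyond this.

```python
import numpy as np

monthly = np.array([ 9.8, 10.1,  9.9, 10.4, 10.0, 10.2,
                    10.5, 10.3, 10.1,  9.9, 10.0, 10.2])  # preliminary series
annual_benchmark = 124.0                                   # trusted annual total

# Scale every month by the same factor so the annual sum hits the benchmark.
adjusted = monthly * (annual_benchmark / monthly.sum())
assert np.isclose(adjusted.sum(), annual_benchmark)
print(adjusted.round(2))
```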

  7. Influence of raster scan parameters on the image quality for the THz phase imaging in collimated beam with a wide aperture

    International Nuclear Information System (INIS)

    Kulya, M S; Petrov, N V; Tcypkin, A N; Bespalov, V G

    2014-01-01

    We numerically investigated the influence of the terahertz radiation wavelength and the scanning pinhole size on the quality of images reconstructed by the terahertz pulse time-domain holography method. The improvement in quality is achieved mainly by reducing the radiation wavelength; the pinhole size and pinhole step (the pinhole displacement at each step) affect the spatial resolution of the reconstructed object and significantly affect the scanning time. The pinhole does not significantly degrade the quality as long as its size is not smaller than the wavelength used.

  8. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1, Web-based Benchmarking, was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
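
    A minimal sketch of whole-building benchmarking in the spirit of such tools: place one building's energy use intensity within a peer distribution. The peer data below are synthetic, not drawn from CBECS or Cal-Arch.

```python
import numpy as np
from scipy import stats

# Synthetic peer group of energy use intensities (EUI), kBtu/ft2-yr.
peer_eui = np.random.default_rng(2).lognormal(mean=4.3, sigma=0.3, size=500)
my_eui = 95.0   # the building being benchmarked

pct = stats.percentileofscore(peer_eui, my_eui)
print(f"Building uses more energy than {pct:.0f}% of comparable buildings")
```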

  9. Benchmarking & european sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    ...to support sustainable European transport policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  10. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  11. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    Science.gov (United States)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined as well as the effects of degraded and uncorrected motion feedback, and instrument scanning efficiency by the pilot. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  12. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  13. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  14. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    The aim is to utilize a standardized dataset with specific definitions to prospectively collect international data and so provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  15. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input.

  16. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive-waste-burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  17. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  18. CT in acute stroke: improved detection of dense intracranial arteries by varying window parameters and performing a thin-slice helical scan

    International Nuclear Information System (INIS)

    Gadda, D.; Vannucchi, L.; Niccolai, F.; Neri, A.T.; Carmignani, L.; Pacini, P.

    2002-01-01

    We evaluated the possibility of improving detection of a dense intracranial artery on CT in acute stroke by narrowing window width, varying window level and performing a thin-slice helical scan for the circle of Willis, in some cases followed by postprocessing maximum-intensity projections. We carried out 32 examinations of 31 patients with a documented cerebral ischaemic attack, performing cranial CT within 6 h of the onset of symptoms. Patients with intracranial haemorrhage were excluded, as were patients who went on to thrombolytic therapy. Varying window width and centre level on standard 5 mm thick contiguous axial slices, we detected a dense proximal middle cerebral artery (MCA) in a higher proportion of patients. A 1.1 mm thick helical scan through the circle of Willis improved recognition of a dense distal horizontal segment and the temporoinsular branches of the MCA and of a dense posterior cerebral artery. (orig.)
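
    A minimal sketch of the window width/level operation the authors varied: Hounsfield units are clipped to a window and mapped to display grey levels, so narrowing the width increases contrast around the chosen centre and makes a dense (hyperattenuating) artery stand out. The values are illustrative.

```python
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map Hounsfield units to 0-255 display values for a given window."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo) * 255

slice_hu = np.array([[30, 45, 80], [20, 60, 35]], dtype=float)   # toy CT values
narrow = apply_window(slice_hu, center=40, width=40)     # dense vessel pops out
standard = apply_window(slice_hu, center=35, width=80)   # routine brain window
print(narrow.round(0), standard.round(0), sep="\n")
```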

  19. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSOs) in Europe are increasingly subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRAs), since there is normally a single TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost

  20. Benchmark simulations of ICRF antenna coupling

    International Nuclear Information System (INIS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-01-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved

  1. UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK

    Directory of Open Access Journals (Sweden)

    M. Cramer

    2017-08-01

    Full Text Available Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus is laid on geometry here, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as was proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  2. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus is laid on geometry here, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as was proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
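
    A minimal sketch of the geometric calibration step such a benchmark evaluates, using OpenCV's standard chessboard routine to estimate the camera matrix and lens distortion; the image file names are placeholders, and real calibration images are required for the code to run.

```python
import cv2
import numpy as np

pattern = (9, 6)   # inner corners of the chessboard target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_01.jpg", "calib_02.jpg", "calib_03.jpg"]:  # placeholder paths
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)   # stability check across repeated runs
print("Camera matrix:\n", K)
```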

  3. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  4. Acquisition parameters optimization of a transmission electron forward scatter diffraction system in a cold-field emission scanning electron microscope for nanomaterials characterization.

    Science.gov (United States)

    Brodusch, Nicolas; Demers, Hendrix; Trudeau, Michel; Gauvin, Raynald

    2013-01-01

    Transmission electron forward scatter diffraction (t-EFSD) is a new technique providing crystallographic information with high resolution on thin specimens by using a conventional electron backscatter diffraction (EBSD) system in a scanning electron microscope. In this study, the impact of tilt angle, working distance, and detector distance on the Kikuchi pattern quality were investigated in a cold-field emission scanning electron microscope (CFE-SEM). We demonstrated that t-EFSD is applicable for tilt angles ranging from -20° to -40°. Working distance (WD) should be optimized for each material by choosing the WD for which the EBSD camera screen illumination is the highest, as the number of detected electrons on the screen is directly dependent on the scattering angle. To take advantage of the best performances of the CFE-SEM, the EBSD camera should be close to the sample and oriented towards the bottom to increase forward scattered electron collection efficiency. However, specimen chamber cluttering and beam/mechanical drift are important limitations in the CFE-SEM used in this work. Finally, the importance of t-EFSD in materials science characterization was illustrated through three examples of phase identification and orientation mapping. © Wiley Periodicals, Inc.

  5. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  6. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  7. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 boards, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the presented benchmarks. The performance and energy efficiency of our cluster were compared with a network-attached storage (NAS) system and with a desktop PC.

  8. Cooperative scans

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2004-01-01

    Data mining, information retrieval and other application areas exhibit a query load with multiple concurrent queries touching a large fraction of a relation. This leads to individual query plans based on a table scan or large index scan. The implementation of this access path in most

  9. SU-E-T-266: Development of Evaluation System of Optimal Synchrotron Controlling Parameter for Spot Scanning Proton Therapy with Multiple Gate Irradiations in One Operation Cycle

    International Nuclear Information System (INIS)

    Yamada, T; Fujii, Y; Miyamoto, N; Matsuura, T; Takao, S; Matsuzaki, Y; Koyano, H; Shirato, H; Nihongi, H; Umezawa, M; Matsuda, K; Umegaki, K

    2015-01-01

    Purpose: We have developed a gated spot-scanning proton beam therapy system with real-time tumor-tracking. This system is capable of multiple-gated irradiation in a single synchrotron operation cycle by controlling the wait-time for consecutive gate signals during the flat-top phase, so that the decrease in irradiation efficiency induced by irregular variations of the gate signal is reduced. Our previous studies have shown that a 200 ms wait-time is appropriate to increase the average irradiation efficiency, but the optimal wait-time can vary from patient to patient and from day to day. In this research, we have developed a system for evaluating the optimal wait-time of each irradiation based on the log data of the real-time-image gated proton beam therapy (RGPT) system. Methods: The developed system consists of a logger for the operation of the RGPT system and software for evaluating the optimal wait-time. The logger records the timing of gate on/off signals, the timing and dose of delivered beam spots, the beam energy, and the timing of X-ray irradiation. The evaluation software calculates the irradiation time for different wait-times by simulating the multiple-gated irradiation operation using this timing information. Actual values preserved in the log data are used for the gate on/off times, the spot irradiation times, and the times to move to the next spot. Design values are used for the acceleration and deceleration times. We applied this system to a patient treated with the RGPT system. Results: The evaluation system found an optimal wait-time of 390 ms, which reduced the irradiation time by about 10 %. The irradiation time with the actual wait-time used in treatment was reproduced with an accuracy of 0.2 ms. Conclusion: For a spot-scanning proton therapy system with multiple-gated irradiation in one synchrotron operation cycle, an evaluation system for the optimal wait-time of each irradiation based on log data has been developed. Funding Support: Japan Society for the Promotion of Science (JSPS) through the FIRST
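
    A minimal sketch (Python, with hypothetical numbers) of the wait-time scan this abstract describes: a logged gate-signal sequence is replayed under candidate wait-times, with the synchrotron either holding the beam at flat-top (gap between gates below the wait-time) or paying an assumed fixed deceleration/re-acceleration penalty. The log values, the penalty constant, and the simplified cycle model are illustrative, not RGPT system parameters.

      # Toy replay of a gate log: between gate windows the synchrotron either
      # holds the beam at flat-top (gap <= wait_time) or cycles down and back
      # up, paying a fixed penalty (an assumed design value, in seconds).
      CYCLE_PENALTY_S = 2.1

      def irradiation_time(gate_windows, wait_time):
          total = gate_windows[0][1] - gate_windows[0][0]
          for (_, prev_off), (on, off) in zip(gate_windows, gate_windows[1:]):
              gap = on - prev_off
              total += gap if gap <= wait_time else gap + CYCLE_PENALTY_S
              total += off - on  # beam-on time inside the gate window
          return total

      # Hypothetical gate windows (seconds) from irregular respiratory gating.
      log = [(0.0, 0.8), (1.1, 1.9), (2.5, 3.2), (5.0, 5.9), (6.1, 7.0)]
      candidates = [0.0, 0.1, 0.2, 0.39, 0.6]
      best_total, best_wait = min((irradiation_time(log, w), w) for w in candidates)
      print("optimal wait-time: %.2f s (total time %.2f s)" % (best_wait, best_total))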

  10. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    DEFF Research Database (Denmark)

    Menze, Bjoern H.; Jakab, Andras; Bauer, Stefan

    2015-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients – manually annotated by up to four raters – and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74...

  11. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  12. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  13. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
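
    The benchmark-selection strategy sketched in this abstract can be illustrated with toy data: each "model" is summarized by a normalized histogram of a final-state observable, pairwise similarity is measured with a symmetrized chi-square distance, and the most central member of each cluster is taken as its benchmark point. The histograms, the distance, and the cluster count below are placeholders for the paper's multi-dimensional test statistic.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(0)
      # Toy: 12 "models", each summarized by a normalized kinematic histogram.
      hists = np.abs(rng.normal(size=(12, 20)))
      hists /= hists.sum(axis=1, keepdims=True)

      def dist(p, q):
          # Symmetrized chi-square distance between two normalized histograms.
          return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))

      n = len(hists)
      D = np.array([[dist(hists[i], hists[j]) for j in range(n)] for i in range(n)])
      Z = linkage(squareform(D, checks=False), method="average")
      labels = fcluster(Z, t=4, criterion="maxclust")   # assumed cluster count

      # Benchmark of each cluster: the member minimizing summed distance to peers.
      for c in np.unique(labels):
          idx = np.where(labels == c)[0]
          bench = idx[np.argmin(D[np.ix_(idx, idx)].sum(axis=1))]
          print("cluster", c, "members", idx.tolist(), "-> benchmark point", bench)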

  14. Scanning SQUID microscopy of a ferromanganese crust from the northwestern Pacific: Submillimeter scale magnetostratigraphy as a new tool for age determination and mapping of environmental magnetic parameters

    Science.gov (United States)

    Noguchi, A.; Oda, H.; Yamamoto, Y.; Usui, A.; Sato, M.; Kawai, J.

    2017-06-01

    Ferromanganese crusts record long-term deep-sea environmental changes, so providing reliable high-resolution age models for them is important. We applied a magnetostratigraphic technique to estimate the growth rate of a ferromanganese crust using a scanning SQUID (superconducting quantum interference device) microscope (SSM). The SSM is designed to map the magnetic field across thin sections with submillimeter resolution. The crust sample was taken from the Takuyo-Daigo Seamount, northwestern Pacific, a setting with a limited supply of dust and sediment from the continents. After drift correction and removal of spike noise, the magnetic field values were stacked within areas of high signal-to-noise ratio. By correlating the obtained profiles with a standard geomagnetic polarity timescale, we obtained an average growth rate of 3.37 ± 0.06 mm/Ma, which is consistent with that obtained by 10Be/9Be geochronology (2.93 ± 0.15 mm/Ma). S-ratio mapping shows low values after 3 Ma, associated with voids between columnar structures.
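
    The age-model step reduces to regressing reversal-boundary depth against the ages of a geomagnetic polarity timescale; a minimal sketch follows. The GPTS chron ages below are standard values, while the boundary depths are illustrative, not the study's measurements.

      import numpy as np

      # Depths below the crust surface (mm) of identified polarity boundaries,
      # and corresponding GPTS ages (Ma): Brunhes/Matuyama, top/base Jaramillo,
      # top Olduvai, Matuyama/Gauss. Depths here are made up for illustration.
      depth_mm = np.array([2.6, 3.3, 3.7, 6.0, 8.8])
      age_ma = np.array([0.781, 0.988, 1.072, 1.778, 2.581])

      slope, intercept = np.polyfit(age_ma, depth_mm, 1)
      print("average growth rate: %.2f mm/Ma" % slope)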

  15. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below the national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with the lowest dose levels showed the smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)
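
    The benchmarking step itself is straightforward once a dose tracking system exports exam records; a minimal sketch, with made-up exam records, age bands, and DRL values, is given below.

      from statistics import median

      exams = [  # (hospital, age in years, DLP in mGy*cm) -- illustrative only
          ("A", 0.5, 270), ("A", 3, 420), ("A", 12, 640),
          ("B", 0.8, 340), ("B", 4, 510), ("B", 11, 900),
      ]
      drl_dlp = {"0-1 y": 300, "1-5 y": 600, "5-15 y": 1000}  # assumed DRLs

      def band(age):
          return "0-1 y" if age < 1 else "1-5 y" if age < 5 else "5-15 y"

      groups = {}
      for hosp, age, dlp in exams:
          groups.setdefault((hosp, band(age)), []).append(dlp)

      for (hosp, b), values in sorted(groups.items()):
          m = median(values)
          flag = "OK" if m <= drl_dlp[b] else "above DRL"
          print(f"hospital {hosp}, {b}: median DLP {m:.0f} mGy*cm ({flag})")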

  16. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of these programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distributions and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  17. In-core fuel management benchmarks for PHWRs

    International Nuclear Information System (INIS)

    1996-06-01

    Under its in-core fuel management activities, the IAEA set up two co-ordinated research programmes (CRPs) on complete in-core fuel management code packages. At a consultant meeting in November 1988 the outline of the CRP on in-core fuel management benchmarks for PHWRs was prepared, three benchmarks were specified and the corresponding parameters were defined. At the first research co-ordination meeting in December 1990, seven more benchmarks were specified. The objective of this TECDOC is to provide reference cases for the verification of code packages used for reactor physics and fuel management of PHWRs. 91 refs, figs, tabs

  18. New Parameters for Higher Accuracy in the Computation of Binding Free Energy Differences upon Alanine Scanning Mutagenesis on Protein-Protein Interfaces.

    Science.gov (United States)

    Simões, Inês C M; Costa, Inês P D; Coimbra, João T S; Ramos, Maria J; Fernandes, Pedro A

    2017-01-23

    Knowing how proteins make stable complexes enables the development of inhibitors to preclude protein-protein (P:P) binding. The identification of the specific interfacial residues that contribute most to protein binding, termed hot spots, is thus critical. Here, we refine an in silico alanine scanning mutagenesis protocol, based on a residue-dependent dielectric constant version of the Molecular Mechanics/Poisson-Boltzmann Surface Area method. We have used a large data set of structurally diverse P:P complexes to redefine the residue-dependent dielectric constants used in the determination of binding free energies. The accuracy of the method was validated through comparison with experimental data, considering the per-residue P:P binding free energy (ΔΔG_binding) differences upon alanine mutation. Different protocols were tested, i.e., a geometry optimization protocol and three molecular dynamics (MD) protocols: (1) one using explicit water molecules, (2) another with an implicit solvation model, and (3) a third where we carried out an accelerated MD with explicit water molecules. Using a set of protein dielectric constants (within the range from 1 to 20) we showed that dielectric constants of 7 for nonpolar and polar residues and 11 for charged residues (and histidine) provide optimal ΔΔG_binding predictions. An overall mean unsigned error (MUE) of 1.4 kcal mol^-1 relative to experiment was achieved in 210 mutations with geometry optimization only, which was further reduced with MD simulations (MUE of 1.1 kcal mol^-1 for the MD employing explicit solvent). This recalibrated method allows for a better computational identification of hot spots, avoiding expensive and time-consuming experiments or thermodynamic integration / free energy perturbation / uBAR calculations, and will hopefully help new drug discovery campaigns in their quest for spots of interest for binding small drug-like molecules at P:P interfaces.
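
    A minimal sketch of the recalibrated bookkeeping: residues are assigned a class-dependent dielectric constant (7 for nonpolar and polar residues, 11 for charged residues and histidine, as stated above) and predictions are scored by mean unsigned error against experiment. The per-mutation values below are placeholders, not data from the paper.

      CHARGED = {"ASP", "GLU", "LYS", "ARG", "HIS"}

      def dielectric(residue):
          # Class-to-epsilon mapping from the abstract: 11 for charged residues
          # (and histidine), 7 for both polar and nonpolar residues.
          return 11 if residue in CHARGED else 7

      def mean_unsigned_error(pred, expt):
          return sum(abs(p - e) for p, e in zip(pred, expt)) / len(pred)

      # Hypothetical per-mutation ddG predictions vs experiment (kcal/mol).
      pred = [1.9, 0.4, 3.1, 2.2]
      expt = [2.5, 0.1, 2.0, 3.0]
      print("MUE = %.2f kcal/mol" % mean_unsigned_error(pred, expt))
      print("epsilon for GLU:", dielectric("GLU"), "| for SER:", dielectric("SER"))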

  19. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. It is a methodology that identifies the aspects most in need of improvement and establishes competitive parameters through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of South American companies, whose operating realities are similar (for example, in terms of prices, availability of labor, and community relations), are compared. Within this context, comparative evaluation among natural gas transportation companies is becoming an essential management instrument to help with decision-making. (author)

  20. Prognostic role of metabolic parameters of {sup 18}F-FDG PET-CT scan performed during radiation therapy in locally advanced head and neck squamous cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Min, Myo; Forstner, Dion [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Ingham Institute of Applied Medical Research, Liverpool, NSW (Australia); Lin, Peter; Shon, Ivan Ho; Lin, Michael [University of New South Wales, Sydney, NSW (Australia); Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); University of Western Sydney, Sydney, NSW (Australia); Lee, Mark T. [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); University of New South Wales, Sydney, NSW (Australia); Bray, Victoria; Fowler, Allan [Liverpool Hospital, Cancer Therapy Centre, Liverpool, NSW (Australia); Chicco, Andrew [Liverpool Hospital, Department of Nuclear Medicine and Positron Emission Tomography, Liverpool, NSW (Australia); Tieu, Minh Thi [Calvary Mater Newcastle, Department of Radiation Oncology, Newcastle, NSW (Australia); University of Newcastle, Newcastle, NSW (Australia)

    2015-12-15

    To evaluate the prognostic value of {sup 18}F-FDG PET-CT performed in the third week (iPET) of definitive radiation therapy (RT) in patients with newly diagnosed locally advanced mucosal primary head and neck squamous-cell-carcinoma (MPHNSCC). Seventy-two patients with MPHNSCC treated with radical RT underwent staging PET-CT and iPET. The maximum standardised uptake value (SUV{sub max}), metabolic tumour volume (MTV) and total lesional glycolysis (TLG) of primary tumour (PT) and index node (IN) [defined as the lymph node(s) with highest TLG] were analysed, and results were correlated with loco-regional recurrence-free survival (LRFS), disease-free survival (DFS), metastatic failure-free survival (MFFS) and overall survival (OS), using Kaplan-Meier analysis. Optimal cutoffs (OC) were derived from receiver operating characteristic curves: SUV{sub max-PT} = 4.25 g/mL, MTV{sub PT} = 3.3 cm{sup 3}, TLG{sub PT} = 9.4 g for PT, and SUV{sub max-IN} = 4.05 g/mL, MTV{sub IN} = 1.85 cm{sup 3} and TLG{sub IN} = 7.95 g for IN. Low metabolic values in iPET for PT below the OC were associated with statistically significantly better LRFS and DFS. TLG was the best predictor of outcome, with 2-year LRFS of 92.7 % vs. 71.1 % [p = 0.005, compared with SUV{sub max} (p = 0.03) and MTV (p = 0.022)], DFS of 85.9 % vs. 60.8 % [p = 0.005, compared with SUV{sub max} (p = 0.025) and MTV (p = 0.018)], MFFS of 85.9 % vs. 83.7 % [p = 0.488, compared with SUV{sub max} (p = 0.52) and MTV (p = 0.436)], and OS of 81.1 % vs. 75.0 % [p = 0.279, compared with SUV{sub max} (p = 0.345) and MTV (p = 0.512)]. There were no significant associations between the percentage reduction of primary tumour metabolic parameters and outcomes. In patients with nodal disease, metabolic parameters below the OC (for both PT and IN) were significantly associated with all oncological outcomes, while TLG was again the best predictor: LRFS of 84.0 % vs. 55.3 % (p = 0.017), DFS of 79.4 % vs. 38.6 % (p = 0.001), MFFS 86.4 % vs. 68.2 % (p = 0
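
    Deriving an optimal cutoff (OC) from a receiver operating characteristic curve is a standard step; the sketch below uses synthetic TLG values and the Youden index as the selection criterion, which is an assumption since the abstract does not state which criterion was used.

      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(1)
      # Synthetic TLG values (g): 40 patients without and 32 with an event.
      tlg = np.concatenate([rng.normal(6, 2, 40), rng.normal(12, 3, 32)])
      event = np.concatenate([np.zeros(40), np.ones(32)])  # 1 = event observed

      fpr, tpr, thresholds = roc_curve(event, tlg)
      optimal_cutoff = thresholds[np.argmax(tpr - fpr)]    # Youden's J statistic
      low_risk = tlg < optimal_cutoff
      print("optimal TLG cutoff: %.2f g" % optimal_cutoff)
      print("patients below cutoff: %d / %d" % (low_risk.sum(), len(tlg)))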

  1. Validation of Neutron Calculation Codes and Models by means of benchmark cases in the frame of the Binational Commission of Nuclear Energy. Kinetic Parameters, Temperature Coefficients and Power Distribution

    International Nuclear Information System (INIS)

    Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia

    2013-01-01

    In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine Brazilian Agreement (COBEN), included, among many others, the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. On the Argentine side, the staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor which had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, comparisons of calculated and experimental results for temperature coefficients, kinetic parameters and spatial fission rate distributions are shown. (author)

  2. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  3. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  4. Bone Scan

    Science.gov (United States)


  5. MRI Scans

    Science.gov (United States)

    Magnetic resonance imaging (MRI) uses a large magnet and radio waves to look at organs and structures inside your body. Health care professionals use MRI scans to diagnose a variety of conditions, from ...

  6. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in an international setting. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  7. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  8. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was therefore undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  9. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  10. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  11. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  12. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  13. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  14. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and the atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  15. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and the atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  16. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
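
    The flavor of such a closed-loop motor-control benchmark can be conveyed with a one-joint toy: a controller must reach a target despite an unknown constant external force, while an error-driven bias term adapts online. The plant, gains, and learning rate below are illustrative, not the benchmark's actual minimal simulation.

      # One-dimensional plant with an unknown constant force, controlled by a
      # PD law plus an error-driven adaptive bias (delta rule). All constants
      # are assumptions chosen for a stable toy demonstration.
      dt, kp, kd, lr = 0.01, 30.0, 8.0, 2.0
      unknown_force = -4.0           # the controller is not told this value
      x, v, bias, target = 0.0, 0.0, 0.0, 1.0

      for step in range(2000):
          err = target - x
          u = kp * err - kd * v + bias   # control signal
          bias += lr * err * dt          # error-driven adaptation
          a = u + unknown_force          # plant dynamics (unit mass)
          v += a * dt
          x += v * dt

      print("final position %.3f (target %.1f), learned bias %.2f"
            % (x, target, bias))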

  17. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  18. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
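
    The central quantity, gradient sparsity, is easy to compute; the sketch below measures it for a toy piecewise-constant phantom and applies a hypothetical linear rule of thumb for the sufficient number of projections. The slope of the linear relation is a placeholder that would have to be calibrated on data such as SparseBeads.

      import numpy as np

      def gradient_sparsity(img, tol=1e-6):
          # Fraction of pixels with non-zero finite-difference gradient.
          gx = np.diff(img, axis=0, append=img[-1:, :])
          gy = np.diff(img, axis=1, append=img[:, -1:])
          return np.count_nonzero(np.hypot(gx, gy) > tol) / img.size

      # Toy piecewise-constant phantom: a disc on a flat background.
      y, x = np.mgrid[:256, :256]
      img = ((x - 128) ** 2 + (y - 128) ** 2 < 60 ** 2).astype(float)

      s = gradient_sparsity(img)
      n_proj = int(round(5.0 * s * img.size / 256))   # assumed linear relation
      print("gradient sparsity: %.4f -> suggested projections: %d" % (s, n_proj))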

  19. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  20. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Doctoral dissertation (dr.ing.) - Høgskolen i Telemark / Norges teknisk-naturvitenskapelige universitet. Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has past t...

  1. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. It is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  2. Scanning table

    CERN Multimedia

    1960-01-01

    Before the invention of wire chambers, particle tracks were analysed on scanning tables like this one. Today, the process is electronic and much faster. Bubble chamber film was used for this analysis of the particle tracks.

  3. Peripheral moving-table contrast-enhanced magnetic resonance angiography (CE-MRA) using a prototype 18-channel peripheral vascular coil and scanning parameters optimized to the patient's individual hemodynamics.

    Science.gov (United States)

    Potthast, Silke; Wilson, Gregory J; Wang, Maisie S; Maki, Jeffrey H

    2009-05-01

    To demonstrate that with a priori determination of individual patient hemodynamics, peripheral contrast-enhanced magnetic resonance angiography (pCE-MRA) can be customized to maximize signal-to-noise ratio (SNR) and avoid venous enhancement. Using a 1.5T MRI scanner and a prototype 18-channel peripheral vascular (PV) coil designed for highly accelerated parallel imaging, geometry (g)-factor maps were determined. SNR-maximized protocols considering the two-dimensional sensitivity encoding (2D SENSE) factor, TE, TR, bandwidth (BW), and flip angle (FA) were precalculated and stored. For each exam, a small aortic timing bolus was performed, followed by dynamic three-dimensional (3D) MRA of the calf. Using this information, the aorta-to-pedal-artery and calf arteriovenous transit times were measured. This enabled estimation of the maximum upper and middle station acquisition duration that allows the lower station acquisition to begin prior to venous arrival. The appropriately succinct SNR-optimized protocol for each station was selected and moving-table pCE-MRA was performed using thigh venous compression and high-relaxivity contrast material. The protocol was successfully applied in 15 patients and all imaging demonstrated good SNR without diagnosis-hindering venous enhancement. By knowing each patient's venous enhancement kinetics, scan parameters can be optimized to utilize the maximum possible acquisition time. Some time is added for the timing scans, but in return time-resolved calf CE-MRA, maximized SNR, and decreased risk of venous enhancement are gained.
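
    The timing logic reduces to simple arithmetic on the measured transit times; a minimal sketch with assumed numbers follows (including the per-station table-move overhead, which the abstract does not give).

      # Bound the upper/middle station acquisitions so the lower (calf) station
      # starts before venous arrival. All values are illustrative assumptions.
      aorta_to_pedal_s = 24.0    # measured arterial transit, aorta -> foot
      calf_av_transit_s = 16.0   # measured calf artery-to-vein interval
      table_move_s = 4.0         # assumed per-station table repositioning

      # Venous signal reaches the calf at arterial arrival + AV transit time.
      venous_arrival_s = aorta_to_pedal_s + calf_av_transit_s

      # Two upper stations plus two table moves must finish before that.
      budget = venous_arrival_s - 2 * table_move_s
      per_station_max_s = budget / 2
      print("max upper/middle station duration: %.1f s" % per_station_max_s)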

  4. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  5. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information (1) We did see potential solutions to some of our "top 10" issues (2) We have an assessment of where NASA stands with relation to other aerospace/defense groups We formed new contacts and potential collaborations (1) Several organizations sent us examples of their templates, processes (2) Many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/ partners (1) Desires to participate in our training; provide feedback on procedures (2) Welcomed opportunity to provide feedback on working with NASA

  6. PEBBLES Simulation of Static Friction and New Static Friction Benchmark

    International Nuclear Information System (INIS)

    Cogliati, Joshua J.; Ougouag, Abderrafi M.

    2010-01-01

    Pebble bed reactors contain large numbers of spherical fuel elements arranged randomly. Determining the motion and location of these fuel elements is required for calculating certain parameters of pebble bed reactor operation. This paper documents the PEBBLES static friction model. This model uses a three-dimensional differential static friction approximation extended from the two-dimensional Cundall and Strack model. The derivation of the rotational transformation of the pebble-to-pebble static friction force is provided. A new implementation of a differential rotation method for the pebble-to-container static friction force has been created, as previously published methods are insufficient for pebble bed reactor geometries. A new analytical static friction benchmark is documented that can be used to verify key static friction simulation parameters. This benchmark is based on determining the exact pebble-to-pebble and pebble-to-container static friction coefficients required to maintain a stable five-sphere pyramid.
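
    The Cundall and Strack idea extended here can be sketched in a few lines: a tangential spring force accumulates with relative slip at the contact and is capped at the Coulomb limit mu*|N|, at which point the contact slides. The parameters and the single-contact loop below are illustrative, not PEBBLES values.

      import numpy as np

      MU, K_T = 0.5, 1.0e4   # friction coefficient, tangential stiffness

      def update_tangential(f_t, rel_slip_vel, normal_force, dt):
          # Tangential spring loads with relative slip at the contact...
          f_t = f_t + K_T * rel_slip_vel * dt
          # ...and is capped at the Coulomb limit, where the contact slides.
          cap = MU * np.linalg.norm(normal_force)
          mag = np.linalg.norm(f_t)
          if mag > cap:
              f_t = f_t * (cap / mag)
          return f_t

      f_t = np.zeros(3)
      for _ in range(100):   # constant creep: force grows, then saturates at 50
          f_t = update_tangential(f_t, np.array([0.1, 0.0, 0.0]),
                                  np.array([0.0, 0.0, 100.0]), 1e-3)
      print("stored static friction force:", f_t)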

  7. Isprs Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep on influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA) as they allow deriving complementary mapping information. Although the interest for the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been truly performed on this topic. Several investigations still need to be undertaken concerning algorithms' ability for automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The Scientific Initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS) as well as topographic networks and GNSS points were acquired to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures as well as some preliminary results achieved with commercial software will be presented.

  8. Scanning holograms

    International Nuclear Information System (INIS)

    Natali, S.

    1984-01-01

    This chapter reports on the scanning of 1000 holograms taken in HOBC at CERN. Each hologram is triggered by an interaction in the chamber, the primary particles being pions at 340 GeV/c. The aim of the experiment is the study of charm production. The holograms, recorded on 50 mm film with the ''in line'' technique, can be analyzed by shining a parallel expanded laser beam through the film, obtaining immediately above it the real image of the chamber, which can then be scanned and measured with a technique halfway between that of emulsions and bubble chambers. The results indicate that holograms can be analyzed as quickly and reliably as with other visual techniques, and that large-scale experiments of the same order of magnitude are therefore open to them.

  9. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below the national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.

  10. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    Science.gov (United States)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 → H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
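
    The "summing, averaging, and numerically integrating" step corresponds to evaluating k(T) = [h Q_r(T)]^(-1) * Integral of N(E) exp(-E / kB T) dE, where N(E) is the cumulative reaction probability. The sketch below uses a toy step-like N(E) and an assumed reactant partition function, not the converged H + H2 scattering results.

      import numpy as np

      KB = 3.16681e-6          # Boltzmann constant in hartree/K
      H_PLANCK = 2.0 * np.pi   # Planck constant in atomic units (hbar = 1)

      def cum_prob(E, E0=0.015, width=0.002):
          # Toy step-like cumulative reaction probability N(E); placeholder.
          return 1.0 / (1.0 + np.exp(-(E - E0) / width))

      T = 300.0
      E = np.linspace(0.0, 0.1, 20001)       # total energy grid (hartree)
      boltz = np.exp(-E / (KB * T))          # Boltzmann weighting
      Q_REACTANT = 1.0e3                     # assumed reactant partition function
      k = np.trapz(cum_prob(E) * boltz, E) / (H_PLANCK * Q_REACTANT)
      print("k(300 K) ~ %.3e (atomic units)" % k)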

  11. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  12. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  13. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Busch, Hans-Peter

    2010-01-01

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost- effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  14. Compilation report of VHTRC temperature coefficient benchmark calculations

    International Nuclear Information System (INIS)

    Yasuda, Hideshi; Yamane, Tsuyoshi

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k_eff, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by the institutes did not agree with one another. It will be necessary to check the calculation conditions again to get better agreement. (J.P.N.)

  15. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  16. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  17. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  18. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  19. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  20. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    Kodeli, I.; Sartori, E.

    1998-01-01

    The paper presents the revision and analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculation tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  1. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
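    For orientation, the sketch below implements the generic levelised-cost arithmetic that underlies figures of merit such as the LUEC: discounted lifetime costs divided by discounted lifetime generation. This is not the G4ECONS or NEST implementation, and all inputs are hypothetical.

```python
# Minimal sketch of a levelised unit energy cost (LUEC) calculation: discounted
# lifetime costs divided by discounted lifetime generation. Illustrative only;
# not the G4ECONS or NEST implementation, and all numbers are hypothetical.
def luec(capital, annual_om, annual_fuel, annual_mwh, years, rate):
    """Return LUEC in $/MWh; capital is an overnight cost spent at year 0."""
    disc_cost = capital
    disc_energy = 0.0
    for t in range(1, years + 1):
        df = (1 + rate) ** -t              # discount factor for year t
        disc_cost += (annual_om + annual_fuel) * df
        disc_energy += annual_mwh * df
    return disc_cost / disc_energy

print(luec(capital=4.0e9, annual_om=8.0e7, annual_fuel=4.0e7,
           annual_mwh=7.9e6, years=60, rate=0.05))  # ~$42/MWh for these inputs
```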

  2. Benchmarking the UAF Tsunami Code

    Science.gov (United States)

    Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.

    2008-12-01

    We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regards to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
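    The staggered leapfrog idea is easiest to see in one dimension. The sketch below advances the linearized shallow-water equations on a staggered periodic grid with flat bathymetry; the UAF code itself solves the nonlinear equations in spherical coordinates with a moving shoreline and nested grids, so this is only a toy analogue with made-up parameters.

```python
# A linearized 1-D analogue of a staggered-grid leapfrog shallow-water step.
# Toy illustration only: periodic domain, flat bathymetry, hypothetical values.
import numpy as np

g, depth = 9.81, 4000.0            # gravity (m/s^2), ocean depth (m)
nx, dx = 400, 5000.0               # grid cells and spacing (m)
dt = 0.5 * dx / np.sqrt(g * depth) # time step obeying the CFL condition

eta = np.exp(-((np.arange(nx) - nx / 2) * dx / 2.0e5) ** 2)  # initial hump (m)
u = np.zeros(nx)                   # velocity carried on staggered points i+1/2

for _ in range(500):
    # momentum update: u lives half a step ahead of eta in time
    u -= g * dt / dx * (np.roll(eta, -1) - eta)
    # continuity update: divergence of the depth-averaged flux
    eta -= depth * dt / dx * (u - np.roll(u, 1))

print(f"max surface elevation after 500 steps: {eta.max():.3f} m")
```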

  3. Updates to the Integrated Protein-Protein Interaction Benchmarks : Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna|info:eu-repo/dai/nl/370549694; Pierce, Brian G.; Kastritis, Panagiotis L.|info:eu-repo/dai/nl/315886668; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J|info:eu-repo/dai/nl/113691238; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were

  4. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  5. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel

  6. An enhanced RNA alignment benchmark for sequence alignment programs

    Directory of Open Access Journals (Sweden)

    Steger Gerhard

    2006-10-01

    Abstract Background The performance of alignment programs is traditionally tested on sets of protein sequences for which a reference alignment is known. Conclusions drawn from such protein benchmarks do not necessarily hold for the RNA alignment problem, as was demonstrated in the first RNA alignment benchmark published so far. For example, the twilight zone – the similarity range where alignment quality drops drastically – starts at 60 % for RNAs in comparison to 20 % for proteins. In this study we enhance the previous benchmark. Results The RNA sequence sets in the benchmark database are taken from an increased number of RNA families to avoid unintended impact of using only a few families. The size of the sets varies from 2 to 15 sequences to assess the influence of the number of sequences on program performance. Alignment quality is scored by two measures: one takes into account only nucleotide matches, the other measures structural conservation. The performance order of parameters – such as nucleotide substitution matrices and gap costs – as well as of programs is rated by rank tests. Conclusion Most sequence alignment programs perform equally well on RNA sequence sets with high sequence identity, that is, with an average pairwise sequence identity (APSI) above 75 %. Parameters for gap-open and gap-extension have a large influence on alignment quality at APSI ≤ 75 %; optimal parameter combinations are shown for several programs. The use of different 4 × 4 substitution matrices improved program performance only in some cases. The performance of iterative programs drastically increases with increasing sequence numbers and/or decreasing sequence identity, which makes them clearly superior to programs using a purely non-iterative, progressive approach. The best sequence alignment programs produce alignments of high quality down to an APSI of 55 %; at lower APSI the use of sequence+structure alignment programs is recommended.
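    The APSI measure used above to stratify the benchmark sets can be computed directly from an alignment. A minimal sketch, assuming pre-aligned sequences of equal length with '-' as the gap character (toy data):

```python
# Minimal sketch: average pairwise sequence identity (APSI) of an alignment.
# Sequences are assumed to be pre-aligned rows of equal length; '-' is a gap.
from itertools import combinations

def pairwise_identity(a, b):
    """Identity over columns where neither sequence has a gap."""
    cols = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x == y for x, y in cols) / len(cols)

def apsi(alignment):
    pairs = list(combinations(alignment, 2))
    return 100.0 * sum(pairwise_identity(a, b) for a, b in pairs) / len(pairs)

toy = ["GGCUAUC-GC", "GGCUAUCAGC", "GGUUA-CAGC"]  # toy RNA alignment
print(f"APSI = {apsi(toy):.1f}%")
```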

  7. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  8. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Hoehne, N.E.; Schaeffer, M.; Cantzler, Jasmin; Hare, William; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher-Ur-Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

  9. Benchmarking Ortec ISOTOPIC measurements and calculations

    International Nuclear Information System (INIS)

    This paper presents a description of eight compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to compute finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results are obtained in all benchmark tests. (author)

  10. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  11. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased
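    The NUE definition quoted above reduces to a one-line computation. A minimal sketch with hypothetical nitrogen flows in kg N per farm per year:

```python
# Minimal sketch of the nutrient use efficiency (NUE) definition given above:
# nutrients in valuable outputs divided by nutrients in all inputs.
# Illustrative nitrogen flows in kg N per farm per year (hypothetical).
def nue(outputs_kg, inputs_kg):
    return sum(outputs_kg.values()) / sum(inputs_kg.values())

inputs_kg = {"feed": 9500.0, "fertilizer": 4200.0, "fixation": 800.0}
outputs_kg = {"milk": 4300.0, "meat": 600.0}
print(f"N efficiency: {100 * nue(outputs_kg, inputs_kg):.1f}%")
```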

  12. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking

  13. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    P.A. Boncz (Peter); I. Fundulaki; A. Gubichev (Andrey); J. Larriba-Pey (Josep); T. Neumann (Thomas)

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow to quantify and compare the

  14. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  15. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  16. Thyroid Scan and Uptake

    Medline Plus

    Thyroid scan and uptake uses ... What is a Thyroid Scan and Uptake? A thyroid scan is a ...

  17. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  18. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  19. Benchmarks for single-phase flow in fractured porous media

    Science.gov (United States)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.
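    The linear-system diagnostics reported for each scheme (number of unknowns, sparsity, condition number) are simple to reproduce for any discretized operator. The sketch below computes them for a 1-D Poisson matrix standing in for a discretized flow problem; it is illustrative only, not one of the benchmark discretizations.

```python
# Minimal sketch of the kind of linear-system diagnostics reported above
# (number of unknowns, sparsity, condition number), using a 1-D Poisson
# matrix as a stand-in for a discretized single-phase flow problem.
import numpy as np

n = 200                                    # number of unknowns
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))        # tridiagonal finite-volume stencil

nnz = np.count_nonzero(A)
print(f"unknowns: {n}, nonzeros: {nnz}, fill: {100 * nnz / n**2:.2f}%")
print(f"condition number: {np.linalg.cond(A):.3e}")
```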

  20. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k_eff and reactivity coefficients for PWRs, and that, subsequent to the incorporation of additional thermal-hydraulic models, it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well

  1. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    Science.gov (United States)

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501
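    The evaluation machinery described above rests on two simple ingredients: the Dice overlap between binary masks and label fusion by voting. The sketch below shows both on toy arrays, using a plain majority vote rather than the hierarchical scheme used in the paper:

```python
# Minimal sketch: Dice overlap between two binary segmentations and a simple
# majority-vote fusion of several raters/algorithms (toy arrays, not BRATS data).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.7                   # toy "tumor" mask
raters = [truth ^ (rng.random(truth.shape) > 0.95)   # noisy copies of truth
          for _ in range(5)]

fused = np.sum(raters, axis=0) > len(raters) // 2    # majority vote
print([round(dice(r, truth), 3) for r in raters])    # individual Dice scores
print("fused:", round(dice(fused, truth), 3))        # fusion usually scores higher
```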

  2. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management systems were benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics systems yield steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales of less than one second. This is probably due to moving low thermally conductive materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronics system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
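    The junction-to-liquid metric quoted above is the temperature rise per unit power, optionally multiplied by die area to give the mm²·K/W figure. A minimal sketch with hypothetical numbers:

```python
# Minimal sketch of the junction-to-liquid thermal resistance metric quoted
# above: R_th = (T_junction - T_coolant) / P, optionally normalized by die
# area to get mm^2*K/W. All numbers are hypothetical.
def r_th(t_junction_c, t_coolant_c, power_w):
    return (t_junction_c - t_coolant_c) / power_w               # K/W

def r_th_specific(t_junction_c, t_coolant_c, power_w, area_mm2):
    return r_th(t_junction_c, t_coolant_c, power_w) * area_mm2  # mm^2*K/W

# 60 K rise at 150 W over a 120 mm^2 die -> 48 mm^2*K/W, in the range above.
print(f"{r_th_specific(125.0, 65.0, 150.0, 120.0):.1f} mm^2*K/W")
```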

  3. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  4. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. Model. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.

  5. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
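    The conventional RB analysis that the authors scrutinize fits a single exponential to the survival probabilities and converts the decay constant into an error rate. A minimal sketch on synthetic single-qubit data (d = 2), using the standard relation r = (d-1)(1-p)/d:

```python
# Minimal sketch: fitting the standard randomized-benchmarking decay
# P(m) = A*p^m + B to survival probabilities and converting the decay
# constant p into the conventional error rate r. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, p, b):
    return a * p ** m + b

lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
rng = np.random.default_rng(1)
probs = rb_decay(lengths, 0.5, 0.995, 0.5) + rng.normal(0, 0.003, lengths.size)

(a, p, b), _ = curve_fit(rb_decay, lengths, probs, p0=(0.5, 0.99, 0.5))
d = 2                                  # Hilbert-space dimension (one qubit)
r = (d - 1) / d * (1 - p)
print(f"p = {p:.5f}, r = {r:.2e}")
```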

  6. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  7. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k_eff values between the Pu- and U-cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which might be caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  8. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  9. Instrumental fundamental parameters and selected applications of the microfocus X-ray fluorescence analysis at a scanning electron microscope; Instrumentelle Fundamentalparameter und ausgewaehlte Anwendungen der Mikrofokus-Roentgenfluoreszenzanalyse am Rasterelektronenmikroskop

    Energy Technology Data Exchange (ETDEWEB)

    Rackwitz, Vanessa

    2012-05-30

    For a decade X-ray sources have been commercially available for microfocus X-ray fluorescence analysis (µ-XRF) and offer the possibility of extending the analytics at a scanning electron microscope (SEM) with an attached energy-dispersive X-ray spectrometer (EDS). By using µ-XRF it is possible to determine the content of chemical elements in a microscopic sample volume in a quantitative, reference-free and non-destructive way. Reference-free quantification with XRF relies on the Sherman equation, which relates the detected intensity of a fluorescence peak to the content of the element in the sample by means of fundamental parameters. The instrumental fundamental parameters of µ-XRF at a SEM/EDS system are the excitation spectrum, consisting of the X-ray tube spectrum and the transmission of the X-ray optics, the geometry, and the spectrometer efficiency. Based on a calibrated instrumentation, the objectives of this work are the development of procedures for the characterization of all instrumental fundamental parameters as well as the evaluation and reduction of their measurement uncertainties: The algorithms known from the literature for the calculation of the X-ray tube spectrum are evaluated with regard to their deviations in the spectral distribution. Within this work a novel semi-empirical model is improved with respect to its uncertainties, enhanced in the low-energy range, and extended to another three anodes. The emitted X-ray tube spectrum is calculated from the detected one, which is measured at a setup especially developed for the direct measurement of X-ray tube spectra. This emitted X-ray tube spectrum is compared to the one calculated on the basis of the model of this work. A procedure for the determination of the most important parameters of an X-ray semi-lens in parallelizing mode is developed. The temporal stability of the transmission of X-ray full lenses, which have been in regular

  10. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  11. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.

    2008-09-29

    This report presents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required. In
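    The waste-designation arithmetic mentioned above is a threshold test on specific transuranic activity, 100 nCi per gram of waste. A minimal sketch with hypothetical assay values (in practice the activity comes from the gamma-ray analysis itself):

```python
# Minimal sketch of the TRU/low-level designation mentioned above: the
# threshold is 100 nCi of transuranic alpha activity per gram of waste.
# All values are hypothetical stand-ins for a real assay result.
def specific_activity_nci_per_g(tru_activity_nci, net_waste_mass_g):
    return tru_activity_nci / net_waste_mass_g

activity = specific_activity_nci_per_g(tru_activity_nci=5.2e7,
                                       net_waste_mass_g=3.5e5)  # 350 kg drum
label = "TRU waste" if activity > 100.0 else "low-level waste"
print(f"{activity:.0f} nCi/g -> {label}")
```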

  12. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 'Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
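    Sodium void worth is conventionally reported as the reactivity difference between a voided and a reference configuration, with rho = (k - 1)/k. A minimal sketch with hypothetical k_eff values:

```python
# Minimal sketch: sodium void worth as a reactivity difference between a
# voided and a reference configuration, using rho = (k - 1) / k.
# The k_eff values below are hypothetical, not benchmark results.
def reactivity(k):
    return (k - 1.0) / k

def void_worth_pcm(k_ref, k_voided):
    return 1.0e5 * (reactivity(k_voided) - reactivity(k_ref))  # 1 pcm = 1e-5

print(f"{void_worth_pcm(k_ref=1.00312, k_voided=1.00571):.0f} pcm")
```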

  13. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  14. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
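    The correlation-based part of such a study is easy to reproduce. The sketch below computes Pearson and Spearman coefficients between sampled inputs and a response; the variable names and the linear toy model are stand-ins, not the BISON problem definition:

```python
# Minimal sketch of the correlation-based sensitivity measures named above,
# on synthetic stand-ins for the 300 sampled inputs and one response.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
power = rng.uniform(15.0, 25.0, 300)     # e.g. linear heat rate, kW/m (toy)
gap = rng.uniform(60.0, 100.0, 300)      # e.g. gap width, microns (toy)
temp = 500.0 + 30.0 * power + 0.5 * gap + rng.normal(0.0, 20.0, 300)

for name, x in [("power", power), ("gap", gap)]:
    pr, _ = pearsonr(x, temp)            # linear association
    sr, _ = spearmanr(x, temp)           # monotonic (rank) association
    print(f"{name:6s} Pearson {pr:+.2f}  Spearman {sr:+.2f}")
```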

  15. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    Directory of Open Access Journals (Sweden)

    A. Glerum

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear-rheology subduction modeling.
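    A composite viscoplastic rheology of the kind described above is typically assembled from creep flow laws plus a plastic cap on the effective viscosity. The sketch below shows one common construction (harmonic average of diffusion and dislocation creep, capped by a Drucker-Prager yield viscosity); the flow-law parameters are illustrative and are not ASPECT's material model:

```python
# Minimal sketch of a composite viscoplastic effective viscosity: diffusion
# and dislocation creep combined harmonically, capped by a plastic viscosity.
# All parameter values are illustrative, not ASPECT's material model.
import numpy as np

def creep_viscosity(A, n, E, strain_rate, temperature, R=8.314):
    """Power-law creep: eta = 0.5 * A^(-1/n) * edot^((1-n)/n) * exp(E/(n*R*T))."""
    return (0.5 * A ** (-1.0 / n)
            * strain_rate ** ((1.0 - n) / n)
            * np.exp(E / (n * R * temperature)))

def effective_viscosity(strain_rate, temperature, pressure,
                        cohesion=2.0e7, friction=0.25):
    eta_diff = creep_viscosity(1.0e-15, 1.0, 3.0e5, strain_rate, temperature)
    eta_disl = creep_viscosity(1.0e-16, 3.5, 5.3e5, strain_rate, temperature)
    eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)   # harmonic average
    yield_stress = cohesion + friction * pressure          # Drucker-Prager-style
    eta_yield = yield_stress / (2.0 * strain_rate)         # plastic cap
    return min(eta_creep, eta_yield)

print(f"{effective_viscosity(1.0e-14, 1200.0, 5.0e8):.3e} Pa s")
```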

  16. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  17. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  18. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  19. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  20. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  1. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision-support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  2. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...

  3. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  4. Head CT scan

    Science.gov (United States)

    ... CT; Cranial CT; CT scan - skull; CT scan - head; CT scan - orbits; CT scan - sinuses; Computed tomography - ... Head CT is done in the hospital or radiology center. You lie on a narrow table that ...

  5. Pelvic CT scan

    Science.gov (United States)

    CAT scan - pelvis; Computed axial tomography scan - pelvis; Computed tomography scan - pelvis; CT scan - pelvis ... Abnormal results may be due to: Abscess (collection of pus) Bladder stones Broken bone Cancer Diverticulitis

  6. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
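    As an illustration of the scoring idea in point (2), the following sketch combines normalized data–model mismatches for several processes into a single weighted score. The process names, weights, and the NRMSE-based skill mapping are illustrative assumptions, not the paper's prescribed metric:

```python
import numpy as np

def nrmse(model, obs):
    """Normalized root-mean-square error between model output and observations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sqrt(np.mean((model - obs) ** 2)) / (obs.std() + 1e-12)

def benchmark_score(results, weights):
    """Combine per-process mismatches into one weighted score in (0, 1].

    results: {process_name: (model_series, obs_series)}
    weights: {process_name: weight}, summing to 1.
    A score near 1 means small data-model mismatch.
    """
    score = 0.0
    for process, (model, obs) in results.items():
        skill = 1.0 / (1.0 + nrmse(model, obs))  # 0 mismatch -> skill 1.0
        score += weights[process] * skill
    return score

# Toy example: two processes with hypothetical weights
results = {
    "gpp":    ([2.1, 2.4, 2.0], [2.0, 2.5, 2.2]),   # carbon flux
    "runoff": ([0.9, 1.1, 1.3], [1.0, 1.0, 1.2]),   # water flux
}
print(benchmark_score(results, {"gpp": 0.6, "runoff": 0.4}))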

  7. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  8. OR-Benchmark: An Open and Reconfigurable Digital Watermarking Benchmarking Framework

    OpenAIRE

    Wang, Hui; Ho, Anthony TS; Li, Shujun

    2015-01-01

    Benchmarking digital watermarking algorithms is not an easy task because different applications of digital watermarking often have very different sets of requirements and trade-offs between conflicting requirements. While there have been some general-purpose digital watermarking benchmarking systems available, they normally do not support complicated benchmarking tasks and cannot be easily reconfigured to work with different watermarking algorithms and testing conditions. In this paper, we pr...

  9. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  10. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
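    A small sketch of the pattern described above: computing a performance metric with a SPARQL query over RDF annotations, so that no bespoke evaluation code is needed. The file name and the ex: predicate URIs are hypothetical stand-ins for the project's actual OWL schema:

```python
from rdflib import Graph

# Load gold-standard and system annotations serialized as RDF
# (file name and predicate URIs are illustrative, not the project's schema).
g = Graph()
g.parse("mutation_annotations.ttl", format="turtle")

# True positives: system annotations matching a gold annotation
# on document and normalized mutation mention.
tp_query = """
PREFIX ex: <http://example.org/mutation#>
SELECT (COUNT(DISTINCT ?sys) AS ?tp) WHERE {
    ?sys  a ex:SystemAnnotation ; ex:document ?doc ; ex:mutation ?m .
    ?gold a ex:GoldAnnotation   ; ex:document ?doc ; ex:mutation ?m .
}
"""
tp = int(next(iter(g.query(tp_query)))[0])

total_query = """
PREFIX ex: <http://example.org/mutation#>
SELECT (COUNT(DISTINCT ?sys) AS ?n) WHERE { ?sys a ex:SystemAnnotation . }
"""
total = int(next(iter(g.query(total_query)))[0])

print(f"precision = {tp / total:.3f}" if total else "no system annotations")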

  11. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used

  12. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  13. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  14. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  15. TU Electric reactor physics model verification: Power reactor benchmark

    International Nuclear Information System (INIS)

    Willingham, C.E.; Killgore, M.R.

    1988-01-01

    Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets and increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full power depletion results were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors

  16. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  17. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  18. 239Pu Resonance Evaluation for Thermal Benchmark System Calculations

    Science.gov (United States)

    Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.

    2014-04-01

Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the average prompt neutron multiplicity (ν-bar) and the prompt fission neutron spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.

  19. Design of Test Wrapper Scan Chain Based on Differential Evolution

    Directory of Open Access Journals (Sweden)

    Aijun Zhu

    2013-08-01

Integrated circuits have entered the era of IP-based SoC (System on Chip) design, which makes IP core reuse a key issue. SoC test wrapper design for scan chains is an NP-hard problem; we propose an algorithm based on Differential Evolution (DE) to design the wrapper scan chains. Through the population's mutation, crossover and selection operations, the design of the test wrapper scan chain is achieved. Experimental verification is carried out on the international standard ITC'02 benchmarks. The results show that the algorithm obtains shorter longest wrapper scan chains compared with other algorithms.
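    A minimal sketch of the DE approach under a random-key encoding: each scan element's real-valued gene selects one of W wrapper chains, and the fitness is the length of the longest chain (to be minimized). The scan-chain lengths, population settings, and encoding are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
rng = np.random.default_rng(1)

# Hypothetical scan-element lengths of one core, to be partitioned
# into W wrapper scan chains (balanced chains -> short longest chain).
lengths = np.array([32, 28, 54, 17, 41, 23, 60, 12, 35, 29])
W = 3  # number of wrapper chains (TAM width)

def longest_chain(x):
    """Random-key decoding: element i goes to chain floor(x[i]) mod W."""
    assign = np.floor(x).astype(int) % W
    return max(lengths[assign == c].sum() for c in range(W))

# Canonical DE/rand/1/bin (forced crossover coordinate omitted for brevity)
NP, D, F, CR, GEN = 20, len(lengths), 0.7, 0.9, 200
pop = rng.uniform(0, W, size=(NP, D))
fit = np.array([longest_chain(ind) for ind in pop])

for _ in range(GEN):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)]
        mutant = a + F * (b - c)                     # mutation
        cross = rng.random(D) < CR
        trial = np.where(cross, mutant, pop[i]) % W  # crossover, kept in range
        f = longest_chain(trial)
        if f <= fit[i]:                              # greedy selection
            pop[i], fit[i] = trial, f

print("shortest longest wrapper chain found:", fit.min())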

  20. Dependable Benchmarking for Storage Systems in High-Energy Physics

    CERN Document Server

    Fleri Soler, Edward

    2017-01-01

    In high-energy physics, storage systems play a crucial role to store and secure very valuable data produced by complex experiments. The effectiveness and efficiency of data acquisition systems of such experiments depends directly on those of these storage systems. Coping with present day rates and reliability requirements of such experiments implies operating high-performance hardware under the best possible conditions, with a broad set of hardware and software parameters existing along the hierarchical levels, from networks down to drives. An extensive number of tests are required for the tuning of parameters to achieve optimised I/O operations. Current approaches to I/O optimisation generally consist of manual test execution and result taking. This approach lacks appropriate modularity, durability and reproducibility, attainable through dedicated testing facilities. The aim of this project is to conceive a user-friendly, dedicated storage benchmarking tool for the improved comparison of I/O parameters in re...

  1. Benchmarks and statistics of entanglement dynamics

    International Nuclear Information System (INIS)

    Tiersch, Markus

    2009-01-01

In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)
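    The evolution equation described here is plausibly the concurrence factorization law for a two-qubit pure state sent through a one-sided channel (Konrad et al., Nat. Phys. 4, 99 (2008), of which Tiersch is a coauthor); under that assumption it reads:

```latex
% Concurrence C of a pure state |\chi> under a one-sided channel $
% factorizes into the initial entanglement and a channel-only factor,
% benchmarked by the maximally entangled state |\phi^+>:
\[
  C\!\left[(\mathbb{1}\otimes\$)\,|\chi\rangle\langle\chi|\right]
  \;=\;
  C\!\left(|\chi\rangle\right)\,
  C\!\left[(\mathbb{1}\otimes\$)\,|\phi^{+}\rangle\langle\phi^{+}|\right],
  \qquad
  |\phi^{+}\rangle=\tfrac{1}{\sqrt{2}}\big(|00\rangle+|11\rangle\big).
\]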

  2. Benchmarks and statistics of entanglement dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tiersch, Markus

    2009-09-04

In the present thesis we investigate how the quantum entanglement of multicomponent systems evolves under realistic conditions. More specifically, we focus on open quantum systems coupled to the (uncontrolled) degrees of freedom of an environment. We identify key quantities that describe the entanglement dynamics, and provide efficient tools for its calculation. For quantum systems of high dimension, entanglement dynamics can be characterized with high precision. In the first part of this work, we derive evolution equations for entanglement. These formulas determine the entanglement after a given time in terms of a product of two distinct quantities: the initial amount of entanglement and a factor that merely contains the parameters that characterize the dynamics. The latter is given by the entanglement evolution of an initially maximally entangled state. A maximally entangled state thus benchmarks the dynamics, and hence allows for the immediate calculation or - under more general conditions - estimation of the change in entanglement. Thereafter, a statistical analysis supports that the derived (in-)equalities describe the entanglement dynamics of the majority of weakly mixed and thus experimentally highly relevant states with high precision. The second part of this work approaches entanglement dynamics from a topological perspective. This allows for a quantitative description with a minimum amount of assumptions about Hilbert space (sub-)structure and environment coupling. In particular, we investigate the limit of increasing system size and density of states, i.e. the macroscopic limit. In this limit, a universal behaviour of entanglement emerges following a "reference trajectory", similar to the central role of the entanglement dynamics of a maximally entangled state found in the first part of the present work. (orig.)

  3. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  4. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Energy Technology Data Exchange (ETDEWEB)

    Tang, T. F. [Dalian University of Technology, Dalian 116024 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Ma, C. H. [Fusion Simulation Center, School of Physics, Peking University, Beijing (China); Bass, E. M.; Candy, J. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States); Holland, C. [University of California San Diego, La Jolla, California 92093-0429 (United States)

    2016-03-15

A Gyro-Landau-Fluid (GLF) 3 + 1 model has been recently implemented in the BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform the beta scan based on the “Cyclone base case parameter set.” We find that the growth rate is almost the same for the two codes, and the KBM mode is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code, including a small electron-electron collision frequency to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  5. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

6. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster [Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters]

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning the costs, performance and efficiency of their information systems and information management against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  7. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  8. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  9. Integral benchmark test of JENDL-4.0 for U-233 systems with ICSBEP handbook

    International Nuclear Information System (INIS)

    Kuwagaki, Kazuki; Nagaya, Yasunobu

    2017-03-01

The integral benchmark test of JENDL-4.0 for U-233 systems using the continuous-energy Monte Carlo code MVP was conducted. The previous benchmark test was performed only for U-233 thermal solution and fast metallic systems in the ICSBEP handbook. In this study, MVP input files were prepared for previously uninvestigated benchmark problems in the handbook, including compound thermal systems (mainly lattice systems), and an integral benchmark test was performed. The prediction accuracy of JENDL-4.0 was evaluated for the effective multiplication factors (k_eff) of the U-233 systems. As a result, a trend of underestimation was observed for all the categories of U-233 systems. In the benchmark test of ENDF/B-VII.1 for U-233 systems with the ICSBEP handbook, it is reported that a decreasing trend of calculated k_eff values in association with the parameter ATFF (Above-Thermal Fission Fraction) is observed. The ATFF values were also calculated in this benchmark test of JENDL-4.0, and the same trend as for ENDF/B-VII.1 was observed. A CD-ROM is attached as an appendix. (J.P.N.)

  10. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... interests had to be managed. The fifth chapter is about the operationalization of benchmarking and demonstrates how the concretizing and implementation of benchmarking gave rise to reactions from different actors with different and diverse interests in the benchmarking initiative. Political struggles...

  11. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  12. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  13. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method is the best performing method depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
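    For concreteness, per-residue sensitivity and specificity, two of the benchmark metrics named above, can be computed as follows; the two-symbol residue encoding and the toy sequences are assumptions for illustration:

```python
def helix_prediction_metrics(predicted, observed):
    """Per-residue benchmark metrics for membrane-helix prediction.

    predicted, observed: strings over {'H', '-'} of equal length,
    with 'H' marking residues inside a membrane helix.
    """
    tp = sum(p == o == 'H' for p, o in zip(predicted, observed))
    tn = sum(p == o == '-' for p, o in zip(predicted, observed))
    fp = sum(p == 'H' and o == '-' for p, o in zip(predicted, observed))
    fn = sum(p == '-' and o == 'H' for p, o in zip(predicted, observed))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall of helix residues
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall of non-helix residues
    return sensitivity, specificity

# Toy example with a hypothetical 20-residue segment
sens, spec = helix_prediction_metrics("----HHHHHHHH----HHH-",
                                      "-----HHHHHHHHH--HH--")
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")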

  14. Brain PET scan

    Science.gov (United States)

... results on a PET scan. Blood sugar or insulin levels may affect the test results in people with diabetes. PET scans may be done along with a CT scan. This combination scan is called a PET/CT. Alternative Names Brain positron emission tomography; PET scan - brain References Chernecky ...

  15. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...
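    The performance profiles used for such solver comparisons are typically the construction of Dolan and Moré; a sketch under that assumption, with toy cost data:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T: (n_problems, n_solvers) array of costs (e.g., CPU time or
       function evaluations); np.inf marks a failed run.
    Returns rho[s, k] = fraction of problems solver s solves within
    a factor taus[k] of the best solver on each problem.
    """
    best = T.min(axis=1, keepdims=True)   # best cost per problem (assumed finite)
    ratios = T / best                     # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Toy data: 4 problems x 3 solvers, one failed run (inf)
T = np.array([[1.0, 1.5, 3.0],
              [2.0, 2.0, 2.5],
              [5.0, 4.0, np.inf],
              [1.2, 2.4, 1.2]])
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
print(rho)  # row s: cumulative profile of solver s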

  16. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

Kirk, Bernadette Lugue [ORNL]; Grove, Robert E [ORNL]; Kodeli, I. [International Atomic Energy Agency (IAEA)]; Sartori, Enrico [ORNL]; Gulliford, J. [OECD Nuclear Energy Agency]

    2011-01-01

The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  17. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
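    A toy sketch of the band-rating idea: mapping per-capita daily consumption to a benchmark band. The thresholds below are invented for illustration and are not the paper's (or the CSH's) actual boundaries:

```python
# Hypothetical band thresholds in litres per person per day.
BANDS = [(80, "A"), (95, "B"), (110, "C"), (125, "D"), (140, "E")]

def water_band(litres_per_person_per_day: float) -> str:
    """Return a benchmark band for domestic water use; 'F' if above all bands."""
    for limit, band in BANDS:
        if litres_per_person_per_day <= limit:
            return band
    return "F"

# Example: household of 3 using 310 L/day in total
occupants, total = 3, 310
print(water_band(total / occupants))  # -> "C" (about 103 L/person/day)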

  18. Heart PET scan

    Science.gov (United States)

    ... nuclear medicine scan; Heart positron emission tomography; Myocardial PET scan ... A PET scan requires a small amount of radioactive material (tracer). This tracer is given through a vein (IV), ...

  19. Thyroid Scan and Uptake

    Medline Plus

What is a Thyroid Scan and Uptake? A thyroid scan is ...

  20. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  1. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  2. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    Science.gov (United States)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

This paper contributes to finding optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were exercised through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, the Integral of Time-weighted Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study has been carried out with the different algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
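    A minimal sketch of one of the combinations discussed: global-best PSO minimizing an ITAE fitness for a PID loop. The first-order plant, swarm coefficients, and gain bounds are illustrative assumptions, not the paper's benchmark models:

```python
import numpy as np
rng = np.random.default_rng(0)

def itae(gains, dt=0.01, t_end=5.0):
    """ITAE for a unit step response of a first-order plant y' = -y + u
    under PID control, simulated with forward Euler."""
    kp, ki, kd = gains
    y = i_term = e_prev = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                       # unit setpoint
        i_term += e * dt
        d_term = (e - e_prev) / dt
        u = kp * e + ki * i_term + kd * d_term
        y += dt * (-y + u)                # plant step
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                    # unstable gains: penalize heavily
        e_prev = e
        cost += t * abs(e) * dt           # integral of time-weighted |error|
    return cost

# Plain global-best PSO over (kp, ki, kd) in [0, 10]^3
N, D, W, C1, C2 = 20, 3, 0.7, 1.5, 1.5
x = rng.uniform(0, 10, (N, D)); v = np.zeros((N, D))
pbest, pcost = x.copy(), np.array([itae(p) for p in x])
g = pbest[pcost.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (g - x)
    x = np.clip(x + v, 0, 10)
    cost = np.array([itae(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[pcost.argmin()].copy()

print("tuned (kp, ki, kd):", g.round(3), "ITAE:", round(pcost.min(), 4))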

  3. Los Alamos benchmarks: calculations based on ENDF/B-V data

    International Nuclear Information System (INIS)

    Kidman, R.B.

    1981-11-01

    The new and revised benchmark specifications for nine Los Alamos National Laboratory critical assemblies are used to compute the entire set of parameters that were measured in the experiments. A comparison between the computed and experimental values provides a measure of the adequacy of the specifications, cross sections, and physics codes used in the calculations

  4. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Höhne, Niklas; Schaeffer, Michiel; Cantzler, Jasmin; Hare, Bill; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher Ur Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

Influência de alguns parâmetros experimentais nos resultados de análises calorimétricas diferenciais - DSC [Influence of some experimental parameters on the results of differential scanning calorimetry - DSC]

    OpenAIRE

    Cláudia Bernal; Andréa Boldarini Couto; Susete Trazzi Breviglieri; Éder Tadeu Gomes Cavalheiro

    2002-01-01

A series of experiments was performed in order to demonstrate to undergraduate students and other users of differential scanning calorimetry (DSC) that several factors can influence the qualitative and quantitative aspects of DSC results. Saccharin, an artificial sweetener, was used as a probe, and its thermal behavior is also discussed on the basis of thermogravimetric (TG) and DSC curves.

  6. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    Science.gov (United States)

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in the management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performance. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the raw energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performance between plants with different loads, or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
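    A sketch of the estimation-plus-interval idea: the daily pollutant load is interpolated between two lab measurements, and the energy KPI is reported as an interval reflecting an assumed estimation uncertainty. All numbers and the 15% band are invented for illustration:

```python
import numpy as np

# Daily energy use (kWh) from on-line meters for 14 days
energy = np.array([4100, 4250, 3980, 4300, 4150, 4050, 4200,
                   4400, 4350, 4100, 4000, 4250, 4320, 4180], float)

# Pollutant load (kg COD/day) measured in the lab only on day 0 and day 13
lab_days = np.array([0, 13])
lab_load = np.array([2600.0, 2900.0])

days = np.arange(14)
load_est = np.interp(days, lab_days, lab_load)  # daily estimate by interpolation

# Propagate estimation uncertainty as a +/-15% interval (assumed, not calibrated)
rel_unc = 0.15
kpi_mid = energy / load_est                     # kWh per kg COD
kpi_low = energy / (load_est * (1 + rel_unc))
kpi_high = energy / (load_est * (1 - rel_unc))

for d in (0, 6, 13):
    print(f"day {d:2d}: KPI = {kpi_mid[d]:.2f} "
          f"[{kpi_low[d]:.2f}, {kpi_high[d]:.2f}] kWh/kg COD")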

  7. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  8. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  9. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  10. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 of the Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations

  11. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  12. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  13. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  14. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  15. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  16. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...... dimensions of knowledge thought to be essential for success following graduation....

  17. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using least squares adjustment technique. The network computations were tied to two fix primary reference pillars situated outside the campus. The two-tail Chi-square ...

  18. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference : Public Health Care : Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  19. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly.

  20. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  1. Benchmark experiments to test plutonium and stainless steel cross sections

    International Nuclear Information System (INIS)

    Jenquin, U.P.; Bierman, S.R.

    1978-06-01

    Neutronics calculations of physical systems containing fissionable material in various configurations are often necessary to assess criticality safety and economic parameters. Criticality safety of the material must be assured for all configurations in the fuel fabrication, spent fuel reprocessing, and transportation processes. Integral criticality experiments are utilized to evaluate neutron cross sections, test theoretical methods, and validate calculational procedures. The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel and determine if improvements can be made in application to criticality safety analysis. NRC's particular area of interest is in the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the second task should help to identify deficiencies in the neutron cross sections. The results of the first task are given.

  2. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  3. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  4. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    Science.gov (United States)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations between EUIs and CBECS variables were identified. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely the ENERGY STAR Portfolio Manager. That tool relies on standard linear regression methods, which can only handle continuous variables. The proposed model uses a data mining technique and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology proposed is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI
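
    A minimal sketch of the feature-ranking step described above, assuming scikit-learn and pandas are available. The column names and values are illustrative stand-ins, not actual CBECS variables.

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Toy stand-in for CBECS records; columns echo the variables named in
    # the abstract (floor area, workers, PCs) but the data are invented.
    df = pd.DataFrame({
        "floor_area": [5e3, 2e4, 1e5, 8e3, 4e4, 6e4],
        "n_workers":  [12, 80, 450, 30, 150, 260],
        "n_pcs":      [10, 90, 500, 25, 160, 300],
        "eui":        [55.0, 72.0, 95.0, 60.0, 80.0, 88.0],
    })
    X, y = df.drop(columns="eui"), df["eui"]
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    # Rank predictors of energy use intensity by impurity-based importance.
    for name, imp in sorted(zip(X.columns, model.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name:11s} {imp:.3f}")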

  5. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying...... this perspective develops more thorough knowledge about benchmarking and challenges the current dominating rationales. Hereby, it is argued that benchmarking is not a neutral practice. On the contrary it is highly influenced by organizational ambitions and strategies, with the potentials to transform...

  6. Extraction of electron beam dose parameters from EBT2 film data scored in a mini phantom.

    Science.gov (United States)

    O'Reilly, Dedri; Smit, Cobus J L; du Plessis, Freek C P

    2013-09-01

    Quality assurance of medical linear accelerators includes dosimetric parameter measurement of therapeutic electron beams, e.g. the depth of the 80% dose level (R₈₀). This parameter must be within a tolerance of 0.2 cm of the declared value. Cumbersome water tank measurements can be regarded as the benchmark for measuring electron depth dose curves. A mini-phantom was designed and built in which a strip of GAFCHROMIC® EBT2 film could be encased tightly for electron beam depth dose measurement. Depth dose data were measured for ELEKTA SL25 MLC, ELEKTA Precise, and ELEKTA Synergy (Elekta Oncology Systems, Crawley, UK) machines. The electron beam energy range was between 4 and 22 MeV among the machines. A 10 × 10 cm² electron applicator with 95 cm source-surface distance was used on all the machines. 24 h after irradiation, the EBT2 film strips were scanned on a Canon CanoScan N670U scanner. Afterwards, the data were analysed with in-house developed software that performed optical density to dose conversion and optimal fitting of the PDD data to de-noise the raw data. From the PDD data, R₈₀ values were solved for and compared with acceptance values. A series of tests were also carried out to validate the use of the scanner for film dosimetry. These tests are presented in this study. It was found that this method of R₈₀ evaluation was reliable, with good agreement with benchmark water tank measurements using a commercial parallel-plate ionization chamber as the radiation detector. The EBT2 film data yielded R₈₀ values that were on average 0.06 cm different from benchmark water-tank-measured R₈₀ values.
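
    As an illustration of the R₈₀ extraction step, here is a minimal sketch in Python that interpolates the depth of the 80% dose level on the descending branch of a percentage depth dose (PDD) curve. It assumes already de-noised data; the paper's optical-density-to-dose conversion and curve fitting are not reproduced here.

    import numpy as np

    def r80_from_pdd(depth_cm, dose):
        """Depth (cm) beyond the dose maximum where the PDD falls to 80%."""
        depth_cm = np.asarray(depth_cm, dtype=float)
        pdd = 100.0 * np.asarray(dose, dtype=float) / np.max(dose)
        i_max = int(np.argmax(pdd))
        d_tail, p_tail = depth_cm[i_max:], pdd[i_max:]   # descending branch
        # np.interp needs increasing x, so flip the descending PDD values.
        return float(np.interp(80.0, p_tail[::-1], d_tail[::-1]))

    # Synthetic, noise-free depth-dose curve as a quick self-test:
    z = np.linspace(0.0, 6.0, 121)
    d = np.exp(-((z - 2.0) / 1.4) ** 2)
    print(round(r80_from_pdd(z, d), 2))   # ~2.66 cm for this toy curve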

  7. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  8. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, and how to make it work more efficiently in terms of quality and productivity.

  9. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no benchmark exists that targets multi-tenant customizable databases.

  10. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to obtain superiority over competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the quickly evolving economic environment. The approach supports innovation in relation to traditional methods and is based on the will of those managers who want to determine limits and seek excellence. The end of the twentieth century was the period of broad expression of benchmarking in various areas and of its transformation from a simple quantitative analysis tool to a resource of information on the performance and quality of goods and services.

  11. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, to think and to re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolio, with performance as the final goal. The pressure of the driving forces known as the 2030 Megatrends (Globalization 2.0, environmental crisis and the scarcity of resources, individualism and value pluralism, and demographic change) adds to this challenge. This paper examines whether using benchmarking is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  12. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information ...

  13. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
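
    To make the STREAM metric concrete, the sketch below times a NumPy version of the STREAM "triad" kernel and converts it to a bandwidth figure. This is an illustration of what the benchmark measures, not a faithful port: the official STREAM is C/Fortran, and the NumPy expression incurs a temporary array, so it understates the true figure.

    import time
    import numpy as np

    n = 10_000_000
    b, c, a = np.random.rand(n), np.random.rand(n), np.empty(n)
    scalar = 3.0

    t0 = time.perf_counter()
    np.add(b, scalar * c, out=a)      # STREAM triad: a = b + s*c
    t1 = time.perf_counter()

    # Triad touches three arrays of 8-byte doubles: read b, read c, write a.
    gbytes = 3 * 8 * n / 1e9
    print(f"{gbytes / (t1 - t0):.1f} GB/s (NumPy approximation)")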

  14. IOP Physics benchmarks of the VELO upgrade

    CERN Document Server

    AUTHOR|(CDS)2068636

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  15. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  16. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Labratoire de Physique Subatomique et de Cosmolo-gie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  17. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    the visible human CT scans from the National Library of Medicine, are essential for producing realistic images. Sets of test cases with systematic and random errors in selected setup parameters and anatomic volumes are suitable for use as standard benchmarks by the radiotherapy community. In addition to serving as an aid to research and development, benchmark images may also be useful for evaluation of commercial systems and as part of a quality assurance program for clinical systems. Test cases and software are available upon request

  18. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  19. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  20. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    Science.gov (United States)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and / or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D1 as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  1. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  2. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  3. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data remain confidential. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping state during the computation. We ran the system with two servers doing the secure computation using a database with information on about 2500 users. Answers arrived in about 25 seconds.

  4. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  5. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a

  6. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with those based on ENDF/B6. In most thermal and fast uranium criticality benchmarks, the calculated keff values with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  7. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept of an “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  8. (Invited) Microreactors for Characterization and Benchmarking of Photocatalysts

    DEFF Research Database (Denmark)

    Vesborg, Peter Christian Kjærgaard; Dionigi, Fabio; Trimarco, Daniel Bøndergaard

    2015-01-01

    In the field of photocatalysis, the batch nature of the typical benchmarking experiment makes it very laborious to obtain good kinetic data as a function of parameters such as illumination wavelength, irradiance, catalyst temperature, reactant composition, etc. Microreactors with on-line mass spectrometry, on the other hand, allow fast and automated acquisition of quantitative kinetic data. [1,2] As an example, we show how microreactor experiments on water splitting using Pt- or Rh-loaded GaN:ZnO photocatalysts quickly rank different catalysts according to their activity for gas-phase water splitting - but also how the activity scales with relative humidity, and the crucial role of CrOx "capping" of the Pt or Rh co-catalyst in order to prevent the loss of H2/O2 product via backward reaction on the precious metal. [3,4] The data suggest that protons transfer via the catalyst surface between...

  9. A new quality index for benchmarking of different cellulose nanofibrils.

    Science.gov (United States)

    Desmaisons, Johanna; Boutonnet, Elisa; Rueff, Martine; Dufresne, Alain; Bras, Julien

    2017-10-15

    From a single plant source, a wide range of mechanically-deconstructed cellulose nanomaterials can be obtained due to the large number of possible combinations of pre-treatments, mechanical disintegration processes, and post-treatments. This leads to the existence of a variety of cellulose nanofibrils with different shapes, morphologies, and properties on the market. The resulting material is actually a complex mixture of nanoscale particles, microfibrillated fibers, and residual fibers on the millimeter scale. Defining a "degree of fibrillation" for determining the final cellulose nanofibril quality is a challenging issue. This study proposes a multi-criteria method to express the quality of cellulose nanofibril suspensions as a single quantitative grade. Using this method, the influence of different parameters such as pulp conditioning, refining, and hemicellulose content on the defibrillation process is highlighted. The method also allows for the benchmarking of different commercial nanocellulose products. Copyright © 2017. Published by Elsevier Ltd.

  10. SCANNING AND TRACKING MONITORING APPARATUS AND METHOD

    DEFF Research Database (Denmark)

    2017-01-01

    Disclosed is a scanning monitoring apparatus for medical imaging, the scanning monitoring apparatus comprising a controller unit and a display, wherein the controller unit during a scanning session is configured to: obtain tracking data (102) of a subject in a medical scanner; obtain scanner data indicative of operating parameters of the medical scanner (104); determine an output of a verification function based on the tracking data and the scanner data (106); and control the scanning monitoring apparatus according to the output of the verification function (108). A notification signal may...

  11. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available Thyroid scan ... for several hours before your exam because eating can affect the accuracy of the uptake measurement. Jewelry ...

  12. Radiopharmaceutical scanning agents

    International Nuclear Information System (INIS)

    1976-01-01

    This invention is directed to dispersions useful in preparing radiopharmaceutical scanning agents, to technetium labelled dispersions, to methods for preparing such dispersions and to their use as scanning agents

  13. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... the limitations of the Thyroid Scan and Uptake? What is a Thyroid Scan and Uptake? A thyroid ... body converts food to energy. What are some common uses of the procedure? The ...

  14. Nuclear Scans - Multiple Languages

    Science.gov (United States)

    ... Bone Scan - 简体中文 (Chinese, Simplified (Mandarin dialect)) Bilingual PDF Health Information Translations PET (Positron Emission Tomography) Scan - 简体中文 (Chinese, Simplified ( ...

  15. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available Thyroid scan and uptake uses small amounts of radioactive materials called radiotracers, a special ... is a branch of medical imaging that uses small amounts of radioactive material to diagnose and determine ...

  16. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... What will I experience during and after the procedure? Most thyroid scan and thyroid uptake ... you otherwise, you may resume your normal activities after your nuclear medicine scan. If any special instructions ...

  17. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  18. Ontology for Semantic Data Integration in the Domain of IT Benchmarking.

    Science.gov (United States)

    Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut

    2018-01-01

    A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
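
    A small sketch of what using such an ontology might look like with rdflib. The namespace IRI is the one given in the abstract, but the class and property names below are invented placeholders; the real terms should be taken from the published ontology.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    BM = Namespace("https://w3id.org/bmontology#")  # base IRI from the abstract
    g = Graph()
    g.bind("bm", BM)

    # Hypothetical terms: an indicator class and one measured value.
    g.add((BM.EmployeeCosts, RDF.type, RDFS.Class))
    g.add((BM.indicator1, RDF.type, BM.EmployeeCosts))
    g.add((BM.indicator1, BM.hasValue, Literal(42000.0)))

    print(g.serialize(format="turtle"))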

  19. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    Science.gov (United States)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from the use of a model with fewer simplifications of the governing equations, because it can be time-consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
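
    The calibration idea lends itself to a compact sketch. The two-parameter model form below (a rainfall scale factor plus an integer lag in weeks) is an assumption for illustration; the abstract names the parameters but not the exact equations. The scale factor is fitted with SciPy's BFGS, as in the study, while the lag is handled by a small grid search since it is discrete.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    rain = rng.gamma(2.0, 1.0, size=156)                       # 3 yr of weekly rainfall
    q_obs = 0.6 * np.roll(rain, 2) + rng.normal(0, 0.05, 156)  # synthetic "observed"

    best = None
    for lag in range(8):                                # discrete lag, in weeks
        shifted = np.roll(rain, lag)                    # roll wraps; fine for a toy
        sse = lambda s: float(np.sum((q_obs - s[0] * shifted) ** 2))
        res = minimize(sse, x0=[1.0], method="BFGS")    # fit the scale factor
        if best is None or res.fun < best[0]:
            best = (res.fun, lag, res.x[0])
    print(f"lag = {best[1]} weeks, scale = {best[2]:.2f}")  # expect 2 and ~0.6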

  20. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  1. Regional restoration benchmarks for Acropora cervicornis

    Science.gov (United States)

    Schopmeyer, Stephanie A.; Lirman, Diego; Bartels, Erich; Gilliam, David S.; Goergen, Elizabeth A.; Griffin, Sean P.; Johnson, Meaghan E.; Lustic, Caitlin; Maxwell, Kerry; Walter, Cory S.

    2017-12-01

    Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals has allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis. Here, we combined an extensive dataset collected by restoration practitioners to document early (1-2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals under normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just like wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr-1 for nursery corals and 4.8 cm yr-1 for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the
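
    The stop-light idea above can be made concrete in a few lines of Python. The thresholds are the ones proposed in the abstract; the mapping from the number of passed checks to a colour is an invented illustration, since the abstract does not spell one out.

    def stoplight(donor_tissue, nursery_surv, outplant_surv):
        """Grade first-year A. cervicornis metrics against the proposed benchmarks."""
        checks = {
            "donor live tissue > 75%":     donor_tissue > 0.75,
            "nursery survivorship > 80%":  nursery_surv > 0.80,
            "outplant survivorship > 70%": outplant_surv > 0.70,
        }
        n_pass = sum(checks.values())
        colour = "green" if n_pass == 3 else ("yellow" if n_pass == 2 else "red")
        return colour, checks

    print(stoplight(0.82, 0.86, 0.68))   # fails the outplant check -> 'yellow'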

  2. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
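
    A dual-Monod rate law of the kind named above is simple to write down; the sketch below shows the general form for a process limited by both the electron donor (acetate) and an acceptor such as U(VI). All numbers are invented placeholders, not benchmark inputs.

    def dual_monod_rate(k_max, biomass, donor, acceptor, K_donor, K_acceptor):
        """Monod-type rate limited by both the electron donor and acceptor."""
        return (k_max * biomass
                * donor / (K_donor + donor)
                * acceptor / (K_acceptor + acceptor))

    # Illustrative magnitudes only (e.g. mol/L and 1/s):
    r = dual_monod_rate(k_max=1e-5, biomass=1e-4, donor=3e-3, acceptor=1e-6,
                        K_donor=1e-4, K_acceptor=5e-7)
    print(f"{r:.3e}")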

  3. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
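
    Of the three methods compared, the regression-residual approach is the easiest to illustrate: regress energy use on explanatory drivers and rank areas by how far they fall below the fitted line. The variables below are invented stand-ins for the study's data.

    import numpy as np

    rng = np.random.default_rng(1)
    n_areas = 30
    # Columns: intercept, income proxy, industrial-activity proxy (all toy data).
    X = np.column_stack([np.ones(n_areas),
                         rng.normal(size=n_areas),
                         rng.normal(size=n_areas)])
    y = X @ np.array([10.0, 1.5, 3.0]) + rng.normal(0, 0.5, n_areas)  # energy use

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    residuals = y - X @ beta
    # Negative residual = uses less energy than its drivers predict.
    print("most efficient areas:", np.argsort(residuals)[:5])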

  4. An OpenMP Compiler Benchmark

    Directory of Open Access Journals (Sweden)

    Matthias S. Müller

    2003-01-01

    Full Text Available The purpose of this benchmark is to propose several optimization techniques and to test their existence in current OpenMP compilers. Examples are the removal of redundant synchronization constructs, effective constructs for alternative code and orphaned directives. The effectiveness of the compiler generated code is measured by comparing different OpenMP constructs and compilers. If possible, we also compare with the hand coded "equivalent" solution. Six out of seven proposed optimization techniques are already implemented in different compilers. However, most compilers implement only one or two of them.

  5. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  6. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  7. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  8. Benchmarking EUV mask inspection beyond 0.25 NA

    International Nuclear Information System (INIS)

    Goldberg, Kenneth A.; Mochi, Iacopo; Anderson, Erik H.; Rekawa, Seno B.; Kemp, Charles D.; Huh, S.; Han, H.-S.; Naulleau, P.; Gunion, R.F.

    2008-01-01

    The SEMATECH Berkeley Actinic Inspection Tool (AIT) is an EUV-wavelength mask inspection microscope designed for direct aerial image measurements and pre-commercial EUV mask research. Operating on a synchrotron bending magnet beamline, the AIT uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a CCD camera. We present the results of recent system upgrades that have improved the imaging resolution, illumination uniformity, and partial coherence. Benchmarking tests show image contrast above 75% for 100-nm mask features, and significant improvements across the full range of measured sizes. The zoneplate lens has been replaced by an array of user-selectable zoneplates with higher magnification and NA values up to 0.0875, emulating the spatial resolution of a 0.35-NA 4x EUV stepper. Illumination uniformity is above 90% for mask areas 2-µm-wide and smaller. An angle-scanning mirror reduces the high coherence of the synchrotron beamline light source, giving measured σ values of approximately 0.125 at 0.0875 NA.

  9. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
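    The record above describes MCMC-based BMD estimation. As a rough illustration of the idea (not the BBMD system's actual model or code), the following sketch fits a log-logistic dose-response model to hypothetical dichotomous data with a Metropolis sampler and reads off a posterior BMD distribution for a 10% extra-risk benchmark response; the model form, priors, proposal scales, and all data are assumptions.

        # Illustrative only: Metropolis sampling for a log-logistic dichotomous
        # dose-response model; doses, counts, priors and tuning are assumptions.
        import numpy as np

        doses = np.array([0.0, 10.0, 50.0, 150.0, 400.0])   # hypothetical doses
        n     = np.array([50,  50,   50,   50,    50])      # animals per group
        k     = np.array([1,   2,    7,    20,    41])      # hypothetical responders

        def p_response(d, g, a, b):
            # background g plus log-logistic induced risk
            return g + (1 - g) / (1 + np.exp(-a - b * np.log(np.maximum(d, 1e-8))))

        def log_post(theta):
            g, a, b = theta
            if not (0 < g < 1 and b > 0):                   # flat priors, simple bounds
                return -np.inf
            p = p_response(doses, g, a, b)
            return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

        rng = np.random.default_rng(0)
        theta = np.array([0.05, -5.0, 1.0])
        lp, samples = log_post(theta), []
        for i in range(20000):
            prop = theta + rng.normal(scale=[0.01, 0.2, 0.05])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
                theta, lp = prop, lp_prop
            if i > 5000 and i % 10 == 0:                    # thin after burn-in
                samples.append(theta.copy())

        # BMD for 10% extra risk: solve (p(d) - g)/(1 - g) = 0.10 for d
        bmr = 0.10
        bmds = [np.exp((np.log(bmr / (1 - bmr)) - a) / b) for g, a, b in samples]
        print("posterior BMD median:", np.median(bmds))
        print("BMDL (5th percentile):", np.percentile(bmds, 5))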

  10. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.
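    As a minimal sketch of the adiabatic point-kinetics approach described above (one delayed-neutron group, reactivity feedback evaluated from the accumulated core energy), the following toy integration illustrates the calculation; all parameter values are hypothetical placeholders, not TREAT or TREKIN data.

        # One-group point kinetics with adiabatic energy feedback (toy values).
        beta, lam, Lam = 0.0073, 0.08, 8.0e-4   # delayed fraction, decay const (1/s), gen. time (s)
        alpha = -1.0e-4                         # reactivity per unit energy (assumed)
        rho_ins = 0.015                         # inserted rod-bank worth (assumed)

        P, E = 1.0, 0.0                         # relative power, accumulated energy
        C = beta / (lam * Lam)                  # equilibrium precursor level
        dt = 1.0e-4
        for _ in range(int(2.0 / dt)):          # 2 s transient
            rho = rho_ins + alpha * E           # adiabatic feedback from energy
            dP = ((rho - beta) / Lam) * P + lam * C
            dC = (beta / Lam) * P - lam * C
            P, C = P + dP * dt, C + dC * dt
            E += P * dt                         # adiabatic heating: E = integral of P dt
        print("relative power:", P, "energy:", E)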

  11. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    Nikonov, S. P.; Velkov, K.; Pautz, A.

    2010-01-01

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary condition problem by the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling and comparisons performed for the control rod movement and the reactor power history. (Authors)

  12. Application of dynamic benchmarking of rotating machinery for eMaintenance

    OpenAIRE

    Galar, Diego; Berges, Luis

    2010-01-01

    The vibration analysis and condition monitoring technology is based on comparison of measurements obtained with benchmarks suggested by manufacturers or standards. In this case, the references provided by current rules are static and independent of parameters such as age or environmental conditions in which the machine is analyzed.New communication technologies allow the integration of eMaintenance systems, production and real-time data or the result of vibration routes. The integration of al...

  13. Benchmarking the internal combustion engine and hydrogen

    International Nuclear Information System (INIS)

    Wallace, J.S.

    2006-01-01

    The internal combustion engine is a cost-effective and highly reliable energy conversion technology. Exhaust emission regulations introduced in the 1970s triggered extensive research and development that has significantly improved in-use fuel efficiency and dramatically reduced exhaust emissions. The current level of gasoline vehicle engine development is highlighted and representative emissions and efficiency data are presented as benchmarks. The use of hydrogen fueling for IC engines has been investigated over many decades and the benefits and challenges arising are well known. The current state of hydrogen-fueled engine development will be reviewed and evaluated against gasoline-fueled benchmarks. The prospects for further improvements to hydrogen-fueled IC engines will be examined. While fuel cells are projected to offer greater energy efficiency than IC engines and zero emissions, the availability of fuel cells in quantity at reasonable cost is a barrier to their widespread adoption in the near future. In their current state of development, hydrogen-fueled IC engines are an effective technology to create demand for hydrogen fueling infrastructure until fuel cells become available in commercial quantities. During this transition period, hydrogen-fueled IC engines can achieve PZEV/ULSLEV emissions. (author)

  14. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a `lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  15. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  16. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
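    The oscillation problem mentioned above is characteristic of non-upwinded discretizations of the 1-D advection equation. A toy comparison (not the STREAM or WASP5 code) of a centered-difference update against first-order upwinding for a continuous, long-duration release:

        # 1-D advection of a step release: centered differences oscillate,
        # first-order upwinding stays monotone (grid and velocity arbitrary).
        import numpy as np

        nx, dx, u, dt, steps = 200, 1.0, 1.0, 0.5, 100    # Courant number = 0.5
        c_cen = np.zeros(nx)
        c_up = np.zeros(nx)
        for _ in range(steps):
            c_cen[0] = c_up[0] = 1.0                      # continuous release at inlet
            # explicit centered scheme: produces growing spurious oscillations
            c_cen[1:-1] = c_cen[1:-1] - u * dt * (c_cen[2:] - c_cen[:-2]) / (2 * dx)
            # first-order upwind: monotone but numerically diffusive
            c_up[1:] = c_up[1:] - u * dt * (c_up[1:] - c_up[:-1]) / dx

        print("centered min/max:", c_cen.min(), c_cen.max())  # under/overshoots
        print("upwind   min/max:", c_up.min(), c_up.max())    # stays within [0, 1]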

  17. Multisensor benchmark data for riot control

    Science.gov (United States)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proven. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  18. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
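    The core of any region-overlap query, including SQL-based algorithms of the kind benchmarked above, is the interval-intersection predicate: two regions on the same chromosome overlap iff each starts before the other ends. A minimal sketch (the RegMap SQL itself is not reproduced here; table and column names are hypothetical):

        # Closed intervals [a_start, a_end], [b_start, b_end] overlap iff
        # each starts before the other ends (and chromosomes match).
        def regions_overlap(a, b):
            """a, b: (chrom, start, end) tuples with start <= end."""
            return a[0] == b[0] and a[1] <= b[2] and b[1] <= a[2]

        # The same predicate as a SQL join condition (hypothetical schema):
        OVERLAP_SQL = """
        SELECT x.id, y.id
        FROM regions x JOIN regions y
          ON x.chrom = y.chrom AND x.rstart <= y.rend AND y.rstart <= x.rend
        """

        assert regions_overlap(("chr1", 100, 200), ("chr1", 150, 300))
        assert not regions_overlap(("chr1", 100, 200), ("chr2", 150, 300))
        assert not regions_overlap(("chr1", 100, 200), ("chr1", 201, 300))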

  19. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  20. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  1. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing’ ethos, which permeates the entire curriculum. This paper documents benchmarking of education for managing innovation. Using a business simulation, Bachelor of Business Year 3 learners in a business strategy class explored, through a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in the online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  2. CT scanning in meningitis

    Energy Technology Data Exchange (ETDEWEB)

    Wardle, Stephan; Carty, Helen (Royal Liverpool Children' s Hospital, Liverpool (United Kingdom). Department of Radiology)

    12 pediatric cases of acute meningitis were reviewed retrospectively. Findings on CT scan were compared with the clinical course and resulting neurological sequelae. Complications detected by CT scanning include subdural effusion, empyema, hydrocephalus, cerebral atrophy, oedema and infarction. The CT scan results adequately correlated with neurological signs in most cases. Infarction was a reliable indicator of neurological sequelae. Cerebral atrophy alone, however, did not correlate well with the clinical sequelae. (author). 19 refs.; 5 figs.; 6 tabs.

  3. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  4. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    Science.gov (United States)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  5. Accuracy and Uncertainty Analysis of PSBT Benchmark Exercises Using a Subchannel Code MATRA

    Directory of Open Access Journals (Sweden)

    Dae-Hyun Hwang

    2012-01-01

    Full Text Available In the framework of the OECD/NRC PSBT benchmark, the subchannel grade void distribution data and DNB data were assessed by a subchannel code, MATRA. The prediction accuracy and uncertainty of the zone-averaged void fraction at the central region of the 5 × 5 test bundle were evaluated for the steady-state and transient benchmark data. Optimum values of the turbulent mixing parameter were evaluated for the subchannel exit temperature distribution benchmark. The influence of the mixing vanes on the subchannel flow distribution was investigated through a CFD analysis. In addition, a regionwise turbulent mixing model was examined to account for the nonhomogeneous mixing characteristics caused by the vane effect. The steady-state DNB benchmark data with uniform and nonuniform axial power shapes were evaluated by employing various DNB prediction models: EPRI bundle CHF correlation, AECL-IPPE 1995 CHF lookup table, and representative mechanistic DNB models such as a sublayer dryout model and a bubble crowding model. The DNBR prediction uncertainties for various DNB models were evaluated from a Monte-Carlo simulation for a selected steady-state condition.

  6. Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016

    Science.gov (United States)

    Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert

    2018-01-01

    This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294

  8. A Dwarf-based Scalable Big Data Benchmarking Methodology

    OpenAIRE

    Gao, Wanling; Wang, Lei; Zhan, Jianfeng; Luo, Chunjie; Zheng, Daoyi; Jia, Zhen; Xie, Biwei; Zheng, Chen; Yang, Qiang; Wang, Haibin

    2017-01-01

    Different from the traditional benchmarking methodology that creates a new benchmark or proxy for every possible workload, this paper presents a scalable big data benchmarking methodology. Among a wide variety of big data analytics workloads, we identify eight big data dwarfs, each of which captures the common requirements of each class of unit of computation while being reasonably divorced from individual implementations. We implement the eight dwarfs on different software stacks, e.g., Open...

  9. Probing New Physics through B Mixing: Status, Benchmarks and Prospects

    CERN Document Server

    Ball, P; Ball, Patricia; Fleischer, Robert

    2006-01-01

    As is well known, $B^0_{d,s}$--$\bar B^0_{d,s}$ mixing offers a profound probe into the effects of physics beyond the Standard Model. The data obtained at the $e^+e^-$ $B$ factories have already provided valuable insights into the $B_d$-meson system, and very recently also the $B^0_s$--$\bar B^0_s$ oscillation frequency $\Delta M_s$ has been measured at the Tevatron. We give a critical discussion of the interpretation of these data in terms of model-independent new-physics parameters. We address in particular the impact of the uncertainties of the relevant input parameters, set benchmarks for their accuracies as required by future precision measurements at the LHC, and explore the prospects for new CP-violating effects in the $B_s$ system. To complement our model-independent analysis, we also discuss the constraints imposed by the CDF measurement of $\Delta M_s$ on popular models of new physics, namely scenarios with an extra $Z'$ boson and supersymmetry. We find that the new data still leave sizeable room fo...

  10. Exposure artifacts in raster scanned equalization radiography

    International Nuclear Information System (INIS)

    Plewes, D.B.; Vogelstein, E.

    1984-01-01

    The image artifacts characteristic of a scanning chest radiographic system are reviewed. The technique employs a pulsed beam of radiation swept in an overlapping raster pattern that can result in severe ripple and scan line artifacts with improper scanning parameters. A one-dimensional treatment of the scanner geometry shows that the artifacts can be eliminated when the beam width is an integral multiple of interpulse spacing. An extension to a two-dimensional analysis indicates that with the collimator geometries employed, artifact-free images are not possible with a fixed x-ray frequency but can be achieved when a variable frequency source is used. A treatment of the sensitivity for artifact formation shows that with proper choice of scanning parameters sizable errors in beam width can be tolerated without significant artifact formation
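    The one-dimensional condition quoted above can be checked numerically: the summed exposure from a raster of rectangular beam pulses is ripple-free exactly when the beam width is an integral multiple of the interpulse spacing. A small sketch with arbitrary numbers:

        # Summed exposure of half-open rectangular beam footprints on a grid.
        import numpy as np

        def exposure_ripple(beam_width, spacing, n_pulses=60, dx=0.01):
            x = np.arange(0, n_pulses * spacing, dx)
            dose = np.zeros_like(x)
            for i in range(n_pulses):
                c = i * spacing
                dose += ((x - c >= -beam_width / 2) & (x - c < beam_width / 2)).astype(float)
            mid = dose[len(dose) // 4 : -(len(dose) // 4)]   # ignore edge roll-off
            return (mid.max() - mid.min()) / mid.mean()

        print(exposure_ripple(beam_width=4.0, spacing=2.0))  # integral multiple: 0.0
        print(exposure_ripple(beam_width=5.0, spacing=2.0))  # non-integral: ripple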

  11. Benchmarking Pt-based electrocatalysts for low temperature fuel cell reactions with the rotating disk electrode

    DEFF Research Database (Denmark)

    Pedersen, Christoffer Mølleskov; Escribano, Maria Escudero; Velazquez-Palenzuela, Amado Andres

    2015-01-01

    We present up-to-date benchmarking methods for testing electrocatalysts for polymer exchange membrane fuel cells (PEMFC), using the rotating disk electrode (RDE) method. We focus on the oxygen reduction reaction (ORR) and the hydrogen oxidation reaction (HOR) in the presence of CO. We have chosen...... our experimental methods to provide the most optimal compromise between the ease of carrying out the measurements and for ensuring comparability with PEMFC conditions. For the ORR, the effect of temperature, scan rate, Ohmic drop correction and background subtraction on the catalyst activity...

  12. Benchmarking Best Practices in Transformation for Sea Enterprise

    National Research Council Canada - National Science Library

    Brook, Douglas A; Hudgens, Bryan; Nguyen, Nam; Walsh, Katherine

    2006-01-01

    ... applied to reinvestment and recapitalization. Sea Enterprise contracted the Center for Defense Management Reform to research transformation and benchmarking best practices in the private sector...

  13. Benchmark hydrogeophysical data from a physical seismic model

    Science.gov (United States)

    Lorenzo, Juan M.; Smolkin, David E.; White, Christopher; Chollett, Shannon R.; Sun, Ting

    2013-01-01

    Theoretical fluid flow models are used regularly to predict and analyze porous media flow but require verification against natural systems. Seismic monitoring in a controlled laboratory setting at a nominal scale of 1:1000 in the acoustic frequency range can help improve fluid flow models as well as elasto-granular models for uncompacted saturated-unsaturated soils. A mid-scale sand tank allows for many highly repeatable, yet flexible, experimental configurations with different material compositions and pump rates while still capturing phenomena such as patchy saturation, flow fingering, or layering. The tank (~6×9×0.44 m) contains a heterogeneous sand pack (1.52-1.7 phi). In a set of eight benchmark experiments the water table is raised inside the sand body at increments of ~0.05 m. Seismic events (vertical component) are recorded by a pseudowalkaway 64-channel accelerometer array (20 Hz-20 kHz), at 78 kS/s, in 100-scan stacks so as to optimize signal-to-noise ratio. Three screened well sites monitor water depth (±3 mm) inside the sand body. Seismic data sets in SEG Y format are publicly downloadable from the internet (http://github.com/cageo/Lorenzo-2012), in order to allow comparisons of different seismic and fluid flow analyses. The capillary fringe does not appear to completely saturate, as expected, because the interpreted compressional-wave velocity values remain so low (<210 m/s). Even at the highest water levels there is no large seismic impedance contrast across the top of the water table to generate a clear reflector. Preliminary results indicate an immediate need for several additional experiments whose data sets will be added to the online database. Future benchmark data sets will grow with a control data set to show conditions in the sand body before water levels rise, and a surface 3D data set. In later experiments, buried sensors will help reduce seismic attenuation effects and in-situ saturation sensors will provide calibration values.
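    A sketch of loading one of the downloadable SEG Y records for analysis, assuming the segyio library is available; the file name is a placeholder for a file fetched from the repository linked above.

        # Assumes the segyio package; the file name is a placeholder.
        import numpy as np
        import segyio

        with segyio.open("tank_shot_001.sgy", ignore_geometry=True) as f:
            data = np.stack([np.asarray(tr) for tr in f.trace])  # (traces, samples)
            dt_us = segyio.tools.dt(f)                           # sample interval in us
        print(data.shape, dt_us)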

  14. MIMIC: An Innovative Methodology for Determining Mobile Laser Scanning System Point Density

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2014-08-01

    Full Text Available Understanding how various Mobile Mapping System (MMS) laser hardware configurations and operating parameters exercise different influence on point density is important for assessing system performance, which in turn facilitates system design and MMS benchmarking. Point density also influences data processing, as objects that can be recognised using automated algorithms generally require a minimum point density. Although obtaining the necessary point density impacts on hardware costs, survey time and data storage requirements, a method for accurately and rapidly assessing MMS performance is lacking for generic MMSs. We have developed a method for quantifying point clouds collected by an MMS with respect to known objects at specified distances using 3D surface normals, 2D geometric formulae and line drawing algorithms. These algorithms were combined in a system called the Mobile Mapping Point Density Calculator (MIMIC) and were validated using point clouds captured by both a single scanner and a dual scanner MMS. Results from MIMIC were promising: when considering the number of scan profiles striking the target, the average error equated to less than 1 point per scan profile. These tests highlight that MIMIC is capable of accurately calculating point density for both single and dual scanner MMSs.
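    A back-of-envelope version of the quantity MIMIC computes, under strongly simplified assumptions (single scanner, flat target facing the scanner, uniform angular sampling): profile spacing follows from vehicle speed and mirror rotation rate, and point spacing within a profile from the angular step at a given range. All numbers are illustrative.

        # Simplified point-count estimate for a mobile laser scanner target.
        import math

        def mms_point_count(speed_mps, scan_rate_hz, pulse_rate_hz,
                            target_range_m, target_width_m, target_height_m):
            profile_spacing = speed_mps / scan_rate_hz             # m between profiles
            n_profiles = int(target_width_m / profile_spacing)     # profiles striking target
            ang_step = 2 * math.pi * scan_rate_hz / pulse_rate_hz  # rad between pulses
            pt_spacing = target_range_m * ang_step                 # m between points on target
            return n_profiles * int(target_height_m / pt_spacing)

        # e.g. 15 m/s drive, 100 Hz mirror, 300 kHz pulse rate, 1 m x 2 m sign at 20 m
        print(mms_point_count(15.0, 100.0, 300_000.0, 20.0, 1.0, 2.0))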

  15. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available Toggle navigation Test/Treatment Patient Type Screening/Wellness Disease/Condition Safety En Español More Info Images/Videos About Us News Physician Resources Professions Site Index A-Z Thyroid Scan and Uptake Thyroid scan and uptake uses ...

  16. Model PET Scan Activity

    Science.gov (United States)

    Strunk, Amber; Gazdovich, Jennifer; Redouté, Oriane; Reverte, Juan Manuel; Shelley, Samantha; Todorova, Vesela

    2018-05-01

    This paper provides a brief introduction to antimatter and how it, along with other modern physics topics, is utilized in positron emission tomography (PET) scans. It further describes a hands-on activity for students to help them gain an understanding of how PET scans assist in detecting cancer. Modern physics topics provide an exciting way to introduce students to current applications of physics.

  17. Scanning laser Doppler vibrometry

    DEFF Research Database (Denmark)

    Brøns, Marie; Thomsen, Jon Juel

    With a Scanning Laser Doppler Vibrometer (SLDV) a vibrating surface is automatically scanned over predefined grid points, and data processed for displaying vibration properties like mode shapes, natural frequencies, damping ratios, and operational deflection shapes. Our SLDV – a PSV-500H from...

  18. Bone scan and joint scan of hands and feet in rheumatoid arthritis

    International Nuclear Information System (INIS)

    Carpentier, N.; Verbeke, S.; Perdrisot, R.; Grilo, R.M.; Quenesson, E.; Bonnet, C.; Vergne, P.; Treves, R.; Bertin, P.; Boutros-Toni, F.

    2000-01-01

    The aim of this study was to determine the ability of joint scan and bone scan of hands and feet, in patients with rheumatoid arthritis, to localize the altered joints. The sensitivity, the specificity, the positive predictive value (PPV) and the negative predictive value (NPV) of the joint scan were determined in comparison with clinical joint assessment. Fifteen patients (780 joints) were clinically examined (pain and synovitis); during the same day, a bone scan and a joint scan were performed after intravenous injection of 99mTc-oxidronate. Patients were scanned 5 minutes (tissual time, Tt) and 3 h 15 min (bone time, T0) after the administration. The uptake of the bisphosphonate was evaluated with a qualitative method using a grey scale. The uptake of 99mTc-oxidronate was quantitated using an extra-articular region of interest. The sensitivity, specificity, PPV and NPV of the scan at Tt were 46%, 96%, 85% and 78%, respectively. The same parameters were 75%, 66%, 53% and 84% for the scan performed at T0. The joint scan showed 22% false positives. These false positives could be a consequence of an earlier detection of joint alterations by the scan; the joint scan may thus forecast the evolution of joints in patients with rheumatoid arthritis. (author)
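    The four diagnostic metrics reported above follow directly from a 2x2 table of scan findings against the clinical reference standard; a minimal sketch with hypothetical counts:

        # Sensitivity, specificity, PPV and NPV from a 2x2 table (toy counts).
        def diagnostic_metrics(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),  # altered joints correctly detected
                "specificity": tn / (tn + fp),  # normal joints correctly cleared
                "PPV": tp / (tp + fp),          # positive scans truly altered
                "NPV": tn / (tn + fn),          # negative scans truly normal
            }

        print(diagnostic_metrics(tp=60, fp=11, fn=70, tn=250))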

  19. Transverse section scanning mechanism

    International Nuclear Information System (INIS)

    Doherty, E.J.

    1978-01-01

    Apparatus is described for scanning a transverse, radionuclide scan-field using an array of focussed collimators. The collimators are movable tangentially on rails, driven by a single motor via a coupled screw. The collimators are also movable in a radial direction on rails driven by a step motor via coupled screws and bevel gears. Adjacent bevel gears rotate in opposite directions so adjacent collimators move in radially opposite directions. In use, the focal point of each collimator scans at least half of the scan-field, e.g. a human head located in the central aperture, and the electrical outputs of detectors associated with each collimator are used to determine the distribution of radioactive emission intensity at a number of points in the scan-field. (author)

  20. Beam scanning system

    International Nuclear Information System (INIS)

    Enge, H.A.

    1977-01-01

    A system for deflecting a beam of particles having different momenta, preferably through a 90° angle, so as to cause the beam to impinge upon a moving target and to scan across the target is described. The system includes a means responsive to a beam from a suitable source for causing the beam to periodically scan in a scanning plane and further means for deflecting the periodically scanned beam through the desired angle in a deflection plane so that the deflected beam impinges on the target. Means are included in the system for reducing the momentum dispersion at the target in both the deflection and the scanning planes and for spatially focussing the beam so as to produce a desired beam diameter at the target

  1. Development of an MPI benchmark program library

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, Hitoshi

    2001-03-01

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale and high performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point communications and collective communications, and the performance of major communication patterns that are often found in application programs. In this report, a description of MBL and a performance analysis of MPI/SX measured on the SX-4 are presented. (author)
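    A minimal ping-pong probe in the spirit of a point-to-point MPI benchmark (this is not the MBL code itself), sketched with mpi4py; run under an MPI launcher, e.g. mpiexec -n 2 python pingpong.py:

        # Point-to-point ping-pong timing with mpi4py (illustrative, not MBL).
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        reps = 1000

        for size in (8, 1024, 131072, 1048576):        # message sizes in bytes
            buf = np.zeros(size, dtype=np.uint8)
            comm.Barrier()
            t0 = MPI.Wtime()
            for _ in range(reps):
                if rank == 0:
                    comm.Send(buf, dest=1)
                    comm.Recv(buf, source=1)
                elif rank == 1:
                    comm.Recv(buf, source=0)
                    comm.Send(buf, dest=0)
            dt = (MPI.Wtime() - t0) / (2 * reps)       # one-way time per message
            if rank == 0:
                print(f"{size:8d} B  {dt * 1e6:8.2f} us  {size / dt / 1e6:8.1f} MB/s")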

  2. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  3. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  4. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
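    As a sketch of how the figure of merit enters device performance: in the standard OECT saturation-regime model, transconductance scales as g_m = (W d / L) µC* (V_th - V_g). The values below are illustrative, not data from the paper.

        # g_m = (W d / L) * uC* * (V_th - V_g); values are illustrative.
        def oect_gm(width_m, thickness_m, length_m, uC_star_si, v_th, v_g):
            return (width_m * thickness_m / length_m) * uC_star_si * (v_th - v_g)

        # uC* = 100 F cm^-1 V^-1 s^-1 (hypothetical) -> SI units: F m^-1 V^-1 s^-1
        uC_star_si = 100.0 * 100.0
        gm = oect_gm(100e-6, 100e-9, 10e-6, uC_star_si, v_th=0.6, v_g=0.2)
        print(f"g_m = {gm * 1e3:.2f} mS")   # ~4 mS for this geometry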

  5. Benchmark Results for Few-Body Hypernuclei

    Science.gov (United States)

    Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.

    2017-05-01

    The Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.

  6. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  7. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  8. LIDAR COMBINED SCANNING UNIT

    Directory of Open Access Journals (Sweden)

    V. V. Elizarov

    2016-11-01

    Full Text Available Subject of Research. The results of the development of a lidar combined scanning unit for locating leaks of hydrocarbons are presented. The unit enables high-speed scanning of the investigated space in wide and narrow angle fields. Method. Scanning in a wide angular field is performed along a single-line path by means of a movable aluminum mirror with a frequency of 20 Hz and a swing amplitude of 20 degrees. Narrow-field scanning is performed along a spiral path by the deflector. The beam is deflected by rotation of the optical wedges forming part of the deflector at an angle of ±5°. The scanning unit is controlled by a specialized software product written in the C# programming language. Main Results. This scanning unit allows scanning the investigated area at a distance of 50-100 m with a spatial resolution at the level of 3 cm. The positioning accuracy of the laser beam in space is 15'. The developed scanning unit makes it possible to scan the entire investigated area in no more than 1 ms at a rotation frequency of each wedge from 50 to 200 Hz. The problem of unambiguous determination of the beam's geographical coordinates in space is solved at the software level from the rotation angles of the mirrors and optical wedges. The lidar system coordinates are determined by means of GPS. Practical Relevance. The development results open the possibility of increasing the spatial resolution of the scanning systems of a wide range of lidars and can provide high positioning accuracy of the laser beam in space.
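    The narrow-field deflector described above, two counter-rotating optical wedges, is a Risley pair; in the small-angle approximation each wedge contributes a fixed deflection in the direction of its rotation, and unequal rotation rates trace a spiral. A sketch with assumed rates and wedge angles:

        # Paraxial Risley-pair spiral: each wedge deflects by delta toward its
        # rotation angle; rates and wedge deflection are assumptions, not the paper's.
        import numpy as np

        delta = np.deg2rad(5.0)                    # per-wedge deflection (assumed)
        f1, f2 = 50.0, 52.0                        # wedge rotation rates, Hz
        t = np.linspace(0.0, 0.5, 20000)
        th1, th2 = 2 * np.pi * f1 * t, -2 * np.pi * f2 * t
        x = delta * (np.cos(th1) + np.cos(th2))    # sum of the two deflections
        y = delta * (np.sin(th1) + np.sin(th2))
        print("max total deflection (rad):", np.hypot(x, y).max())   # ~2*delta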

  9. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases, the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multi-channels, the calculated mass velocities at channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)
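    A toy two-channel version of the single-phase turbulent mixing model being tuned in finding (1) above: lateral mixing flow per unit length is taken as w' = β·s·G (mixing coefficient times gap width times average mass flux), exchanging enthalpy between channels. All values are illustrative.

        # Two-channel single-phase mixing sketch: w' = beta * s * G_avg (toy values).
        beta_mix, gap, length = 0.02, 0.003, 3.0   # mixing coeff, gap (m), length (m)
        m1 = m2 = 0.3                              # channel mass flow rates, kg/s
        G_avg, cp = 1500.0, 4.2e3                  # mass flux kg/(m^2 s), J/(kg K)
        T1, T2 = 320.0, 280.0                      # inlet temperatures
        wprime = beta_mix * gap * G_avg            # lateral mixing, kg/(s m)

        dz, z = 0.01, 0.0
        while z < length:
            q = wprime * cp * (T1 - T2) * dz       # heat carried across the gap
            T1 -= q / (m1 * cp)
            T2 += q / (m2 * cp)
            z += dz
        print(T1, T2)                              # temperatures converge axially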

  10. Benchmarking management practices in Australian public healthcare.

    Science.gov (United States)

    Agarwal, Renu; Green, Roy; Agarwal, Neeru; Randhawa, Krithika

    2016-01-01

    The purpose of this paper is to investigate the quality of management practices of public hospitals in the Australian healthcare system, specifically those in the state-managed health systems of Queensland and New South Wales (NSW). Further, the authors assess the management practices of Queensland and NSW public hospitals jointly and globally benchmark against those in the health systems of seven other countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. In this study, the authors adapt the unique and globally deployed Bloom et al. (2009) survey instrument that uses a "double blind, double scored" methodology and an interview-based scoring grid to measure and internationally benchmark the management practices in Queensland and NSW public hospitals based on 21 management dimensions across four broad areas of management - operations, performance monitoring, targets and people management. The findings reveal the areas of strength and potential areas of improvement in the Queensland and NSW Health hospital management practices when compared with public hospitals in seven countries, namely, USA, UK, Sweden, France, Germany, Italy and Canada. Together, Queensland and NSW Health hospitals perform best in operations management followed by performance monitoring. While target management presents scope for improvement, people management is the sphere where these Australian hospitals lag the most. This paper is of interest to both hospital administrators and health care policy-makers aiming to lift management quality at the hospital level as well as at the institutional level, as a vehicle to consistently deliver sustainable high-quality health services. This study provides the first internationally comparable robust measure of management capability in Australian public hospitals, where hospitals are run independently by the state-run healthcare systems. Additionally, this research study contributes to the empirical evidence base on the quality of

  11. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  12. Discretization Based on Entropy and Multiple Scanning

    Directory of Open Access Journals (Sweden)

    Jerzy W. Grzymala-Busse

    2013-04-01

    Full Text Available In this paper we present an entropy-driven methodology for discretization. Recently, the original entropy-based discretization was enhanced by including two options for selecting the best numerical attribute. In the first option, Dominant Attribute, the attribute with the smallest conditional entropy of the concept given the attribute is selected for discretization, and then the best cut point is determined. In the second option, Multiple Scanning, all attributes are scanned a number of times, and at the same time the best cut points are selected for all attributes. The results of experiments on 17 benchmark data sets, including large data sets with 175 attributes or 25,931 cases, are presented. For comparison, the results of experiments on the same data sets using the global versions of the well-known discretization methods of Equal Interval Width and Equal Frequency per Interval are also included. The entropy-driven technique enhanced both of these methods by converting them into globalized methods. The results of our experiments show that the Multiple Scanning methodology is significantly better than both Dominant Attribute and the better of the globalized Equal Interval Width and Equal Frequency per Interval methods (using a two-tailed test and a 0.01 level of significance).
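    The computation at the heart of both variants is the entropy-driven choice of a cut point: candidate cuts are midpoints between consecutive sorted values, scored by the conditional entropy of the concept given the binary split. A minimal sketch on toy data:

        # Best binary cut point by minimum conditional entropy (toy data).
        import math
        from collections import Counter

        def entropy(labels):
            n = len(labels)
            return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

        def best_cut(values, labels):
            pairs = sorted(zip(values, labels))
            xs = [v for v, _ in pairs]
            cuts = {(xs[i] + xs[i + 1]) / 2
                    for i in range(len(xs) - 1) if xs[i] != xs[i + 1]}

            def cond_entropy(cut):
                left = [l for v, l in pairs if v <= cut]
                right = [l for v, l in pairs if v > cut]
                n = len(pairs)
                return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)

            return min(cuts, key=cond_entropy)

        print(best_cut([1.0, 1.2, 3.1, 3.3, 5.0], ["a", "a", "b", "b", "b"]))  # 2.15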

  13. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which helps improve the accuracy of neutron transport calculations and may aid in designing high performance commercial VHTRs. Almost all safety parameters depend on the accuracy of neutron transport calculation results, which in turn depend on the accuracy of nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which allows obtaining a very precise representation of the complex VHTR geometry, including the double heterogeneity of the fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The discrepancies in keff have been successfully observed and show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.

  14. Science Driven Supercomputing Architectures: Analyzing Architectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, and to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be outside the interest of non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific applications.
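
    As a flavor of the third level (benchmarks that measure key system parameters), the sketch below times a simple streaming copy to estimate sustained memory bandwidth. It is a toy stand-in, not part of the Berkeley suite; the array size and repeat count are arbitrary choices.

    ```python
    import time
    import numpy as np

    n = 50_000_000                    # ~400 MB per array, far beyond cache
    src = np.random.rand(n)
    dst = np.empty_like(src)

    best = float("inf")
    for _ in range(5):                # keep the best of several repeats
        t0 = time.perf_counter()
        np.copyto(dst, src)           # stream n doubles: read src, write dst
        best = min(best, time.perf_counter() - t0)

    bytes_moved = 2 * n * src.itemsize
    print(f"copy bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")
    ```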

  15. Evaporator unit as a benchmark for plug and play and fault tolerant control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Vinther, Kasper; Mojallali, Hamed

    2012-01-01

    This paper presents a challenging industrial benchmark for the implementation of control strategies under realistic working conditions. The developed control strategies should perform in a plug & play manner, i.e., adapt to varying working conditions, optimize their performance, and provide fault tolerance. A fault tolerant strategy is needed to deal with a faulty sensor measurement of the evaporation pressure. The design and algorithmic challenges in the control of an evaporator include: unknown model parameters, large parameter variations, varying loads, and external discrete phenomena such as compressor switch on/off or abrupt changes in compressor speed.
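
    One common way to provide the required tolerance to a faulty evaporation-pressure sensor is a residual test against a model estimate, falling back to the estimate when the sensor disagrees persistently. The sketch below is illustrative only; the threshold, filter constant, and function names are hypothetical and not prescribed by the benchmark.

    ```python
    # Hedged sketch of residual-based sensor fault handling for the
    # evaporation-pressure measurement; all numbers are invented.
    def fault_tolerant_pressure(measured, estimated, state, threshold=0.3, alpha=0.1):
        """Return the pressure to use for control plus the updated filter state.

        measured  -- sensor reading (bar)
        estimated -- model/observer estimate of the same pressure (bar)
        state     -- low-pass filtered residual from the previous call
        """
        residual = abs(measured - estimated)
        state = (1 - alpha) * state + alpha * residual   # smooth out noise
        faulty = state > threshold                       # persistent disagreement
        return (estimated if faulty else measured), state

    state = 0.0
    for measured, estimated in [(4.1, 4.0), (4.2, 4.1), (9.9, 4.1), (9.9, 4.2)]:
        used, state = fault_tolerant_pressure(measured, estimated, state)
        print(f"measured={measured:.1f} estimated={estimated:.1f} -> control uses {used:.1f}")
    ```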

  16. Scan path entropy and Arrow plots: Capturing scanning behavior of multiple observers

    Directory of Open Access Journals (Sweden)

    Ignace T C Hooge

    2013-12-01

    Full Text Available Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to first AOI hit are often used as evaluation parameters. Here we present two additional measures: (1) scan path entropy, to quantify gaze guidance, and (2) the arrow plot, to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and the time required for 50% of the observers to first hit each AOI (T50). The new measures were tested in an eye tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure of gaze guidance, and the new visualization method reveals aspects of the average scan path and gives a better indication of the order in which global scanning takes place.
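
    To make the string-based measures concrete, the sketch below computes a transition-based scan path entropy and a row-normalized transition matrix from AOI label strings. It is a simplified reading of the measures; the paper's exact normalization (and the T50 ingredient of the arrow plot) is not reproduced.

    ```python
    import math
    from collections import Counter

    def scanpath_entropy(path):
        """Shannon entropy (bits) of the AOI transition distribution for a
        scan path given as a string of AOI labels, e.g. 'ABACAB'."""
        transitions = Counter(zip(path, path[1:]))
        total = sum(transitions.values())
        return -sum(c / total * math.log2(c / total) for c in transitions.values())

    def transition_matrix(path, aois):
        """Row-normalized AOI-to-AOI transition matrix as a nested dict."""
        counts = Counter(zip(path, path[1:]))
        matrix = {}
        for src in aois:
            row_total = sum(counts[(src, dst)] for dst in aois)
            matrix[src] = {dst: counts[(src, dst)] / row_total if row_total else 0.0
                           for dst in aois}
        return matrix

    # two observers looking at AOIs A, B, C
    print(scanpath_entropy("ABABAB"))   # lower entropy: strongly guided gaze
    print(scanpath_entropy("ABCACBA"))  # higher entropy: diffuse scanning
    print(transition_matrix("ABACAB", "ABC"))
    ```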

  17. Benchmarking on the management of radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, the practices carried out in the field of waste management at the Spanish nuclear power plants have been evaluated following the benchmarking methodology. This process has allowed the identification of aspects for improvement: optimizing waste treatment processes, reducing the volume of waste, reducing management costs, and establishing management routes for waste streams that do not yet have one. (Author)

  18. The New Weather Radar for America's Space Program in Florida: A Temperature Profile Adaptive Scan Strategy

    Science.gov (United States)

    Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.

    2009-01-01

    A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual-polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and evaluation of Lightning Launch Commit Criteria (LCC), the weather rules to avoid lightning strikes to in-flight rockets. The 0 °C to -20 °C layer is vital since most generation of electric charge occurs within it, so it is critical in evaluating Lightning LCC and in forecasting lightning; these are two of the most important duties of the 45th Weather Squadron (45 WS). While the fixed scan strategy covers most of the climatological variation of the 0 °C to -20 °C levels with high resolution, ensuring that these critical temperatures are well covered most of the time, it also means that on any particular day the radar spends precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly Interactive Data Language (IDL) computer program that will automatically generate optimized radar scan strategies that adapt to user input of the temperature profile and other important parameters. By using only the required scan angles output by the temperature profile adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality are possible, while maintaining high resolution at the critical temperature levels. The temperature profile adaptive technique will also take into account earth curvature and refraction.
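
    The core geometric step such a program must perform is mapping between elevation angle and beam height, usually with the standard 4/3-earth-radius propagation model. The sketch below inverts that relation to place a handful of tilts across the charge-generation layer; the sounding heights, range, and tilt count are invented inputs, and the real program's optimization (beam-width overlap, refraction handling, sample counts) is far more involved.

    ```python
    import math

    A_E = 4.0 / 3.0 * 6371.0  # effective earth radius (km), standard refraction

    def beam_height_km(range_km, elev_deg, radar_height_km=0.0):
        """Beam-center height above ground using the 4/3-earth-radius model."""
        theta = math.radians(elev_deg)
        return (math.sqrt(range_km**2 + A_E**2 + 2 * range_km * A_E * math.sin(theta))
                - A_E + radar_height_km)

    def elevation_for_height(target_km, range_km):
        """Elevation angle (deg) whose beam center reaches target_km at range_km."""
        # invert the height equation for sin(theta)
        s = ((target_km + A_E)**2 - range_km**2 - A_E**2) / (2 * range_km * A_E)
        return math.degrees(math.asin(s))

    # hypothetical sounding: 0 C at 4.5 km, -20 C at 7.8 km; AOI at 30 km range
    h0C, hm20C, r = 4.5, 7.8, 30.0
    low = elevation_for_height(h0C, r)
    high = elevation_for_height(hm20C, r)
    n_tilts = 5  # cover the charging layer with 5 evenly spaced tilts
    tilts = [low + i * (high - low) / (n_tilts - 1) for i in range(n_tilts)]
    print([round(t, 2) for t in tilts])
    ```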

  19. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
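
    A toy version of the wage-weighted complexity calculation illustrates the idea: medical staff minutes are weighted by the medical-to-technical wage ratio, giving a complexity value per test per site, and the median across sites becomes the test's BCU. All figures below are invented, and the program's dynamically calculated weighting is reduced to a fixed ratio for brevity.

    ```python
    from statistics import median

    wage_ratio = 65.0 / 20.0  # hypothetical medical vs technical hourly rate

    # per-site staff minutes consumed by one instance of each test:
    # {test: [(technical_min, medical_min), ... one tuple per site ...]}
    site_minutes = {
        "full blood count": [(3.0, 0.2), (2.5, 0.3), (3.5, 0.2)],
        "histopathology":   [(20.0, 15.0), (25.0, 12.0), (18.0, 18.0)],
    }

    for test, sites in site_minutes.items():
        complexities = [tech + wage_ratio * med for tech, med in sites]
        print(f"{test}: BCU = {median(complexities):.1f}")  # median across sites
    ```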

  1. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
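
    For a feel of the physics behind the spinodal decomposition problem, the sketch below advances a periodic two-dimensional Cahn-Hilliard model with a semi-implicit Fourier-spectral step. The grid, mobility, gradient-energy coefficient, free energy, and time step are arbitrary illustrative choices, not the CHiMaD/NIST problem specification.

    ```python
    import numpy as np

    N, dx, dt, steps = 128, 1.0, 0.1, 2000
    M, kappa = 1.0, 1.0                 # mobility, gradient-energy coefficient

    rng = np.random.default_rng(0)
    c = 0.5 + 0.01 * rng.standard_normal((N, N))   # near-critical composition

    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2

    c_hat = np.fft.fft2(c)
    for _ in range(steps):
        # chemical potential f'(c) for the double well f = c^2 (1 - c)^2
        mu_hat = np.fft.fft2(2.0 * c * (1.0 - c) * (1.0 - 2.0 * c))
        # explicit chemical term, implicit interface (k^4) term
        c_hat = (c_hat - dt * M * k2 * mu_hat) / (1.0 + dt * M * kappa * k2**2)
        c = np.fft.ifft2(c_hat).real

    print("composition range after coarsening:", round(c.min(), 3), round(c.max(), 3))
    ```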

  2. The Learning Organisation: Results of a Benchmarking Study.

    Science.gov (United States)

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristic of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  3. Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical-experiment benchmark data. The data are compiled into a standardized format by reviewing original and subsequently revised documentation and by calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)

  4. 7 CFR 1709.5 - Determination of energy cost benchmarks.

    Science.gov (United States)

    2010-01-01

    ... recent data available, and periodically revise, the home energy cost benchmarks and the high energy cost benchmarks ... other petroleum products, wood and other biomass fuels, coal, wind and solar energy ...

  5. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected to a number of constraints. The objective is to develop an efficient and optimal control strategy.

  6. Case mix classification and a benchmark set for surgery scheduling

    NARCIS (Netherlands)

    Leeftink, Gréanne; Hans, Erwin W.

    Numerous benchmark sets exist for combinatorial optimization problems. However, in healthcare scheduling, only a few benchmark sets are known, mainly focused on nurse rostering. One of the most studied topics in the healthcare scheduling literature is surgery scheduling, for which there is no widely

  7. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  8. Benchmark – based review as a strategy for microfinance delivery ...

    African Journals Online (AJOL)

    A.O Ejiogu, C.O Utazi. Microfinance is one of the development tools for poverty reduction. The traditional supply-led, subsidized credit delivery has led to an increase in credit disbursements. However, there is a shortage of model benchmarks and ...

  9. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    The paper gave a general overview of benchmarking and its novel application to library practice, with a view to achieving organizational change and improved performance. Based on the literature, the paper took an analytic, descriptive and qualitative overview of benchmarking practices vis-à-vis services in law libraries generally ...

  10. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fractions…

  11. Procedural restraint in children's nursing: using clinical benchmarks.

    Science.gov (United States)

    Bland, Michael; Bridge, Caroline; Cooper, Melanie; Dixon, Deborah; Hay, Lyndsey; Zerbato, Anna

    2002-08-01

    This paper will explore the use of child restraint during common clinical procedures such as venepuncture, cannulation and lumbar puncture. A lack of research, guidelines and protocols on restraining children led a group of student nurses to develop a clinical practice benchmark on procedural restraint for the North West Clinical Practice Benchmarking Group.

  12. Evaluation of an international benchmarking initiative in nine eye hospitals.

    Science.gov (United States)

    de Korne, Dirk F; Sol, Kees J C A; van Wijngaarden, Jeroen D H; van Vliet, Ellen J; Custers, Thomas; Cubbon, Mark; Spileers, Werner; Ygge, Jan; Ang, Chong-Lye; Klazinga, Niek S

    2010-01-01

    Benchmarking has become very popular among managers to improve quality in the private and public sector, but little is known about its applicability in international hospital settings. The purpose of this study was to evaluate the applicability of an international benchmarking initiative in eye hospitals. To assess the applicability, an evaluation frame was constructed on the basis of a systematic literature review. The frame was applied longitudinally to a case study of nine eye hospitals that used a set of performance indicators for benchmarking. Document analysis, nine questionnaires, and 26 semistructured interviews with stakeholders in each hospital were used for qualitative analysis. The evaluation frame consisted of four areas with key conditions for benchmarking: purposes of benchmarking, performance indicators, participating organizations, and performance management systems. This study showed that the international benchmarking between eye hospitals scarcely met these conditions. The used indicators were not incorporated in a performance management system in any of the hospitals. Despite the apparent homogeneity of the participants and the absence of competition, differences in ownership, governance structure, reimbursement, and market orientation made comparisons difficult. Benchmarking, however, stimulated learning and exchange of knowledge. It encouraged interaction and thereby learning on the tactical and operational levels, which is also an incentive to attract and motivate staff. Although international hospital benchmarking seems to be a rational process of sharing performance data, this case study showed that it is highly dependent on social processes and a learning environment. It can be useful for diagnostics, helping local hospitals to catalyze performance improvements.

  13. Evaluation of an international benchmarking initiative in nine eye hospitals

    NARCIS (Netherlands)

    de Korne, Dirk F.; Sol, Kees J. C. A.; van Wijngaarden, Jeroen D. H.; van Vliet, Ellen J.; Custers, Thomas; Cubbon, Mark; Spileers, Werner; Ygge, Jan; Ang, Chong-Lye; Klazinga, Niek S.

    2010-01-01

    BACKGROUND: Benchmarking has become very popular among managers to improve quality in the private and public sector, but little is known about its applicability in international hospital settings. PURPOSE: The purpose of this study was to evaluate the applicability of an international benchmarking

  14. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  15. A Benchmark for Online Non-Blocking Schema Transformations

    NARCIS (Netherlands)

    Wevers, L.; Hofstra, Matthijs; Tammens, Menno; Huisman, Marieke; van Keulen, Maurice

    2015-01-01

    This paper presents a benchmark for measuring the blocking behavior of schema transformations in relational database systems. As a basis for our benchmark, we have developed criteria for the functionality and performance of schema transformation mechanisms based on the characteristics of state of

  16. Presidential Address 1997--Benchmarks for the Next Millennium.

    Science.gov (United States)

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  17. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Carter, L.L.

    1980-01-01

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
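
    A naive uncollided-flux estimate shows why the problem is demanding: even before accounting for scattering, the direct beam is attenuated by dozens of mean free paths over 3 m. The cross-section value below is a rough illustrative number, and real deep-penetration answers are dominated by scattered flux, which is what the transport codes being benchmarked must capture.

    ```python
    import math

    sigma_t = 0.25   # hypothetical total macroscopic cross section of iron (1/cm)
    slab_cm = 300.0  # 3-m-thick slab

    for depth in (0.0, 50.0, 100.0, 200.0, slab_cm):
        attenuation = math.exp(-sigma_t * depth)   # uncollided fraction e^(-Sigma*x)
        print(f"depth {depth:5.0f} cm: uncollided fraction = {attenuation:.3e}")
    ```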

  18. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  1. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and

  2. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    Science.gov (United States)

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…
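
    As a rough illustration of the quantity such benchmarks aggregate, the sketch below computes a simple pre-post effect size from invented Hamilton Rating Scale scores; the study's exact estimator and any small-sample corrections may differ.

    ```python
    from statistics import mean, stdev

    pre  = [24, 28, 22, 30, 26, 25, 29, 27]   # hypothetical HRSD scores at intake
    post = [12, 15, 10, 18, 11, 14, 16, 13]   # same patients after treatment

    d = (mean(pre) - mean(post)) / stdev(pre)  # pre-treatment SD as the scaler
    print(f"pre-post effect size d = {d:.2f}")
    ```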

  3. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    Science.gov (United States)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additional studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  4. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  5. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-world domain applications such as gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e., data schema, workload and performance metrics. The benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e., domain experts) would be able to select a suitable KBS appropriate for their domain.

  6. Tomographic scanning apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    Details are presented of a tomographic scanning apparatus, its rotational assembly, and the control and circuit elements, with particular reference to the amplifier and multiplexing circuits enabling detector signal calibration. (U.K.)

  7. Tomographic Scanning Imaging Seeker

    National Research Council Canada - National Science Library

    Hovland, Harald

    2005-01-01

    .... Simulation results are provided to show the reconstruction quality. The concept, using a single pixel and a simple rotating axis scan mechanism, allows for a simple, low-cost, software-driven imaging sensor...

  8. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... which are encased in metal and plastic and most often shaped like a box, attached to a ... will I experience during and after the procedure? Most thyroid scan and thyroid uptake procedures are painless. ...

  9. Tomographic scanning apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    Details are given of a tomographic scanning apparatus, with particular reference to a multiplexer slip ring means for receiving output from the detectors and enabling interfeed to the image reconstruction station. (U.K.)

  10. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... often unattainable using other imaging procedures. For many diseases, nuclear medicine scans yield the most useful information needed to make a diagnosis or to determine appropriate treatment, if any. Nuclear ...

  11. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... energy. What are some common uses of the procedure? The thyroid scan is used ... the test. You should also drink plenty of water to help flush the radioactive material out of ...

  12. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... the body converts food to energy. What are some common uses of the procedure? The ... type of scan you are undergoing. What does the equipment look like? The special camera ...

  13. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... nuclear medicine procedures are able to pinpoint molecular activity within the body, they offer the potential to ... tells you otherwise, you may resume your normal activities after your nuclear medicine scan. If any special ...

  14. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... gamma camera heads are oriented at a 90 degree angle and placed over the patient's body. SPECT ... are scheduled for an additional procedure that same day that requires an intravenous line. Actual scanning time ...

  15. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... energy. What are some common uses of the procedure? The thyroid scan is used ... computer aids in creating the images from the data obtained by the gamma camera. A probe is ...

  16. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... eat for several hours before your exam because eating can affect the accuracy of the uptake measurement. ... often unattainable using other imaging procedures. For many diseases, nuclear medicine scans yield the most useful information ...

  17. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... often unattainable using other imaging procedures. For many diseases, nuclear medicine scans yield the most useful information needed to make a diagnosis or to determine appropriate treatment, if any. Nuclear medicine is less expensive and ...

  18. Tomographic scanning apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    This patent specification relates to a tomographic scanning apparatus using a fan beam and digital output signal, and particularly to the design of the gas-pressurized ionization detection system. (U.K.)

  19. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... abnormal was found, and should not be a cause of concern for you. If you had an intravenous ... abnormal was found, and should not be a cause of concern for you. Actual scanning time for each ...

  20. The Scanning Optical Microscope.

    Science.gov (United States)

    Sheppard, C. J. R.

    1978-01-01

    Describes the principle of the scanning optical microscope and explains its advantages over the conventional microscope in the improvement of resolution and contrast, as well as the possibility of producing a picture from optical harmonics generated within the specimen.

  1. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... evaluate changes in the gland following medication use, surgery, radiotherapy or chemotherapy. How should ... such as an x-ray or CT scan, surgeries or treatments using iodinated contrast material within the ...

  2. Scanning Auger Electron Microscope

    Data.gov (United States)

    Federal Laboratory Consortium — A JEOL model 7830F field emission source, scanning Auger microscope.Specifications / Capabilities:Ultra-high vacuum (UHV), electron gun range from 0.1 kV to 25 kV,...

  3. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... liquid or capsule form, it is typically swallowed up to 24 hours before the scan. The radiotracer given by intravenous injection is usually given up to 30 minutes prior to the test. When ...

  4. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... exam of any medications you are taking, including vitamins and herbal supplements. You should also inform them ... of scan you are undergoing. What does the equipment look like? The special camera ...

  5. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... Because nuclear medicine procedures are able to pinpoint molecular activity within the body, they offer the potential ... or imaging device that produces pictures and provides molecular information. The thyroid scan and thyroid uptake provide ...

  6. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... scan and thyroid uptake provide information about the structure and function of the thyroid. The thyroid is ... computer, create pictures offering details on both the structure and function of organs and tissues in your ...

  7. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... abnormal was found, and should not be a cause of concern for you. If you had an ... abnormal was found, and should not be a cause of concern for you. Actual scanning time for ...

  8. Pediatric CT Scans

    Science.gov (United States)

    The Radiation Epidemiology Branch and collaborators have initiated a retrospective cohort study to evaluate the relationship between radiation exposure from CT scans conducted during childhood and adolescence and the subsequent development of cancer.

  9. Lung gallium scan

    Science.gov (United States)

    ... produced by the gallium. Images display on a computer screen. During the scan, it is important that ...

  10. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... What are some common uses of the procedure? The thyroid scan is used to determine ... you are undergoing. What does the equipment look like? The special camera and imaging ...

  11. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... Uptake? A thyroid scan is a type of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) ... of thyroid function, but does not involve imaging. Nuclear medicine is a branch of medical imaging that ...

  12. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... scan and uptake uses small amounts of radioactive materials called radiotracers, a special camera and a computer ... last two months that used iodine-based contrast material. Your doctor will instruct you on how to ...

  13. RBC nuclear scan

    Science.gov (United States)

    ... scan uses small amounts of radioactive material to mark (tag) red blood cells (RBCs). Your body is ...

  14. Cervical spine CT scan

    Science.gov (United States)

    ... your provider if you take the diabetes medicine metformin (Glucophage). You may need to take extra steps ... to contrast dye Birth defect if done during pregnancy CT scans expose you to more radiation than ...

  15. Leg CT scan

    Science.gov (United States)

    ... your provider if you take the diabetes medicine metformin (Glucophage). You may need to take extra steps ... to contrast dye Birth defect if done during pregnancy CT scans expose you to more radiation than ...

  16. Shoulder CT scan

    Science.gov (United States)

    ... your provider if you take the diabetes medicine metformin (Glucophage) because you may need to take extra ... to contrast dye Birth defect if done during pregnancy CT scans do expose you to more radiation ...

  17. Knee CT scan

    Science.gov (United States)

    ... your provider if you take the diabetes medicine metformin (Glucophage). You may need to take extra steps ... to contrast dye Birth defect if done during pregnancy CT scans give off more radiation than regular ...

  18. Arm CT scan

    Science.gov (United States)

    ... your provider if you take the diabetes drug metformin (Glucophage). You may need to take special steps ... to contrast dye Birth defect if done during pregnancy CT scans expose you to more radiation than ...

  19. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... A thyroid scan is a type of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is ... thyroid function, but does not involve imaging. Nuclear medicine is a branch of medical imaging that uses ...

  20. Thyroid Scan and Uptake

    Medline Plus

    Full Text Available ... Uptake? A thyroid scan is a type of nuclear medicine imaging. The radioactive iodine uptake test (RAIU) is ... of thyroid function, but does not involve imaging. Nuclear medicine is a branch of medical imaging that uses ...

  1. Optimal stimulation settings for CMAP scan registrations

    Directory of Open Access Journals (Sweden)

    Maathuis Ellen M

    2012-06-01

    Full Text Available Background: The CMAP (Compound Muscle Action Potential) scan is a non-invasive electrodiagnostic tool which provides a quick and visual assessment of motor unit potentials as the electrophysiological components that together constitute the CMAP. The CMAP scan records the electrical activity of the muscle (CMAP) in response to transcutaneous stimulation of the motor nerve with gradual changes in stimulus intensity. Large MUs, including those that result from collateral reinnervation, appear in the CMAP scan as so-called steps, i.e., clearly visible jumps in CMAP amplitude. The CMAP scan also provides information on nerve excitability. This study aims to evaluate the influence of the stimulation protocol on the CMAP scan and its quantification. Methods: The stimulus frequency (1, 2 and 3 Hz), duration (0.05, 0.1 and 0.3 ms), or number (300, 500 and 1000 stimuli) in CMAP scans of 23 subjects was systematically varied while the other two parameters were kept constant. Pain was measured by means of a visual analogue scale (VAS). Non-parametric paired tests were used to assess significant differences in excitability and step variables and VAS scores between the different stimulus parameter settings. Results: We found no effect of stimulus frequency on CMAP scan variables or VAS scores. Stimulus duration affected excitability variables significantly, with higher stimulus intensity values for shorter stimulus durations. Step variables showed a clear trend towards increasing values with decreasing stimulus number. Conclusions: A protocol delivering 500 stimuli at a frequency of 2 Hz with a 0.1 ms pulse duration optimized CMAP scan quantification with a minimum of subject discomfort, artefact and recording duration. CMAP scan variables were influenced by stimulus duration and number; hence, these need to be standardized in future studies.
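
    A toy detector makes the notion of a step concrete: flag jumps between consecutive CMAP amplitudes (sorted by stimulus intensity) that exceed some fraction of the maximal CMAP. The amplitudes and the 25% threshold below are invented; clinically used step criteria are more subtle.

    ```python
    # CMAP amplitudes (mV) ordered by increasing stimulus intensity; invented data
    amplitudes_mV = [0.0, 0.0, 0.2, 0.3, 1.8, 1.9, 2.1, 2.2, 3.9, 4.0, 4.1]

    cmap_max = max(amplitudes_mV)
    threshold = 0.25 * cmap_max   # hypothetical: a "step" is a jump > 25% of maximal CMAP

    steps = [(i, b - a)
             for i, (a, b) in enumerate(zip(amplitudes_mV, amplitudes_mV[1:]))
             if b - a > threshold]
    for index, size in steps:
        print(f"step after stimulus {index}: {size:.1f} mV")
    ```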

  2. A CFD benchmarking exercise based on flow mixing in a T-junction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.L., E-mail: brian.smith@psi.ch [Thermal Hydraulics Laboratory, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Mahaffy, J.H. [Wheelsmith Farm, Spring Mill, PA (United States); Angele, K. [Vattenfall R and D, Älvkarleby (Sweden)

    2013-11-15

    The paper describes an international benchmarking exercise, sponsored by the OECD Nuclear Energy Agency (NEA), aimed at testing the ability of state-of-the-art computational fluid dynamics (CFD) codes to predict the important fluid flow parameters affecting high-cycle thermal fatigue induced by turbulent mixing in T-junctions. The results from numerical simulations are compared to measured data from an experiment performed at 1:2 scale by Vattenfall Research and Development, Älvkarleby, Sweden. The test data were released only at the end of the exercise making this a truly blind CFD-validation benchmark. Details of the organizational procedures, the experimental set-up and instrumentation, the different modeling approaches adopted, synthesis of results, and overall conclusions and perspectives are presented.

  3. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    Science.gov (United States)

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
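
    The remedy the article argues for is simply to enforce the bounds on every candidate before evaluation. The sketch below shows the common clamping repair inside a trivial random search on a stand-in sphere function; reflection or random reinitialization are alternative repairs, and the CEC'05 functions themselves are not reproduced here.

    ```python
    import random

    LOWER, UPPER = -100.0, 100.0   # mimics the ranges of several CEC'05 functions

    def clamp(x):
        """Repair a candidate by clamping each coordinate into the bounds."""
        return [min(max(xi, LOWER), UPPER) for xi in x]

    def sphere(x):
        """Stand-in objective; not one of the actual CEC'05 functions."""
        return sum(xi * xi for xi in x)

    random.seed(1)
    best = [random.uniform(LOWER, UPPER) for _ in range(10)]
    for _ in range(1000):
        candidate = clamp([xi + random.gauss(0.0, 5.0) for xi in best])  # enforce bounds
        if sphere(candidate) < sphere(best):
            best = candidate

    print(f"best value {sphere(best):.3f}; within bounds: "
          f"{all(LOWER <= xi <= UPPER for xi in best)}")
    ```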

  4. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for High Temperature Reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  5. Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software

    DEFF Research Database (Denmark)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter

    2017-01-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI...

  6. Possibilities and challenges of a large international benchmarking in pediatric diabetology-The SWEET experience

    DEFF Research Database (Denmark)

    Witsch, Michael; Kosteria, Ioanna; Kordonouri, Olga

    2016-01-01

    AIM: Despite the existence of evidence-based guidelines for the care of children with diabetes, widespread gaps in knowledge, attitude, and practice remain. The purpose of this paper is to present a review of benchmarking practices and results of this process within SWEET, moreover focusing on current challenges and future directions. METHODS: Biannually, members electronically transfer de-identified clinic data for 37 parameters to the SWEET database. Each center receives benchmarking and data validation reports. RESULTS: In 2015, 48 centers have contributed data for 20 165 unique patients (51... variation between centers. Fourteen centers achieve a median HbA1c ... encouraging members to deliver increasingly accurate and complete data. Dissemination of results and prospective projects serve as further motivation...

  7. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation- and hardware-dependent.
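
    The byte/flop argument can be made concrete with a simple roofline estimate: a kernel is bandwidth-bound whenever its arithmetic intensity falls below the machine's flop/byte balance point. The peak numbers and kernel intensities below are round illustrative values, not measurements of the architectures discussed.

    ```python
    peak_flops = 1.0e12   # 1 Tflop/s, hypothetical accelerator
    peak_bw = 2.0e11      # 200 GB/s memory bandwidth -> a 0.2 byte/flop machine

    def attainable_gflops(intensity_flops_per_byte):
        """Roofline: min(compute roof, bandwidth * arithmetic intensity)."""
        return min(peak_flops, peak_bw * intensity_flops_per_byte) / 1e9

    for name, intensity in [("stencil", 0.5), ("FFT", 1.7), ("FMM kernel", 10.0)]:
        print(f"{name:10s} AI={intensity:5.1f} flop/byte -> "
              f"{attainable_gflops(intensity):7.1f} Gflop/s")
    ```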

  8. Reactor physics tests and benchmark analyses of STACY

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Umano, Takuya

    1996-01-01

    The Static Experiment Critical Facility, STACY, in the Nuclear Fuel Cycle Safety Engineering Research Facility, NUCEF, is a solution-type critical facility for accumulating fundamental criticality data on uranyl nitrate solution, plutonium nitrate solution and their mixture. A series of critical experiments has been performed for 10 wt% enriched uranyl nitrate solution using a cylindrical core tank. In these experiments, systematic data on the critical height, differential reactivity of the fuel solution, kinetic parameters and reactor power were measured while changing the uranium concentration of the fuel solution from 313 gU/l to 225 gU/l. Critical data from the first series of experiments for the basic core are reported in this paper for evaluating the accuracy of criticality safety calculation codes. Benchmark calculations of the neutron multiplication factor keff for the critical condition were made using the neutron transport code TWOTRAN in the SRAC system and the continuous-energy Monte Carlo code MCNP 4A with the Japanese evaluated nuclear data library JENDL 3.2. (J.P.N.)

  9. Preliminary analysis of the proposed BN-600 benchmark core

    International Nuclear Information System (INIS)

    John, T.M.

    2000-01-01

    The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed with computer codes developed in-house or obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, such as the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc., by codes developed in-house); transport theory corrections from two-dimensional transport calculations (DOT); and treatment of irregular geometry by the Monte Carlo method (KENO). The cross-section data library used is CV2M (25 group).

  10. GPI Spectroscopy of the Mass, Age, and Metallicity Benchmark Brown Dwarf HD 4747 B

    Science.gov (United States)

    Crepp, Justin R.; Principe, David A.; Wolff, Schuyler; Giorla Godfrey, Paige A.; Rice, Emily L.; Cieza, Lucas; Pueyo, Laurent; Bechter, Eric B.; Gonzales, Erica J.

    2018-02-01

    The physical properties of brown dwarf companions found to orbit nearby, solar-type stars can be benchmarked against independent measures of their mass, age, chemical composition, and other parameters, offering insights into the evolution of substellar objects. The TRENDS high-contrast imaging survey has recently discovered a (mass/age/metallicity) benchmark brown dwarf orbiting the nearby (d = 18.69 ± 0.19 pc), G8V/K0V star HD 4747. We have acquired follow-up spectroscopic measurements of HD 4747 B using the Gemini Planet Imager to study its spectral type, effective temperature, surface gravity, and cloud properties. Observations obtained in the H band and K1 band recover the companion and reveal that it is near the L/T transition (T1 ± 2). Fitting atmospheric models to the companion spectrum, we find strong evidence for the presence of clouds. However, spectral models cannot satisfactorily fit the complete data set: while the shape of the spectrum can be well-matched in individual filters, a joint fit across the full passband results in discrepancies that are a consequence of the inherent color of the brown dwarf. We also find a 2σ tension in the companion mass, age, and surface gravity when comparing to evolutionary models. These results highlight the importance of using benchmark objects to study “secondary effects” such as metallicity, non-equilibrium chemistry, cloud parameters, electron conduction, non-adiabatic cooling, and other subtleties affecting emergent spectra. As a new L/T transition benchmark, HD 4747 B warrants further investigation into the modeling of cloud physics using higher resolution spectroscopy across a broader range of wavelengths, polarimetric observations, and continued Doppler radial velocity and astrometric monitoring.

  11. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible: the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  12. Productivity benchmarks for operative service units.

    Science.gov (United States)

    Helkiö, P; Aantaa, R; Virolainen, P; Tuominen, R

    2016-04-01

    Easily accessible, reliable information is crucial for strategic and tactical decision-making on operative processes. We report the development of an analysis tool and resulting metrics for benchmarking purposes at a Finnish university hospital. The analysis tool is based on data collected in a resource management system and an in-house cost-reporting database. The exercise reports key metrics for four operative service units and six surgical units from 2014, together with the change from 2013. Productivity, measured as total costs per total hours, ranged from 658 to 957 €/h, and utilization of the total available resource hours at the service unit level ranged from 66% to 74%. The lowest costs were in a unit running only regular working-hour shifts, whereas the highest costs were in a unit operating on a 24/7 basis. The tool includes additional metrics on operating room (OR) scheduling and monthly data to support more detailed analysis. This report provides the hospital management with an improved and detailed overview of its operative service units and the surgical process and related costs. The operating costs are associated with on-call duties, the size of the operative service units, and the requirements of the surgeries. This information aids in making mid- to long-range decisions on managing OR capacity. © 2016 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
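
    The two headline metrics reduce to simple ratios, as the sketch below shows; the unit figures are invented so as to land inside the reported ranges, and are not the hospital's data.

    ```python
    units = {
        # unit: (total_costs_eur, total_hours, available_hours) -- invented
        "day surgery":  (3_290_000, 5_000, 6_900),
        "24/7 theatre": (9_570_000, 10_000, 14_500),
    }

    for unit, (costs, hours, available) in units.items():
        productivity = costs / hours      # EUR per operating hour
        utilization = hours / available   # share of available resource hours used
        print(f"{unit}: {productivity:.0f} EUR/h, utilization {utilization:.0%}")
    ```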

  13. Hydrologic information server for benchmark precipitation dataset

    Science.gov (United States)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper presents the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data are vital for hydrologic models, hydrometeorologic studies, and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, the science of measuring and recording rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering, and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load, and then serve one of the most important hydrologic variable sets.
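    A minimal, hypothetical ingest-transform-load step in the spirit of an ODM-style observations table (the schema and names below are illustrative, not the actual CUAHSI ODM):

    ```python
    import sqlite3
    from datetime import datetime, timedelta

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE datavalues (
        site_code TEXT, variable_code TEXT, local_datetime TEXT, data_value REAL)""")

    def load_precip(site_code: str, start: datetime, hourly_mm: list) -> None:
        """Transform an hourly rainfall series into observation rows and load them."""
        rows = [(site_code, "NEXRAD_PRECIP", (start + timedelta(hours=i)).isoformat(), v)
                for i, v in enumerate(hourly_mm)]
        conn.executemany("INSERT INTO datavalues VALUES (?, ?, ?, ?)", rows)

    load_precip("TX_0042", datetime(2010, 6, 1), [0.0, 1.2, 3.4, 0.8])
    for row in conn.execute("SELECT * FROM datavalues WHERE site_code = 'TX_0042'"):
        print(row)  # rows are now queryable through a standard interface
    ```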

  14. Benchmark of systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hannaman, G.W.; Moieni, P.

    1986-01-01

    Probabilistic risk assessment (PRA) methodology has emerged as one of the most promising tools for assessing the impact of human interactions on plant safety and understanding the importance of the man/machine interface. Human interactions are considered one of the key elements in the quantification of accident sequences in a PRA, yet the approach to quantifying them in past PRAs has not been very systematic. The Electric Power Research Institute sponsored the development of the Systematic Human Action Reliability Procedure (SHARP) to aid analysts in developing a systematic approach for the evaluation and quantification of human interactions in a PRA. The SHARP process has been extensively peer reviewed and has been adopted by the Institute of Electrical and Electronics Engineers as the basis of a draft guide for the industry. By carrying out a benchmark process in which SHARP is an essential ingredient, it appears possible to assess the strengths and weaknesses of SHARP as an aid to analysts carrying out human reliability analysis as part of a PRA.

  15. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed over a significant period of time, and they have all shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions whose influence on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, the WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan, and tests performed or planned in the CABRI reactor in Cadarache, France. Emphasis was on the assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  16. LHC benchmarks from flavored gauge mediation

    Energy Technology Data Exchange (ETDEWEB)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y. [Physics Department, Technion - Israel Institute of Technology,Haifa 32000 (Israel)

    2016-07-12

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDMs are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  17. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring performance under such tests. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.
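    One way to make the proposal concrete: measure recovery time after an injected failure and fold it into the reported throughput. The hooks below (inject_failure, system_recovered) and the adjusted metric are hypothetical sketches, not part of any TPC specification.

    ```python
    import time

    def measure_recovery(inject_failure, system_recovered, poll_s: float = 0.1) -> float:
        """Seconds from failure injection until the system reports recovery."""
        inject_failure()
        t0 = time.monotonic()
        while not system_recovered():
            time.sleep(poll_s)
        return time.monotonic() - t0

    def dependability_adjusted_tpm(raw_tpm: float, recovery_s: float, window_s: float) -> float:
        """Discount raw transactions-per-minute by the fraction of the
        measurement window lost to recovery (an illustrative metric)."""
        return raw_tpm * max(0.0, 1.0 - recovery_s / window_s)

    # e.g. a 30 s recovery inside a 2 h window barely dents the score:
    print(dependability_adjusted_tpm(12_000, recovery_s=30.0, window_s=7_200.0))
    ```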

  18. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application ... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models ...
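    As a toy instance of the chapter's theme, a first-order rate constant k in C(t) = C0 * exp(-k*t) can be estimated from noisy concentration data by nonlinear least squares (the data here are synthetic, not from the chapter):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, c0, k):
        """First-order decay: concentration as a function of time."""
        return c0 * np.exp(-k * t)

    t = np.linspace(0.0, 10.0, 20)
    rng = np.random.default_rng(1)
    c_meas = model(t, 1.0, 0.35) + rng.normal(0.0, 0.02, t.size)  # true k = 0.35

    (c0_hat, k_hat), cov = curve_fit(model, t, c_meas, p0=[1.0, 0.1])
    print(f"k = {k_hat:.3f} +/- {np.sqrt(cov[1, 1]):.3f}")
    ```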

  19. Draft Mercury Aquatic Wildlife Benchmarks for Great Salt Lake ...

    Science.gov (United States)

    This document describes EPA Region 8's rationale for selecting aquatic wildlife dietary and tissue mercury benchmarks for use in interpreting available data collected from the Great Salt Lake and surrounding wetlands. EPA Region 8 has conducted a literature review to update and refine the aquatic wildlife dietary and tissue benchmarks for mercury that may be used for data assessment until water quality criteria can be derived. The document describes how aquatic wildlife dietary and tissue benchmarks for mercury have been compiled from existing literature sources and the approach for how they will be used to evaluate whether the Great Salt Lake and surrounding wetlands meet their designated use for aquatic wildlife.
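    Screening monitoring data against such benchmarks reduces to a threshold comparison; a hypothetical sketch (the benchmark values below are placeholders, not the Region 8 numbers):

    ```python
    # Hypothetical thresholds in mg/kg; the real benchmarks come from the
    # literature review described above.
    BENCHMARKS_MG_KG = {"dietary": 0.2, "tissue": 1.0}

    def screen(samples_mg_kg: dict) -> dict:
        """True means at least one sample exceeds the corresponding benchmark."""
        return {k: max(v) > BENCHMARKS_MG_KG[k] for k, v in samples_mg_kg.items()}

    print(screen({"dietary": [0.05, 0.31], "tissue": [0.6, 0.9]}))
    # {'dietary': True, 'tissue': False}
    ```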

  20. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a set-membership state estimation approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine is nonlinear. The approach checks the consistency of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The results demonstrate the effectiveness of the method for fault detection of the benchmark wind turbine.
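    A minimal sketch of the consistency test at the heart of set-membership fault detection, on a scalar toy system with bounded noise (not the paper's turbine model):

    ```python
    def predict_interval(x_lo, x_hi, a=0.95, w=0.1):
        """One-step set prediction for x' = a*x + noise with |noise| <= w."""
        return a * x_lo - w, a * x_hi + w

    def consistent(y, lo, hi, v=0.2):
        """Measurement y with bounded error |v| is consistent if
        [y - v, y + v] intersects the predicted set [lo, hi]."""
        return (y + v) >= lo and (y - v) <= hi

    x_lo, x_hi = 0.0, 1.0
    for y in [0.8, 0.7, 3.5]:  # the last measurement is faulty
        x_lo, x_hi = predict_interval(x_lo, x_hi)
        if not consistent(y, x_lo, x_hi):
            print("fault detected at measurement", y)
            break
        # otherwise contract the set with the measurement (toy intersection)
        x_lo, x_hi = max(x_lo, y - 0.2), min(x_hi, y + 0.2)
    ```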