WorldWideScience

Sample records for benchmark ultracool subdwarf

  1. Ultracool dwarf benchmarks with Gaia primaries

    Science.gov (United States)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ~24,000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a sufficiently low false-alarm probability. Following this methodology and these simulations, our initial search uses the UKIRT Infrared Deep Sky Survey and the Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, the Radial Velocity Experiment, the Large sky Area Multi-Object fibre Spectroscopic Telescope and the Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.
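
    To make the simulated-population step above concrete, the following minimal Python sketch draws a companion population uniformly over the quoted parameter ranges and applies a purely illustrative |b| > 10 deg and mass cut; the distributions and the detectability threshold are assumptions for illustration, not the simulation used by the authors.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      mass = rng.uniform(0.02, 0.10, n)          # companion mass, M_sun
      age = rng.uniform(0.1, 14.0, n)            # Gyr
      feh = rng.uniform(-2.5, 0.5, n)            # [Fe/H], dex
      sin_b = rng.uniform(-1.0, 1.0, n)          # isotropic sky positions
      gal_lat = np.degrees(np.arcsin(sin_b))     # Galactic latitude b, deg

      # Purely illustrative "identifiable" cut: off the plane and not too low-mass.
      identifiable = (np.abs(gal_lat) > 10.0) & (mass > 0.03)
      print(f"identifiable fraction: {identifiable.mean():.2f}")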

  2. Benchmark ultra-cool dwarfs in widely separated binary systems

    Directory of Open Access Journals (Sweden)

    Jones H.R.A.

    2011-07-01

    Ultra-cool dwarfs as wide companions to subgiants, giants, white dwarfs and main sequence stars can be very good benchmark objects, for which we can infer physical properties with minimal reference to theoretical models, through association with the primary stars. We have searched for benchmark ultra-cool dwarfs in widely separated binary systems using SDSS, UKIDSS, and 2MASS. We then estimate spectral types using SDSS spectroscopy and multi-band colors, place constraints on distance, and perform proper-motion calculations for all candidates that have sufficient epoch baseline coverage. Analysis of the proper motion and distance constraints shows that eight of our ultra-cool dwarfs are members of widely separated binary systems. A further L3.5 dwarf, SDSS 0832, is shown to be a companion to the bright K3 giant η Cancri. Such primaries can provide age and metallicity constraints for any companion objects, yielding excellent benchmark objects. This is the first wide ultra-cool dwarf + giant binary system identified.
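
    A common-proper-motion companionship test of the kind used in this search can be sketched as follows; the 3-sigma tolerance and the example measurements are hypothetical.

      import numpy as np

      def is_cpm_pair(pm1, pm2, sigma1, sigma2, n_sigma=3.0):
          """True if two proper-motion vectors (mas/yr) agree within n_sigma."""
          diff = np.asarray(pm1) - np.asarray(pm2)
          sigma = np.hypot(np.asarray(sigma1), np.asarray(sigma2))
          return bool(np.all(np.abs(diff) < n_sigma * sigma))

      # Hypothetical primary/secondary measurements (mu_RA*cos(dec), mu_Dec in mas/yr).
      print(is_cpm_pair((-85.0, -240.0), (-82.0, -236.0), (3.0, 3.0), (5.0, 5.0)))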

  3. New ultracool subdwarfs identified in large-scale surveys using Virtual Observatory tools. I. UKIDSS LAS DR5 vs. SDSS DR7

    Science.gov (United States)

    Lodieu, N.; Espinoza Contreras, M.; Zapatero Osorio, M. R.; Solano, E.; Aberasturi, M.; Martín, E. L.

    2012-06-01

    Aims: The aim of the project is to improve our knowledge of the low-mass and low-metallicity population in order to investigate the influence of metallicity on the stellar (and substellar) mass function. Methods: We present the results of a photometric and proper motion search aimed at discovering ultracool subdwarfs in large-scale surveys. We employed and combined the Fifth Data Release (DR5) of the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS) and the Sloan Digital Sky Survey (SDSS) Data Release 7, complemented with ancillary data from the Two Micron All-Sky Survey (2MASS), the DEep Near-Infrared Survey (DENIS) and the SuperCOSMOS Sky Surveys (SSS). Results: The SDSS DR7 vs. UKIDSS LAS DR5 search returned a total of 32 ultracool subdwarf candidates, only two of which are recognised as subdwarfs in the literature. Twenty-seven candidates, including the two known ones, were followed up spectroscopically in the optical between 600 and 1000 nm, thus covering strong spectral features indicative of low metallicity (e.g., CaH): 21 with the Very Large Telescope, one with the Nordic Optical Telescope, and five extracted from the Sloan spectroscopic database, in order to confirm (or refute) their low metal content. We confirm 20 candidates as subdwarfs, extreme subdwarfs, or ultra-subdwarfs with spectral types later than M5; this represents a success rate of ≥60%. Among those 20 new subdwarfs, we identify two early-L subdwarfs that are very likely located within 100 pc, which we propose as templates for future searches because they are the first examples of their subclass. Another seven sources are solar-metallicity M dwarfs with spectral types between M4 and M7 without Hα emission, suggesting that they are old M dwarfs. The remaining five candidates do not yet have spectroscopic follow-up; only one remains a bona fide ultracool subdwarf candidate after revision of the proper motions. We assigned spectral types based on the current classification schemes and, when …

  4. Ultracool Subdwarfs: Metal-poor Stars and Brown Dwarfs Extending into the Late-type M, L and T Dwarf Regimes

    OpenAIRE

    Burgasser, Adam J.; Kirkpatrick, J. Davy; Lepine, Sebastien

    2004-01-01

    Recent discoveries from red optical proper motion and wide-field near-infrared surveys have uncovered a new population of ultracool subdwarfs -- metal-poor stars and brown dwarfs extending into the late-type M, L and possibly T spectral classes. These objects are among the first low-mass stars and brown dwarfs formed in the Galaxy, and are valuable tracers of metallicity effects in low-temperature atmospheres. Here we review the spectral, photometric, and kinematic properties of recent discov...

  5. Searching for benchmark systems containing ultra-cool dwarfs and white dwarfs

    Directory of Open Access Journals (Sweden)

    Pinfield D.J.

    2013-04-01

    We have used the 2MASS all-sky survey and WISE to look for ultracool dwarfs that are part of multiple systems containing main-sequence stars. We cross-matched L dwarf candidates from the surveys with Hipparcos and Gliese stars, finding two new systems. We consider the binary fraction for L dwarfs and main-sequence stars, and further assess possible unresolved multiplicity within the full companion sample. This analysis shows that some of the L dwarfs in this sample might actually be unresolved binaries themselves. We have also identified a sample of common proper motion systems in which a main-sequence star has a white dwarf as a wide companion. These systems can help explore key issues in stellar evolution theory, such as the initial-final mass relationship of white dwarfs or the chromospheric activity-age relationship for stars still on the main sequence. Spectroscopy was obtained for 50 white dwarf candidates selected from the SuperCOSMOS Science Archive. We have also observed six of the main-sequence companions and estimated their effective temperatures, rotational and microturbulent velocities, and metallicities.

  6. UNIFORM ATMOSPHERIC RETRIEVAL ANALYSIS OF ULTRACOOL DWARFS. I. CHARACTERIZING BENCHMARKS, Gl 570D AND HD 3651B

    Energy Technology Data Exchange (ETDEWEB)

    Line, Michael R.; Fortney, Jonathan J. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Teske, Johanna [Carnegie DTM, 5241 Broad Branch Road, NW, Washington, DC 20015 (United States); Burningham, Ben; Marley, Mark S., E-mail: mrline@ucsc.edu [NASA Ames Research Center, Mail Stop 245-3, Moffett Field, CA 94035 (United States)

    2015-07-10

    Interpreting the spectra of brown dwarfs is key to determining the fundamental physical and chemical processes occurring in their atmospheres. Powerful Bayesian atmospheric retrieval tools have recently been applied to both exoplanet and brown dwarf spectra to tease out the thermal structures and molecular abundances to understand those processes. In this manuscript we develop a significantly upgraded retrieval method and apply it to the SpeX spectral library data of two benchmark late T dwarfs, Gl 570D and HD 3651B, to establish the validity of our upgraded forward model parameterization and Bayesian estimator. Our retrieved metallicities, gravities, and effective temperatures are consistent with the metallicity and presumed ages of the systems. We add the carbon-to-oxygen ratio as a new dimension to benchmark systems and find good agreement between carbon-to-oxygen ratios derived in the brown dwarfs and the host stars. Furthermore, we have for the first time unambiguously determined the presence of ammonia in the low-resolution spectra of these two late T dwarfs. We also show that the retrieved results are not significantly impacted by the possible presence of clouds, though some quantities are significantly impacted by uncertainties in photometry. This investigation represents a watershed study in establishing the utility of atmospheric retrieval approaches on brown dwarf spectra.
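
    The Bayesian retrieval machinery referred to above can be illustrated with a toy Metropolis-Hastings fit; the forward model here is a two-parameter placeholder curve rather than a radiative-transfer model, so this is only a sketch of the general approach.

      import numpy as np

      rng = np.random.default_rng(0)
      wave = np.linspace(1.0, 2.5, 60)                 # microns (toy grid)
      truth = 1.3 * np.exp(-0.8 * wave)                # toy "true" spectrum
      obs = truth + rng.normal(0, 0.02, wave.size)     # noisy synthetic data

      def log_like(theta):
          """Gaussian log-likelihood for the placeholder forward model."""
          amp, slope = theta
          model = amp * np.exp(-slope * wave)
          return -0.5 * np.sum(((obs - model) / 0.02) ** 2)

      theta = np.array([1.0, 1.0])                     # starting guess
      chain = []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.02, 2)        # random-walk proposal
          if np.log(rng.random()) < log_like(prop) - log_like(theta):
              theta = prop                             # accept
          chain.append(theta.copy())

      print("posterior mean:", np.mean(chain[1000:], axis=0))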

  7. The Brown Dwarf Kinematics Project (BDKP). III. Parallaxes for 70 Ultracool Dwarfs

    Science.gov (United States)

    2012-06-10

    [Only table fragments from the paper are preserved in this record: a key to sample designations (low surface gravity dwarf; Cal, calibrator ultracool dwarf; SD, ultracool subdwarf; B, tight binary unresolved in 2MASS), a note that 2MASS photometry of reference stars was obtained following the procedure of Vrba et al. (2004) and compared with intrinsic colors, and partial astrometric entries such as 2MASS J0746+2000. The full abstract appears in a later record below.]

  8. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  9. A search for southern ultracool dwarfs in young moving groups

    Directory of Open Access Journals (Sweden)

    Deacon N.R.

    2011-07-01

    We have constructed an 800-strong red-object catalogue by cross-referencing optical and infrared catalogues with an extensive proper motion catalogue compiled for red objects in the southern sky. We have applied astrometric and photometric constraints to the catalogue in order to select ultracool dwarf moving-group candidates; 132 objects were found to be candidate members of moving groups. From this candidate list we present initial results. Using spectroscopy we have obtained reliable spectral types and space motions, and by association with moving groups we can infer an age and composition. The further study of the remainder of our candidates will provide a large sample of young brown dwarfs, and confirmed members will provide benchmark ultracool dwarfs. These will make suitable targets for AO planet searches.

  10. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture, a culture that is also destined to break into the accounting area, not only by strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also by questioning one of the principles (or taboos) of the accounting or...

  11. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.

  12. A FEROS Survey of Hot Subdwarf Stars

    Science.gov (United States)

    Vennes, Stéphane; Németh, Péter; Kawka, Adela

    2018-02-01

    We have completed a survey of twenty-two ultraviolet-selected hot subdwarfs using the Fiber-fed Extended Range Optical Spectrograph (FEROS) and the 2.2-m telescope at La Silla. The sample includes apparently single objects as well as hot subdwarfs paired with a bright, unresolved companion. The sample was extracted from our GALEX catalogue of hot subdwarf stars. We identified three new short-period systems (P = 3.5 hours to 5 days) and determined the orbital parameters of a long-period (P = 62.66 d) sdO plus G III system. This particular system should evolve into a close double-degenerate system following a second common-envelope phase. We also conducted a chemical abundance study of the subdwarfs: some objects show nitrogen and argon abundance excesses with respect to oxygen. We present key results of this programme.

  13. Ultraviolet colors of subdwarf O stars

    International Nuclear Information System (INIS)

    Wesselius, P.R.

    1978-01-01

    The group of subdwarf O stars, consisting of field stars and some central stars of old planetary nebulae, occupies an interesting place in the HR diagram. Greenstein and Sargent (1974) tried to establish this place, and concluded that especially the hottest ones need ultraviolet data to improve the values of effective temperature and absolute luminosity. The author therefore observed some twenty sdO stars in the far ultraviolet using the spectrophotometer on the Netherlands' satellite ANS. (Auth.)

  14. Uniform Atmospheric Retrievals of Ultracool Late-T and Early-Y dwarfs

    Science.gov (United States)

    Garland, Ryan; Irwin, Patrick

    2018-01-01

    A significant number of ultracool brown dwarfs have now been discovered. By analysing these types of objects with a uniform retrieval method, we hope to elucidate any trends and (dis)similarities found in atmospheric parameters, such as chemical abundances, the temperature-pressure profile, and cloud structure, for a sample of 7 ultracool brown dwarfs as we transition from hotter (~700 K) to colder (~450 K) objects. We perform atmospheric retrievals on two late-T and five early-Y dwarfs. We use the NEMESIS atmospheric retrieval code coupled to a Nested Sampling algorithm, along with a standard uniform model for all of our retrievals. The uniform model assumes the atmosphere is described by a gray radiative-convective temperature profile, (optionally) a self-consistent Mie scattering cloud, and a number of relevant gases. We first verify our method by comparing it to a benchmark retrieval for Gliese 570D, with which it is found to be consistent. Furthermore, we present the retrieved gaseous composition, temperature structure, spectroscopic mass and radius, and cloud structure, and the trends associated with decreasing temperature found in this small sample of objects.

  15. THE NIRSPEC ULTRACOOL DWARF RADIAL VELOCITY SURVEY

    International Nuclear Information System (INIS)

    Blake, Cullen H.; Charbonneau, David; White, Russel J.

    2010-01-01

    We report the results of an infrared Doppler survey designed to detect brown dwarf and giant planetary companions to a magnitude-limited sample of ultracool dwarfs. Using the NIRSPEC spectrograph on the Keck II telescope, we obtained approximately 600 radial velocity (RV) measurements over a period of six years of a sample of 59 late-M and L dwarfs spanning spectral types M8/L0 to L6. A subsample of 46 of our targets has been observed on three or more epochs. We rely on telluric CH4 absorption features in Earth's atmosphere as a simultaneous wavelength reference and exploit the rich set of CO absorption features found in the K-band spectra of cool stars and brown dwarfs to measure RVs and projected rotational velocities. For a bright, slowly rotating M dwarf standard we demonstrate an RV precision of 50 m s⁻¹, and for slowly rotating L dwarfs we achieve a typical RV precision of approximately 200 m s⁻¹. This precision is sufficient for the detection of close-in giant planetary companions to mid-L dwarfs as well as more equal-mass spectroscopic binary systems with small separations. We estimate an age for our sample similar to that of nearby Sun-like stars. We simulate the efficiency with which we detect spectroscopic binaries and find that the rate of tight binaries is consistent with recent estimates in the literature of a tight binary fraction of 3%-4%.
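
    The cross-correlation idea behind such RV measurements can be sketched with a synthetic single-line spectrum; the line parameters, wavelength grid, and injected velocity below are toys, not NIRSPEC data.

      import numpy as np

      c_kms = 299792.458
      wave = np.linspace(2.290, 2.310, 2000)          # microns, toy K-band grid

      def toy_spectrum(rv_kms):
          """Gaussian absorption line at 2.300 microns, Doppler-shifted by rv."""
          center = 2.300 * (1 + rv_kms / c_kms)
          return 1.0 - 0.3 * np.exp(-0.5 * ((wave - center) / 2e-4) ** 2)

      observed = toy_spectrum(1.2)                    # "data" with injected RV of 1.2 km/s
      rv_grid = np.linspace(-5.0, 5.0, 401)           # trial velocities, km/s

      # Cross-correlate line depths against the shifted template at each trial RV.
      cc = [np.sum((1 - observed) * (1 - toy_spectrum(rv))) for rv in rv_grid]
      print(f"recovered RV = {rv_grid[int(np.argmax(cc))]:.2f} km/s")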

  16. Radio-flaring Ultracool Dwarf Population Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Route, Matthew, E-mail: mroute@purdue.edu [Department of Astronomy and Astrophysics, the Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States)

    2017-08-10

    Over a dozen ultracool dwarfs (UCDs), low-mass objects of spectral types ≥M7, are known to be sources of radio flares. These typically several-minutes-long radio bursts can be up to 100% circularly polarized and have high brightness temperatures, consistent with coherent emission via the electron cyclotron maser operating in approximately kilogauss magnetic fields. Recently, the statistical properties of the bulk physical parameters that describe these UCDs have become sufficiently well characterized to permit a synthesis of the population of radio-flaring objects. For the first time, I construct a Monte Carlo simulator to model the population of these radio-flaring UCDs. This simulator is powered by Intel Secure Key (ISK), a new processor technology that uses a local entropy source to improve random number generation and that has heretofore been used mainly to strengthen cryptography. The results from this simulator indicate that only ∼5% of radio-flaring UCDs within the local interstellar neighborhood (<25 pc away) have been discovered. I discuss a number of scenarios that may explain this radio-flaring fraction and suggest that the observed behavior is likely the result of several factors. The performance of ISK as compared to other pseudorandom number generators is also evaluated, and its potential utility for other astrophysical codes is briefly described.
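
    A stripped-down version of such a Monte Carlo population synthesis might look like the following; the assumed space density, luminosity distribution, and survey sensitivity are placeholders, and NumPy's default generator stands in for the hardware entropy source discussed above.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sources = 5000

      r_pc = 25.0 * rng.random(n_sources) ** (1 / 3)       # uniform in volume, < 25 pc
      lum = 10 ** rng.normal(13.5, 0.5, n_sources)          # erg/s/Hz, hypothetical
      flux_cgs = lum / (4 * np.pi * (r_pc * 3.086e18) ** 2) # erg/s/cm^2/Hz
      flux_mJy = flux_cgs / 1e-26                           # 1 mJy = 1e-26 cgs

      detected = flux_mJy > 0.1                             # hypothetical survey limit
      print(f"detected fraction: {detected.mean():.2%}")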

  17. X-ray emission from hot subdwarfs with compact companions

    Directory of Open Access Journals (Sweden)

    Esposito P.

    2013-03-01

    We review the X-ray observations of hot subdwarf stars. While no X-ray emission has been detected yet from binaries containing B-type subdwarfs, interesting results have been obtained in the case of the two luminous O-type subdwarfs HD 49798 and BD +37° 442. Both are members of binary systems in which the X-ray luminosity is powered by accretion onto a compact object: a rapidly spinning (13.2 s) and massive (1.28 M⊙) white dwarf in the case of HD 49798, and most likely a neutron star, spinning at 19.2 s, in the case of BD +37° 442. Their study can shed light on the poorly known processes taking place during common-envelope evolutionary phases and on the properties of wind mass loss from hot subdwarfs.

  18. Subdwarf ultraviolet excesses and metal abundances

    International Nuclear Information System (INIS)

    Carney, B.W.

    1979-01-01

    The relation between stellar ultraviolet excesses and abundances is reexamined with the aid of new data, and an investigation is made of the accuracy of previous abundance analyses. A high-resolution echellogram of the subdwarf HD 201891 is analyzed to illustrate some of the problems. Generally, the earliest and latest analytical techniques yield consistent results for dwarfs. New UBV data yield normalized ultraviolet excesses, δ(U−B)0.6, which are compared to abundances to produce a graphical relation that may be used to estimate [Fe/H] to ±0.2 dex, given UBV colors accurate to ±0.01 mag. The relation suggests a possible discontinuity between the halo and old-disk stars
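
    In practice, a graphical δ(U−B)0.6 to [Fe/H] relation of this kind would be applied by interpolation, as in the sketch below; the calibration nodes are invented placeholders, not the values derived in this paper.

      import numpy as np

      # Hypothetical calibration nodes: UV excess (mag) vs. [Fe/H] (dex).
      delta_ub_nodes = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25])
      feh_nodes = np.array([0.0, -0.4, -0.9, -1.4, -1.9, -2.4])

      def feh_from_uv_excess(delta_ub):
          """Estimate [Fe/H] by linear interpolation on the calibration nodes."""
          return np.interp(delta_ub, delta_ub_nodes, feh_nodes)

      print(feh_from_uv_excess(0.12))   # roughly -1.1 dex on this toy relation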

  19. Wide cool and ultracool companions to nearby stars from Pan-STARRS 1

    International Nuclear Information System (INIS)

    Deacon, Niall R.; Liu, Michael C.; Magnier, Eugene A.; Aller, Kimberly M.; Best, William M. J.; Bowler, Brendan P.; Burgett, William S.; Chambers, Kenneth C.; Flewelling, H.; Kaiser, Nick; Kudritzki, Rolf-Peter; Morgan, Jeff S.; Tonry, John L.; Dupuy, Trent; Mann, Andrew W.; Redstone, Joshua A.; Draper, Peter W.; Metcalfe, Nigel; Hodapp, Klaus W.; Price, Paul A.

    2014-01-01

    We present the discovery of 57 wide (>5″) separation, low-mass (stellar and substellar) companions to stars in the solar neighborhood identified from Pan-STARRS 1 (PS1) data, and the spectral classification of 31 previously known companions. Our companions represent a selective subsample of promising candidates and span a range in spectral type of K7-L9, with the addition of one DA white dwarf. These were identified primarily from a dedicated common proper motion search around nearby stars, along with a few serendipitous discoveries from our Pan-STARRS 1 brown dwarf search. Our discoveries include 23 new L dwarf companions and one known L dwarf not previously identified as a companion. The primary stars around which we searched for companions come from a list of bright stars with well-measured parallaxes and large proper motions from the Hipparcos catalog (8583 stars, mostly A-K dwarfs) and fainter stars from other proper motion catalogs (79170 stars, mostly M dwarfs). We examine the likelihood that our companions are chance alignments between unrelated stars and conclude that this is unlikely for the majority of the objects that we have followed up spectroscopically. We also examine the entire population of ultracool (>M7) dwarf companions and conclude that while some are loosely bound, most are unlikely to be disrupted over the course of ∼10 Gyr. Our search increases the number of ultracool M dwarf companions wider than 300 AU by 88% and increases the number of L dwarf companions in the same separation range by 82%. Finally, we resolve our new L dwarf companion to HIP 6407 into a tight (0″.13, 7.4 AU) L1+T3 binary, making the system a hierarchical triple. Our search for these key benchmarks, against which brown dwarf and exoplanet atmosphere models are tested, has yielded the largest number of discoveries to date.
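
    The chance-alignment assessment mentioned above can be approximated with simple Poisson statistics over the local density of kinematically matching interlopers; the density and search radius in this sketch are illustrative, not the paper's values.

      import numpy as np

      density_per_sq_deg = 0.05        # matching interlopers per deg^2 (hypothetical)
      search_radius_arcsec = 300.0     # hypothetical companion search radius

      area_sq_deg = np.pi * (search_radius_arcsec / 3600.0) ** 2
      expected = density_per_sq_deg * area_sq_deg    # expected interlopers in the aperture
      p_chance = 1.0 - np.exp(-expected)             # P(at least one chance alignment)
      print(f"P(chance alignment) ~ {p_chance:.2e}")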

  20. THE BROWN DWARF KINEMATICS PROJECT (BDKP). III. PARALLAXES FOR 70 ULTRACOOL DWARFS

    International Nuclear Information System (INIS)

    Faherty, Jacqueline K.; Shara, Michael M.; Cruz, Kelle L.; Burgasser, Adam J.; Walter, Frederick M.; Van der Bliek, Nicole; West, Andrew A.; Vrba, Frederick J.; Anglada-Escudé, Guillem

    2012-01-01

    We report parallax measurements for 70 ultracool dwarfs (UCDs) including 11 late-M, 32 L, and 27 T dwarfs. In this sample, 14 M and L dwarfs exhibit low surface gravity features, 6 are close binary systems, and 2 are metal-poor subdwarfs. We combined our new measurements with 114 previously published UCD parallaxes and optical-mid-IR photometry to examine trends in spectral-type/absolute magnitude and color-color diagrams. We report new polynomial relations between spectral type and M_J, M_H, and M_K. Including resolved L/T transition binaries in the relations, we find no reason to differentiate between a 'bright' (unresolved binary) and a 'faint' (single source) sample across the L/T boundary. Isolating early T dwarfs, we find that the brightening of T0-T4 sources is prominent in M_J, where there is a [1.2-1.4] mag difference. A similar yet dampened brightening of [0.3-0.5] mag happens in M_H, and a plateau or dimming of [-0.2 to -0.3] mag is seen in M_K. Comparison with evolutionary models that vary gravity, metallicity, and cloud thickness verifies that for L into T dwarfs, decreasing cloud thickness reproduces brown dwarf near-IR color-magnitude diagrams. However, we find that a near-constant temperature of 1200 ± 100 K along a narrow spectral subtype range of T0-T4 is required to account for the brightening and color-magnitude diagram of the L-dwarf/T-dwarf transition. There is a significant population of both L and T dwarfs which are red, or potentially 'ultra-cloudy', compared to the models, many of which are known to be young, indicating a correlation between enhanced photospheric dust and youth. For the low surface gravity or young companion L dwarfs, we find that 8 out of 10 are at least [0.2-1.0] mag underluminous in M_JH and/or M_K compared to equivalent spectral type objects. We speculate that this is a consequence of increased dust opacity and conclude that low surface gravity L dwarfs require a completely new spectral-type/absolute magnitude polynomial for analysis.
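
    Polynomial spectral-type/absolute-magnitude relations of this kind are typically low-order least-squares fits; the sketch below uses invented data points purely to show the mechanics.

      import numpy as np

      # Toy numeric spectral types (M8=8, L0=10, T0=20) and toy absolute J magnitudes.
      spt = np.array([8, 10, 12, 14, 16, 18, 20, 22, 24])
      m_j = np.array([11.1, 11.8, 12.6, 13.3, 14.1, 14.6, 14.4, 14.8, 15.6])

      coeffs = np.polyfit(spt, m_j, deg=3)       # low-order polynomial relation
      predict = np.poly1d(coeffs)
      print(f"M_J at L5 (spt=15): {predict(15):.2f}")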

  1. THE BROWN DWARF KINEMATICS PROJECT (BDKP). III. PARALLAXES FOR 70 ULTRACOOL DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Faherty, Jacqueline K.; Shara, Michael M.; Cruz, Kelle L. [Department of Astrophysics, American Museum of Natural History, Central Park West at 79th Street, New York, NY 10034 (United States); Burgasser, Adam J. [Center of Astrophysics and Space Sciences, Department of Physics, University of California, San Diego, CA 92093 (United States); Walter, Frederick M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Van der Bliek, Nicole [CTIO/National Optical Astronomy Observatory (Chile); West, Andrew A. [Department of Astronomy, Boston University, 725 Commonwealth Ave Boston, MA 02215 (United States); Vrba, Frederick J. [US Naval Observatory, Flagstaff Station, P.O. Box 1149, Flagstaff, AZ 86002 (United States); Anglada-Escude, Guillem, E-mail: jfaherty@amnh.org [Department of Terrestrial Magnetism, Carnegie Institution of Washington 5241 Broad Branch Road, NW, Washington, DC 20015 (United States)

    2012-06-10

    We report parallax measurements for 70 ultracool dwarfs (UCDs) including 11 late-M, 32 L, and 27 T dwarfs. In this sample, 14 M and L dwarfs exhibit low surface gravity features, 6 are close binary systems, and 2 are metal-poor subdwarfs. We combined our new measurements with 114 previously published UCD parallaxes and optical-mid-IR photometry to examine trends in spectral-type/absolute magnitude and color-color diagrams. We report new polynomial relations between spectral type and M_J, M_H, and M_K. Including resolved L/T transition binaries in the relations, we find no reason to differentiate between a 'bright' (unresolved binary) and a 'faint' (single source) sample across the L/T boundary. Isolating early T dwarfs, we find that the brightening of T0-T4 sources is prominent in M_J, where there is a [1.2-1.4] mag difference. A similar yet dampened brightening of [0.3-0.5] mag happens in M_H, and a plateau or dimming of [-0.2 to -0.3] mag is seen in M_K. Comparison with evolutionary models that vary gravity, metallicity, and cloud thickness verifies that for L into T dwarfs, decreasing cloud thickness reproduces brown dwarf near-IR color-magnitude diagrams. However, we find that a near-constant temperature of 1200 ± 100 K along a narrow spectral subtype range of T0-T4 is required to account for the brightening and color-magnitude diagram of the L-dwarf/T-dwarf transition. There is a significant population of both L and T dwarfs which are red, or potentially 'ultra-cloudy', compared to the models, many of which are known to be young, indicating a correlation between enhanced photospheric dust and youth. For the low surface gravity or young companion L dwarfs, we find that 8 out of 10 are at least [0.2-1.0] mag underluminous in M_JH and/or M_K compared to equivalent spectral type objects. We speculate that this is a consequence of increased dust opacity and conclude that low surface gravity L dwarfs require a completely new …

  2. A new sample of cool subdwarfs from SDSS: properties and kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Savcheva, Antonia S. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); West, Andrew A. [Astronomy Department, Boston University, 725 Commonwealth Avenue, Boston, MA 02215 (United States); Bochanski, John J., E-mail: asavcheva@cfa.harvard.edu [Department of Physics and Astronomy, Haverford College, 370 Lancaster Avenue, Haverford, PA 19041 (United States)

    2014-10-20

    We present a new sample of M subdwarfs compiled from the seventh data release of the Sloan Digital Sky Survey. With 3517 new subdwarfs, this sample significantly increases the number of spectroscopically confirmed low-mass subdwarfs. The catalog also includes 905 extreme and 534 ultra subdwarfs. We present the entire catalog, including observed and derived quantities, and template spectra created from co-added subdwarf spectra. We show color-color and reduced proper motion diagrams of the three metallicity classes, which are shown to separate from the disk dwarf population. The extreme and ultra subdwarfs are seen at larger values of reduced proper motion, as expected for more dynamically heated populations. We determine 3D kinematics for all of the stars with proper motions. The color-magnitude diagrams show a clear separation of the three metallicity classes, with the ultra and extreme subdwarfs being significantly closer to the main sequence than the ordinary subdwarfs. All subdwarfs lie below (fainter) and to the left (bluer) of the main sequence. Based on the average (U, V, W) velocities and their dispersions, the extreme and ultra subdwarfs likely belong to the Galactic halo, while the ordinary subdwarfs are likely part of the old Galactic (or thick) disk. An extensive activity analysis of the subdwarfs is performed using Hα emission, and 208 active subdwarfs are found. We show that while the activity fraction of subdwarfs rises with spectral class and levels off at the latest spectral classes, consistent with the behavior of M dwarfs, the activity fraction of the extreme and ultra subdwarfs is essentially flat.
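
    Reduced proper motion, the quantity behind the diagrams mentioned above, combines apparent magnitude and proper motion as H = m + 5 log10(μ[arcsec/yr]) + 5; the sketch below applies it to a hypothetical star.

      import numpy as np

      def reduced_proper_motion(mag, pm_mas_yr):
          """H = m + 5*log10(mu[arcsec/yr]) + 5, a distance-free proxy for M + kinematics."""
          return mag + 5.0 * np.log10(pm_mas_yr / 1000.0) + 5.0

      print(reduced_proper_motion(mag=18.2, pm_mas_yr=450.0))  # hypothetical star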

  3. CCD Parallaxes for 309 Late-type Dwarfs and Subdwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Dahn, Conard C.; Harris, Hugh C.; Subasavage, John P.; Ables, Harold D.; Guetter, Harry H.; Harris, Fred H.; Luginbuhl, Christian B.; Monet, Alice B.; Monet, David G.; Munn, Jeffrey A.; Pier, Jeffrey R.; Stone, Ronald C.; Vrba, Frederick J.; Walker, Richard L.; Tilleman, Trudy M. [US Naval Observatory, Flagstaff Station, 10391 W. Naval Observatory Road, Flagstaff, AZ 86005-8521 (United States); Canzian, Blaise J. [L-3 Communications/Brashear, 615 Epsilon Drive, Pittsburgh, PA 15238-2807 (United States); Henden, Arne H. [AAVSO, Cambridge, MA 02138 (United States); Leggett, S. K. [Gemini Observatory, Northern Operations Center, 670 N. A’ohoku Place, Hilo, HI 96720 (United States); Levine, Stephen E., E-mail: jsubasavage@nofs.navy.mil [Lowell Observatory, 1400 W. Mars Hill Road, Flagstaff, AZ 86001-4499 (United States)

    2017-10-01

    New, updated, and/or revised CCD parallaxes determined with the Strand Astrometric Reflector at the Naval Observatory Flagstaff Station are presented. Included are results for 309 late-type dwarf and subdwarf stars observed over the 30+ years that the program operated. For 124 of the stars, parallax determinations from other investigators have already appeared in the literature and we compare the different results. Also included here is new or updated VI photometry on the Johnson-Kron-Cousins system for all but a few of the faintest targets. Together with 2MASS JHK_s near-infrared photometry, a sample of absolute magnitude versus color and color versus color diagrams is constructed. Because large proper motion was a prime criterion for targeting the stars, the majority turn out to be either M-type subdwarfs or late M-type dwarfs. The sample also includes 50 dwarf or subdwarf L-type stars and four T dwarfs. Possible halo subdwarfs are identified in the sample based on tangential velocity, subluminosity, and spectral type. Residuals from the solutions for parallax and proper motion for several stars show evidence of astrometric perturbations.
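
    At its core, a parallax solution of this kind is a linear least-squares fit of a zero point, a proper motion, and a parallax to multi-epoch positions; the one-coordinate sketch below uses synthetic epochs and toy parallax factors and omits the reference-frame and refraction corrections a real reduction requires.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 4.0, 40)                   # years of observation
      pfac = np.sin(2 * np.pi * t)                    # toy parallax factors

      true_pm, true_plx = 310.0, 42.0                 # mas/yr, mas (invented)
      xi = true_pm * t + true_plx * pfac + rng.normal(0, 3.0, t.size)   # positions, mas

      # Design matrix: [zero point, proper motion * t, parallax * factor].
      A = np.column_stack([np.ones_like(t), t, pfac])
      zero, pm_fit, plx_fit = np.linalg.lstsq(A, xi, rcond=None)[0]
      print(f"pm = {pm_fit:.1f} mas/yr, parallax = {plx_fit:.1f} mas")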

  4. Photometry, Astrometry, and Discoveries of Ultracool Dwarfs in the Pan-STARRS 3π Survey

    Science.gov (United States)

    Best, William M. J.; Magnier, Eugene A.; Liu, Michael C.; Deacon, Niall; Aller, Kimberly; Zhang, Zhoujian; Pan-STARRS1 Builders

    2018-01-01

    The Pan-STARRS1 3π Survey (PS1)'s far-red optical sensitivity makes it an exceptional new resource for discovering and characterizing ultracool dwarfs. We present a PS1-based catalog of photometry and proper motions of nearly 10,000 M, L, and T dwarfs, along with our analysis of the kinematics of nearby M6-T9 dwarfs, building a comprehensive picture of the local ultracool population. We highlight some especially interesting ultracool discoveries made with PS1, including brown dwarfs with spectral types in the enigmatic L/T transition, wide companions to main sequence stars that serve as age and metallicity benchmarks for substellar models, and free-floating members of the nearby young moving groups and star-forming regions with masses down to ≈5 M_Jup. With its public release, PS1 will continue to be a vital tool for studying the ultracool population.

  5. Ages of white dwarf-red subdwarf systems

    Directory of Open Access Journals (Sweden)

    Hektor Monteiro

    2006-01-01

    We provide the first age estimates for two recently discovered white dwarf-red subdwarf systems, LHS 193AB and LHS 300AB. These systems provide a new opportunity for linking the reliable age estimates for the white dwarfs to the (measurable) metallicities of the red subdwarfs. We have obtained precise photometry in the V_J, R_KC, I_KC, J, and H bands and spectroscopy covering 6,000 Å to 9,000 Å (our spectral coverage) for the two new systems, as well as for a comparison white dwarf-main sequence red dwarf system, GJ 283 AB. Using model grids, we estimate the cooling age as well as the temperature, surface gravity, mass, progenitor mass, and total lifetime of the white dwarfs. The results indicate that the two new systems are probably ancient thick-disk objects with ages of at least 6-9 gigayears (Gyr).

  6. THE HAWAII INFRARED PARALLAX PROGRAM. II. YOUNG ULTRACOOL FIELD DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Michael C. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Dupuy, Trent J. [The University of Texas at Austin, Department of Astronomy, 2515 Speedway C1400, Austin, TX 78712 (United States); Allers, Katelyn N., E-mail: mliu@ifa.hawaii.edu [Department of Physics and Astronomy, Bucknell University, Lewisburg, PA 17837 (United States)

    2016-12-10

    We present a large, uniform analysis of young (≈10–150 Myr) ultracool dwarfs, based on new high-precision infrared (IR) parallaxes for 68 objects. We find that low-gravity (vl-g) late-M and L dwarfs form a continuous sequence in IR color–magnitude diagrams, separate from the field population and from current theoretical models. These vl-g objects also appear distinct from young substellar (brown dwarf and exoplanet) companions, suggesting that the two populations may have a different range of physical properties. In contrast, at the L/T transition, young, old, and spectrally peculiar objects all span a relatively narrow range in near-IR absolute magnitudes. At a given spectral type, the IR absolute magnitudes of young objects can be offset from ordinary field dwarfs, with the largest offsets occurring in the Y and J bands for late-M dwarfs (brighter than the field) and mid-/late-L dwarfs (fainter than the field). Overall, low-gravity (vl-g) objects have the most uniform photometric behavior, while intermediate gravity (int-g) objects are more diverse, suggesting a third governing parameter beyond spectral type and gravity class. We examine the moving group membership for all young ultracool dwarfs with parallaxes, changing the status of 23 objects (including 8 previously identified planetary-mass candidates) and fortifying the status of another 28 objects. We use our resulting age-calibrated sample to establish empirical young isochrones and show a declining frequency of vl-g objects relative to int-g objects with increasing age. Notable individual objects in our sample include high-velocity (≳100 km s⁻¹) int-g objects, very red late-L dwarfs with high surface gravities, candidate disk-bearing members of the MBM20 cloud and β Pic moving group, and very young distant interlopers. Finally, we provide a comprehensive summary of the absolute magnitudes and spectral classifications of young ultracool dwarfs, using a combined sample of 102 …

  7. THE RADIO ACTIVITY-ROTATION RELATION OF ULTRACOOL DWARFS

    International Nuclear Information System (INIS)

    McLean, M.; Berger, E.; Reiners, A.

    2012-01-01

    We present a new radio survey of about 100 late-M and L dwarfs undertaken with the Very Large Array. The sample was chosen to explore the role of rotation in the radio activity of ultracool dwarfs. As part of the survey we discovered radio emission from three new objects, 2MASS J0518113−310153 (M6.5), 2MASS J0952219−192431 (M7), and 2MASS J1314203+132001 (M7), and made an additional detection of LP 349-25 (M8). Combining the new sample with results from our previous studies and from the literature, we compile the largest sample to date of ultracool dwarfs with radio observations and measured rotation velocities (167 objects). In the spectral type range M0-M6 we find a radio activity-rotation relation, with saturation at L_rad/L_bol ≈ 10^-7.5 above v sin i ≈ 5 km s⁻¹, similar to the relation in Hα and X-rays. However, at spectral types ≳M7 the ratio of radio to bolometric luminosity increases significantly regardless of rotation velocity, and the scatter in radio luminosity increases. In particular, while the most rapid rotators (v sin i ≳ 20 km s⁻¹) exhibit 'super-saturation' in X-rays and Hα, this effect is not seen in the radio. We also find that ultracool dwarfs with v sin i ≳ 20 km s⁻¹ have a radio detection fraction about a factor of three higher than that of more slowly rotating objects. When measured in terms of the Rossby number (Ro), the radio activity-rotation relation follows a single trend with no apparent saturation from G to L dwarfs and down to Ro ~ 10^-3; in X-rays and Hα, by contrast, there is clear saturation at small Ro. We also examine the radio surface flux (L_rad/R_*^2) as a function of Ro. The continued role of rotation in the overall level of radio activity and in the fraction of active sources, and the single trend of L_rad/L_bol and L_rad/R_*^2 as a function of Ro from G to L dwarfs, indicates that rotation effects are important in regulating the topology or strength of magnetic fields in at least some fully convective dwarfs. The fact that …
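
    The Rossby-number bookkeeping used in such rotation-activity studies is simple to sketch; the convective turnover time, saturation level, and power-law slope below are illustrative assumptions rather than fitted values.

      import numpy as np

      def rossby(p_rot_days, tau_conv_days=70.0):
          """Rossby number Ro = P_rot / tau_conv (turnover time is a placeholder)."""
          return p_rot_days / tau_conv_days

      def activity_ratio(ro, ro_sat=0.1, sat_level=10 ** -7.5, slope=-2.0):
          """Toy saturated/power-law activity relation L_rad/L_bol(Ro)."""
          ro = np.asarray(ro, dtype=float)
          return np.where(ro < ro_sat, sat_level, sat_level * (ro / ro_sat) ** slope)

      print(activity_ratio(rossby(np.array([0.1, 7.0, 70.0]))))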

  8. NEAR-INFRARED LINEAR POLARIZATION OF ULTRACOOL DWARFS

    International Nuclear Information System (INIS)

    Zapatero Osorio, M. R.; Bejar, V. J. S.; Rebolo, R.; Acosta-Pulido, J. A.; Manchado, A.; Pena Ramirez, K.; Goldman, B.; Caballero, J. A.

    2011-01-01

    We report on near-infrared J- and H-band linear polarimetric photometry of eight ultracool dwarfs (two late-M, five L0-L7.5, and one T2.5) with known evidence for photometric variability due to dust clouds, anomalous red infrared colors, or low-gravity atmospheres. The polarimetric data were acquired with the LIRIS instrument on the William Herschel Telescope. We also provide mid-infrared photometry in the interval 3.4-24 μm for some targets obtained with Spitzer and WISE, which has allowed us to confirm the peculiar red colors of five sources in the sample. We can impose modest upper limits of 0.9% and 1.8% on the linear polarization degree for seven targets with a confidence of 99%. Only one source, 2MASS J02411151-0326587 (L0), appears to be strongly polarized (P ∼ 3%) in the J band with a significance level of P/σ_P ∼ 10. The likely origin of its linearly polarized light and rather red infrared colors may reside in a surrounding disk with an asymmetric distribution of grains. Given its proximity (66 ± 8 pc), this object becomes an excellent target for the direct detection of the disk.

  9. Periodic optical variability of radio-detected ultracool dwarfs

    International Nuclear Information System (INIS)

    Harding, L. K.; Golden, A.; Singh, Navtej; Sheehan, B.; Butler, R. F.; Hallinan, G.; Boyle, R. P.; Zavala, R. T.

    2013-01-01

    A fraction of very low mass stars and brown dwarfs are known to be radio active, in some cases producing periodic pulses. Extensive studies of two such objects have also revealed optical periodic variability, and the nature of this variability remains unclear. Here, we report on multi-epoch optical photometric monitoring of six radio-detected dwarfs, spanning the ∼M8-L3.5 spectral range, conducted to investigate the ubiquity of periodic optical variability in radio-detected ultracool dwarfs. This survey is the most sensitive ground-based study carried out to date in search of periodic optical variability from late-type dwarfs, where we obtained 250 hr of monitoring, delivering photometric precision as low as ∼0.15%. Five of the six targets exhibit clear periodicity, in all cases likely associated with the rotation period of the dwarf, with a marginal detection found for the sixth. Our data points to a likely association between radio and optical periodic variability in late-M/early-L dwarfs, although the underlying physical cause of this correlation remains unclear. In one case, we have multiple epochs of monitoring of the archetype of pulsing radio dwarfs, the M9 TVLM 513–46546, spanning a period of 5 yr, which is sufficiently stable in phase to allow us to establish a period of 1.95958 ± 0.00005 hr. This phase stability may be associated with a large-scale stable magnetic field, further strengthening the correlation between radio activity and periodic optical variability. Finally, we find a tentative spin-orbit alignment of one component of the very low mass binary, LP 349–25.

  10. Periodic optical variability of radio-detected ultracool dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Harding, L. K.; Golden, A.; Singh, Navtej; Sheehan, B.; Butler, R. F. [Centre for Astronomy, National University of Ireland, Galway, University Road, Galway (Ireland); Hallinan, G. [Cahill Center for Astrophysics, California Institute of Technology, 1200 East California Boulevard, MC 249-17, Pasadena, CA 91125 (United States); Boyle, R. P. [Vatican Observatory Research Group, Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Zavala, R. T., E-mail: lkh@astro.caltech.edu [United States Naval Observatory, Flagstaff Station, Flagstaff, AZ 86001 (United States)

    2013-12-20

    A fraction of very low mass stars and brown dwarfs are known to be radio active, in some cases producing periodic pulses. Extensive studies of two such objects have also revealed optical periodic variability, and the nature of this variability remains unclear. Here, we report on multi-epoch optical photometric monitoring of six radio-detected dwarfs, spanning the ∼M8-L3.5 spectral range, conducted to investigate the ubiquity of periodic optical variability in radio-detected ultracool dwarfs. This survey is the most sensitive ground-based study carried out to date in search of periodic optical variability from late-type dwarfs, where we obtained 250 hr of monitoring, delivering photometric precision as low as ∼0.15%. Five of the six targets exhibit clear periodicity, in all cases likely associated with the rotation period of the dwarf, with a marginal detection found for the sixth. Our data points to a likely association between radio and optical periodic variability in late-M/early-L dwarfs, although the underlying physical cause of this correlation remains unclear. In one case, we have multiple epochs of monitoring of the archetype of pulsing radio dwarfs, the M9 TVLM 513–46546, spanning a period of 5 yr, which is sufficiently stable in phase to allow us to establish a period of 1.95958 ± 0.00005 hr. This phase stability may be associated with a large-scale stable magnetic field, further strengthening the correlation between radio activity and periodic optical variability. Finally, we find a tentative spin-orbit alignment of one component of the very low mass binary, LP 349–25.
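
    A rotation period like the one quoted above is typically recovered from unevenly sampled photometry with a Lomb-Scargle periodogram; the sketch below injects a toy signal into synthetic data and recovers it with scipy.

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(7)
      t = np.sort(rng.uniform(0, 30, 400))            # hours, uneven sampling
      p_true = 1.96                                   # hours (toy period)
      flux = (1.0 + 0.004 * np.sin(2 * np.pi * t / p_true)
              + rng.normal(0, 0.0015, t.size))        # synthetic light curve

      periods = np.linspace(1.5, 2.5, 2000)           # trial periods, hours
      ang_freq = 2 * np.pi / periods                  # lombscargle uses angular frequency
      power = lombscargle(t, flux - flux.mean(), ang_freq)
      print(f"best period ~ {periods[int(np.argmax(power))]:.3f} h")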

  11. The effects of diffusion in hot subdwarf progenitors from the common envelope channel

    Science.gov (United States)

    Byrne, Conor M.; Jeffery, C. Simon; Tout, Christopher A.; Hu, Haili

    2018-04-01

    Diffusion of elements in the atmosphere and envelope of a star can drastically alter its surface composition, leading to extreme chemical peculiarities. We consider the case of hot subdwarfs, where surface helium abundances range from practically zero to almost 100 percent. Since hot subdwarfs can form via a number of different evolution channels, a key question concerns how the formation mechanism is connected to the present surface chemistry. A sequence of extreme horizontal branch star models was generated by producing post-common envelope stars from red giants. Evolution was computed with MESA from envelope ejection up to core-helium ignition. Surface abundances were calculated at the zero-age horizontal branch for models with and without diffusion. A number of simulations also included radiative levitation. The goal was to study surface chemistry during evolution from cool giant to hot subdwarf and determine when the characteristic subdwarf surface is established. Only stars leaving the giant branch close to core-helium ignition become hydrogen-rich subdwarfs at the zero-age horizontal branch. Diffusion, including radiative levitation, depletes the initial surface helium in all cases. All subdwarf models rapidly become more depleted than observations allow. Surface abundances of other elements follow observed trends in general, but not in detail. Additional physics is required.

  12. LB 3459, an O-type subdwarf eclipsing binary system

    International Nuclear Information System (INIS)

    Kilkenny, D.; Penfold, J.E.; Hilditch, R.W.

    1979-01-01

    Four-colour photometry of the short-period eclipsing binary system LB 3459 confirms features seen in earlier, less-detailed data. An analysis of all the observational data suggests the system to be an O-type subdwarf plus a hot white dwarf rather than two sdO stars. A value of 0.03 is obtained for the linear limb-darkening coefficient of the primary, and estimates of the absolute magnitudes of the two components give a distance of 70 ± 25 pc for the system. The primary and secondary may have radii as small as 0.04 solar radius and 0.02 solar radius respectively, indicating a component separation of only 0.25 solar radius. Several unsolved problems connected with the nature and evolution of the LB 3459 system are noted. (author)

  13. Evolutionary model of the subdwarf binary system LB3459

    International Nuclear Information System (INIS)

    Paczynski, B.; Dearborn, D.S.

    1980-01-01

    An evolutionary model is proposed for the eclipsing binary system LB 3459 (= CPD −60° 389 = HDE 269696). The two stars are hot subdwarfs with degenerate helium cores, hydrogen-burning shell sources and low-mass hydrogen-rich envelopes. The system probably evolved through two common envelope phases. After the first such phase it might have looked like the semi-detached binary AS Eri. Soon after the second common envelope phase the system might have looked like UU Sge, an eclipsing binary nucleus of a planetary nebula. The present mass of the optical (spectroscopic) primary is probably close to 0.24 solar mass, and the predicted radial velocity amplitude of the primary is about 150 km/s. The optical secondary should be hotter and bolometrically brighter, with a mass of 0.32 solar mass. The primary eclipse is an occultation. (author)

  14. The AllWISE motion survey and the quest for cold subdwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Kirkpatrick, J. Davy; Fajardo-Acosta, Sergio; Gelino, Christopher R.; Fowler, John W.; Cutri, Roc M. [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, Pasadena, CA 91125 (United States); Schneider, Adam; Cushing, Michael C. [Department of Physics and Astronomy, MS 111, University of Toledo, 2801 West Bancroft St., Toledo, OH 43606-3328 (United States); Mace, Gregory N.; Wright, Edward L.; Logsdon, Sarah E.; McLean, Ian S. [Department of Physics and Astronomy, UCLA, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States); Skrutskie, Michael F. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States); Eisenhardt, Peter R.; Stern, Daniel [NASA Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Baloković, Mislav [California Institute of Technology, MC 249-17, Pasadena, CA 91125 (United States); Burgasser, Adam J. [Department of Physics, University of California, San Diego, CA 92093 (United States); Faherty, Jacqueline K. [Department of Terrestrial Magnetism, Carnegie Institution of Washington, Washington, DC 20015 (United States); Lansbury, George B. [Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Rich, J. A. [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Skrzypek, Nathalie, E-mail: davy@ipac.caltech.edu [Astro Group, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom); and others

    2014-03-10

    The AllWISE processing pipeline has measured motions for all objects detected on Wide-field Infrared Survey Explorer (WISE) images taken between 2010 January and 2011 February. In this paper, we discuss new capabilities added to the software pipeline in order to make motion measurements possible, and we characterize the resulting data products for use by future researchers. Using a stringent set of selection criteria, we find 22,445 objects that have significant AllWISE motions, of which 3525 have motions that can be independently confirmed from earlier Two Micron All Sky Survey (2MASS) images yet lack any published motions in SIMBAD. Another 58 sources lack 2MASS counterparts and are presented as motion candidates only. Limited spectroscopic follow-up of this list has already revealed eight new L subdwarfs. These may provide the first hints of a 'subdwarf gap' at mid-L types that would indicate the break between the stellar and substellar populations at low metallicities (i.e., old ages). Another object in the motion list, WISEA J154045.67–510139.3, is a bright (J ≈ 9 mag) object of type M6; both the spectrophotometric distance and a crude preliminary parallax place it ∼6 pc from the Sun. We also compare our list of motion objects to the recently published list of 762 WISE motion objects from Luhman. While these first large motion studies with WISE data have been very successful in revealing previously overlooked nearby dwarfs, both studies missed objects that the other found, demonstrating that many other nearby objects likely await discovery in the AllWISE data products.

  15. Ultra-cool dwarfs viewed equator-on: surveying the best host stars for biosignature detection in transiting exoplanets

    Science.gov (United States)

    Miles-Paez, Paulo; Metchev, Stanimir; Burgasser, Adam; Apai, Daniel; Palle, Enric; Zapatero Osorio, Maria Rosa; Artigau, Etienne; Mace, Greg; Tannock, Megan; Triaud, Amaury

    2018-05-01

    There are about 150 known planets around M dwarfs, but only one system around an ultra-cool (>M7) dwarf: TRAPPIST-1. Ultra-cool dwarfs are arguably the most promising hosts for atmospheric and biosignature detection in transiting planets because of the enhanced feature contrast in transit and eclipse spectroscopy. We propose a Spitzer survey to continuously monitor 15 of the brightest ultra-cool dwarfs over 3 days. To maximize the probability of detecting transiting planets, we have selected only targets seen close to equator-on. Spin-orbit alignment expectations dictate that the planetary systems around these ultra-cool dwarfs should also be oriented nearly edge-on. Any planet detections from this survey will immediately become top-priority targets for JWST transit spectroscopy. No other telescope, present or within the foreseeable future, will be able to conduct a similarly sensitive and dedicated survey for characterizable Earth analogs.
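
    The geometric argument for preferring equator-on hosts can be put in numbers with a short sketch; the stellar radius, orbital distance, and inclination constraint below are hypothetical.

      import numpy as np

      r_star_rsun = 0.12                     # hypothetical ultra-cool dwarf radius
      a_au = 0.02                            # hypothetical planet semi-major axis
      r_star_au = r_star_rsun * 0.00465      # 1 R_sun ~ 0.00465 au

      p_isotropic = r_star_au / a_au         # transit probability, random orientations
      # If spin-orbit alignment holds and the star is viewed within ~10 deg of
      # equator-on, only that slice of orbital inclinations is relevant:
      p_aligned = min(1.0, p_isotropic / np.sin(np.radians(10.0)))
      print(f"isotropic: {p_isotropic:.3f}, near-edge-on host: {p_aligned:.3f}")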

  16. The Physical Nature of Subdwarf A Stars: White Dwarf Impostors

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Warren R. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Kilic, Mukremin; Gianninas, A., E-mail: wbrown@cfa.harvard.edu, E-mail: kilic@ou.edu, E-mail: alexg@nhn.ou.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks Street, Norman, OK, 73019 (United States)

    2017-04-10

    We address the physical nature of subdwarf A-type (sdA) stars and their possible link to extremely low mass (ELM) white dwarfs (WDs). The two classes of objects are confused in low-resolution spectroscopy. However, colors and proper motions indicate that sdA stars are cooler and more luminous, and thus larger in radius, than published ELM WDs. We demonstrate that surface gravities derived from pure hydrogen models suffer a systematic ∼1 dex error for sdA stars, likely explained by metal line blanketing below 9000 K. A detailed study of five eclipsing binaries with radial velocity orbital solutions and infrared excess establishes that these sdA stars are metal-poor ≃1.2 M⊙ main sequence stars with ≃0.8 M⊙ companions. While WDs must exist at sdA temperatures, only ∼1% of a magnitude-limited sdA sample should be ELM WDs. We conclude that the majority of sdA stars are metal-poor A-F type stars in the halo, and that recently discovered pulsating ELM WD-like stars with no obvious radial velocity variations may be SX Phe variables, not pulsating WDs.

  17. On the 3He anomaly in hot subdwarf B stars

    Science.gov (United States)

    Schneider, David; Irrgang, Andreas; Heber, Ulrich; Nieva, Maria F.; Przybilla, Norbert

    2017-12-01

    Decades ago, 3He isotope enrichment was discovered in helium-weak B-type main-sequence stars, in blue horizontal branch stars and in hot subdwarf B (sdB) stars, i.e., helium-core burning stars of the extreme horizontal branch. Diffusion processes in the atmospheres of these stars lead to the observed abundance anomalies. Quantitative spectral analyses of high-resolution spectra to derive photospheric isotopic helium abundance ratios for known 3He sdBs have not been performed yet. We present preliminary results from high-resolution and high S/N spectra to determine the 3He and 4He abundances of nine known 3He sdBs. We used a hybrid local/non-local thermodynamic equilibrium (LTE/NLTE) approach for B-type stars, investigating multiple He I lines, including λ4922 Å and λ6678 Å, which show the strongest isotopic shifts in the optical spectral range. We also report the discovery of four new 3He sdBs from the ESO Supernova Progenitor survey. Most of the 3He sdBs cluster in a narrow temperature strip between ~26,000 K and ~30,000 K and have almost no atmospheric 4He at all. Interestingly, three 3He sdBs show evidence for vertical helium stratification.

  18. The Physical Nature of Subdwarf A Stars: White Dwarf Impostors

    Science.gov (United States)

    Brown, Warren R.; Kilic, Mukremin; Gianninas, A.

    2017-04-01

    We address the physical nature of subdwarf A-type (sdA) stars and their possible link to extremely low mass (ELM) white dwarfs (WDs). The two classes of objects are confused in low-resolution spectroscopy. However, colors and proper motions indicate that sdA stars are cooler and more luminous, and thus larger in radius, than published ELM WDs. We demonstrate that surface gravities derived from pure hydrogen models suffer a systematic ˜1 dex error for sdA stars, likely explained by metal line blanketing below 9000 K. A detailed study of five eclipsing binaries with radial velocity orbital solutions and infrared excess establishes that these sdA stars are metal-poor ≃1.2 M ⊙ main sequence stars with ≃0.8 M ⊙ companions. While WDs must exist at sdA temperatures, only ˜1% of a magnitude-limited sdA sample should be ELM WDs. We conclude that the majority of sdA stars are metal-poor A-F type stars in the halo, and that recently discovered pulsating ELM WD-like stars with no obvious radial velocity variations may be SX Phe variables, not pulsating WDs.

  20. On the 3He anomaly in hot subdwarf B stars

    Directory of Open Access Journals (Sweden)

    Schneider David

    2017-12-01

    Full Text Available Decades ago, 3He isotope enrichment was discovered in helium-weak B-type main-sequence stars, in blue horizontal branch stars and in hot subdwarf B (sdB) stars, i.e., helium-core burning stars of the extreme horizontal branch. Diffusion processes in the atmospheres of these stars lead to the observed abundance anomalies. Quantitative spectral analyses of high-resolution spectra to derive photospheric isotopic helium abundance ratios for known 3He sdBs have not been performed yet. We present preliminary results from high-resolution, high-S/N spectra used to determine the 3He and 4He abundances of nine known 3He sdBs. We used a hybrid local/non-local thermodynamic equilibrium (LTE/NLTE) approach for B-type stars, investigating multiple He I lines, including λ4922 Å and λ6678 Å, which show the strongest isotopic shifts in the optical spectral range. We also report the discovery of four new 3He sdBs from the ESO Supernova Progenitor survey. Most of the 3He sdBs cluster in a narrow temperature strip between ∼26,000 K and ∼30,000 K and have almost no atmospheric 4He at all. Interestingly, three 3He sdBs show evidence for vertical helium stratification.

  1. POPULATION SYNTHESIS OF HOT SUBDWARFS: A PARAMETER STUDY

    International Nuclear Information System (INIS)

    Clausen, Drew; Wade, Richard A.; Kopparapu, Ravi Kumar; O'Shaughnessy, Richard

    2012-01-01

    Binaries that contain a hot subdwarf (sdB) star and a main-sequence companion may have interacted in the past. This binary population has historically helped shape our understanding of binary stellar evolution. We have computed a grid of binary population synthesis models using different assumptions about the minimum core mass for helium ignition, the envelope binding energy, the common-envelope ejection efficiency, the amount of mass and angular momentum lost during stable mass transfer, and the criteria for stable mass transfer on the red giant branch and in the Hertzsprung gap. These parameters separately and together can significantly change the entire predicted population of sdBs. Nonetheless, several different parameter sets can reproduce the observed subpopulation of sdB + white dwarf and sdB + M dwarf binaries, which has been used to constrain these parameters in previous studies. The period distribution of sdB + early F dwarf binaries offers a better test of different mass transfer scenarios for stars that fill their Roche lobes on the red giant branch.

  2. A test of Pulsation Theory in Hot B Subdwarfs

    Science.gov (United States)

    Fontaine, Gilles

    There are currently of the order of 15 hot B subdwarf (sdB) stars which are known to exhibit low-amplitude (a few to tens of millimag), short-period (100-500 s), multiperiodic luminosity variations. These pulsations are thought to be driven by an opacity bump linked to the presence of a local enhancement of the iron abundance in the envelopes of sdB stars. Such an enhancement results quite naturally from the diffusive equilibrium between gravitational settling and radiative support in the stellar envelope. Nevertheless, surveys for pulsating sdB stars show that, in several instances, variable and non-variable objects with similar effective temperatures and gravities may coexist in the HR diagram. This result suggests that an additional parameter, perhaps a weak stellar wind, might affect the extent of the iron reservoir and thus the ability of the latter to drive pulsations in sdB stars. Fortunately, it is expected that such a wind might also leave its mark on the photospheric heavy element abundance patterns. The intended FUSE observations will i) permit a direct comparison of the heavy element abundance patterns in variable and nonvariable stars of similar atmospheric parameters; ii) provide a consistency check with our wind models; and iii) provide a test of the currently-favored explanation for the driving of the observed pulsations.

  3. A Test of Pulsation Theory in Hot B Subdwarfs (bis)

    Science.gov (United States)

    Fontaine, G.

    There are currently 33 hot B subdwarf (sdB) stars which are known to exhibit low-amplitude (a few to tens of mmag), short-period (100-500 s), multiperiodic luminosity variations caused by acoustic mode instabilities. These pulsations are thought to be driven by an opacity bump linked to the presence of a local enhancement of the iron (and other iron-peak elements) abundance in the envelopes of sdB stars. Such an enhancement results quite naturally from the diffusive equilibrium between gravitational settling and radiative support in the stellar envelope. Nevertheless, surveys for pulsating sdB stars show that variable and nonvariable objects with similar effective temperatures and gravities coexist in the log g-Teff diagram. This puzzling result suggests that an additional parameter, perhaps a weak stellar wind, might affect the extent of the iron reservoir and thus the ability of the latter to drive pulsations in sdB stars. Fortunately, it is expected that such a wind might also leave its mark on the photospheric heavy element abundance patterns. The intended FUSE observations will 1) permit a direct comparison of the heavy element abundance patterns in variable and nonvariable stars of similar atmospheric parameters, 2) provide a consistency check with our wind models, and 3) provide a test of the currently-favored explanation for the driving of the observed pulsations.

  4. Temperate Earth-sized planets transiting a nearby ultracool dwarf star.

    Science.gov (United States)

    Gillon, Michaël; Jehin, Emmanuël; Lederer, Susan M; Delrez, Laetitia; de Wit, Julien; Burdanov, Artem; Van Grootel, Valérie; Burgasser, Adam J; Triaud, Amaury H M J; Opitom, Cyrielle; Demory, Brice-Olivier; Sahu, Devendra K; Bardalez Gagliuffi, Daniella; Magain, Pierre; Queloz, Didier

    2016-05-12

    Star-like objects with effective temperatures of less than 2,700 kelvin are referred to as 'ultracool dwarfs'. This heterogeneous group includes stars of extremely low mass as well as brown dwarfs (substellar objects not massive enough to sustain hydrogen fusion), and represents about 15 per cent of the population of astronomical objects near the Sun. Core-accretion theory predicts that, given the small masses of these ultracool dwarfs, and the small sizes of their protoplanetary disks, there should be a large but hitherto undetected population of terrestrial planets orbiting them, ranging from metal-rich Mercury-sized planets to more hospitable volatile-rich Earth-sized planets. Here we report observations of three short-period Earth-sized planets transiting an ultracool dwarf star only 12 parsecs away. The inner two planets receive four times and two times the irradiation of Earth, respectively, placing them close to the inner edge of the habitable zone of the star. Our data suggest that 11 orbits remain possible for the third planet, the most likely resulting in irradiation significantly less than that received by Earth. The infrared brightness of the host star, combined with its Jupiter-like size, offers the possibility of thoroughly characterizing the components of this nearby planetary system.
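
    The irradiation figures quoted above follow from a simple scaling of the stellar luminosity with orbital distance. The sketch below is an illustrative calculation only (not part of the original record): it assumes a host luminosity of ∼0.05% L⊙, the value quoted for this system elsewhere in these records, and inverts S/S⊕ = (L/L⊙)/(a/AU)² to recover the orbital distances implied by four and two times Earth's irradiation.

        import math

        L_star = 5e-4   # stellar luminosity in solar units (assumed, ~0.05% L_sun)

        def orbital_distance_au(irradiation_earth_units):
            """Distance (AU) at which a planet receives the given irradiation,
            using S/S_earth = (L/L_sun) / (a/AU)**2."""
            return math.sqrt(L_star / irradiation_earth_units)

        for s in (4.0, 2.0, 1.0):
            print(f"S = {s:>3.1f} S_earth  ->  a ~ {orbital_distance_au(s):.4f} AU")

    With these assumed inputs the inner two planets land at roughly 0.011 and 0.016 AU, i.e. orbital periods of order a day, consistent with the short periods described in the abstract.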

  5. TRAPPIST-UCDTS: A prototype search for habitable planets transiting ultra-cool stars

    Directory of Open Access Journals (Sweden)

    Magain P.

    2013-04-01

    Full Text Available The ∼1000 nearest ultra-cool stars (spectral type M6 and later) represent a unique opportunity for the search for life outside the solar system. Due to their small luminosity, their habitable zone is 30–100 times closer than for the Sun, the corresponding orbital periods ranging from one to a few days. Thanks to this proximity, the transits of a habitable planet are much more probable and frequent than for an Earth-Sun analog, while the tiny size of these stars (∼1 Jupiter radius) leads to transits deep enough for a ground-based detection, even for sub-Earth size planets. Furthermore, a habitable planet transiting one of these nearby ultra-cool stars would be amenable to a thorough atmospheric characterization, including the detection of possible biosignatures, notably with the forthcoming JWST. Motivated by these reasons, we have set up the concept of a ground-based survey optimized for detecting planets of Earth-size and below transiting the nearest Southern ultra-cool stars. To assess thoroughly the actual potential of this future survey, we are currently conducting a prototype mini-survey using the TRAPPIST robotic 60cm telescope located at La Silla ESO Observatory (Chile). We summarize here the preliminary results of this mini-survey that fully validate our concept.
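
    The gain in transit depth around a Jupiter-sized star follows from the standard relation δ ≈ (Rp/R★)². The calculation below is purely illustrative (the radii are assumed round numbers, not values from the record), comparing an Earth-sized planet in front of a Sun-like star and in front of a ∼0.1 R⊙ ultra-cool dwarf.

        R_EARTH_IN_RSUN = 0.0092   # Earth radius in solar radii (approximate)

        def transit_depth(r_planet_rsun, r_star_rsun):
            """Fractional flux drop during transit, delta = (Rp/Rstar)**2."""
            return (r_planet_rsun / r_star_rsun) ** 2

        for r_star, label in [(1.0, "Sun-like star"), (0.1, "ultra-cool dwarf (~1 R_Jup)")]:
            depth = transit_depth(R_EARTH_IN_RSUN, r_star)
            print(f"{label:32s}  depth ~ {depth * 100:.3f}% ({depth * 1e6:.0f} ppm)")

    The depth rises from roughly 0.01% for a Sun-like host to nearly 1% for the ultra-cool dwarf, which is why such transits are reachable from the ground.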

  6. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), or in Indonesian terminology holistic quality management, because benchmarking is a tool to look for ideas or to learn from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  7. VizieR Online Data Catalog: Hot subdwarf stars in LAMOST DR1 (Luo+, 2016)

    Science.gov (United States)

    Luo, Y.-P.; Nemeth, P.; Liu, C.; Deng, L.-C.; Han, Z.-W.

    2018-01-01

    We present a catalog of 166 spectroscopically identified hot subdwarf stars from LAMOST DR1, 44 of which show the characteristics of cool companions in their optical spectra. Atmospheric parameters of 122 subdwarf stars with non-composite spectra were measured by fitting the profiles of hydrogen (H) and helium (He) lines with synthetic spectra from non-LTE model atmospheres. A unique property of our sample is that it covers a large range in apparent magnitude and galactic latitude, therefore it contains a mix of stars from different populations and galactic environments. (3 data files).

  8. Ages of Globular Clusters from HIPPARCOS Parallaxes of Local Subdwarfs

    Science.gov (United States)

    Gratton, Raffaele G.; Fusi Pecci, Flavio; Carretta, Eugenio; Clementini, Gisella; Corsi, Carlo E.; Lattanzi, Mario

    1997-12-01

    We report here initial but strongly conclusive results for absolute ages of Galactic globular clusters (GGCs). This study is based on high-precision trigonometric parallaxes from the HIPPARCOS satellite coupled with accurate metal abundances ([Fe/H], [O/Fe], and [α/Fe]) from high-resolution spectroscopy for a sample of about thirty subdwarfs. Systematic effects due to star selection (Lutz-Kelker corrections to parallaxes) and the possible presence of undetected binaries in the sample of bona fide single stars are examined, and appropriate corrections are estimated. They are found to be small for our sample. The new data allow us to reliably define the absolute location of the main sequence (MS) as a function of metallicity. These results are then used to derive distances and ages for a carefully selected sample of nine globular clusters having metallicities determined from high-dispersion spectra of individual giants according to a procedure totally consistent with that used for the field subdwarfs. Very precise and homogeneous reddening values have also been independently determined for these clusters. Random errors for our distance moduli are ±0.08 mag, and systematic errors are likely of the same order of magnitude. These very accurate distances allow us to derive ages with internal errors of ~12% (±1.5 Gyr). The main results are: 1. HIPPARCOS parallaxes are smaller than corresponding ground-based measurements, leading, in turn, to longer distance moduli (~0.2 mag) and younger ages (~2.8 Gyr). 2. The distance to NGC 6752 derived from our MS fitting is consistent with that determined using the white dwarf cooling sequence. 3. The relation between the zero-age HB (ZAHB) absolute magnitude and metallicity for the nine program clusters is M_V(ZAHB) = (0.22 ± 0.09)([Fe/H] + 1.5) + (0.49 ± 0.04). This relation is fairly consistent with some of the most recent theoretical models. Within quoted errors, the slope is in agreement with that given by the Baade-Wesselink (BW
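
    As a quick illustration of how the quoted ZAHB relation is used, the worked example below evaluates it at [Fe/H] = -1.5 and converts an assumed apparent ZAHB magnitude and extinction (illustrative numbers only, not taken from the record) into a distance modulus and distance.

        M_V(\mathrm{ZAHB}) = 0.22\,([\mathrm{Fe/H}] + 1.5) + 0.49
                           = 0.22 \times 0 + 0.49 = 0.49 \ \mathrm{mag}
                           \quad \text{at } [\mathrm{Fe/H}] = -1.5

        (m - M)_0 = V(\mathrm{ZAHB}) - M_V(\mathrm{ZAHB}) - A_V
                  = 15.20 - 0.49 - 0.10 = 14.61 \ \mathrm{mag}
        \;\Rightarrow\; d = 10^{\,1 + 0.2\,(m-M)_0}\ \mathrm{pc} \approx 8.4 \ \mathrm{kpc}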

  9. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  10. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  11. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  12. CPD -20 1123 (Albus 1) Is a Bright He-B Subdwarf

    Czech Academy of Sciences Publication Activity Database

    Vennes, S.; Kawka, Adela; Smith, J.A.

    2007-01-01

    Roč. 668, č. 1 (2007), L59-L61 ISSN 0004-637X R&D Projects: GA ČR GP205/05/P186 Institutional research plan: CEZ:AV0Z10030501 Keywords : chemically peculiar stars * subdwarfs Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 6.405, year: 2007

  13. Hot subdwarfs formed from the merger of two He white dwarfs

    Science.gov (United States)

    Schwab, Josiah

    2018-06-01

    We perform stellar evolution calculations of the remnant of the merger of two He white dwarfs (WDs). Our initial conditions are taken from hydrodynamic simulations of double WD mergers and the viscous disc phase that follows. We evolve these objects from shortly after the merger into their core He-burning phase, when they appear as hot subdwarf stars. We use our models to quantify the amount of H that survives the merger, finding that it is difficult for ≳10⁻⁴ M⊙ of H to survive, with even less being concentrated in the surface layers of the object. We also study the rotational evolution of these merger remnants. We find that mass-loss over the ∼10⁴ yr following the merger can significantly reduce the angular momentum of these objects. As hot subdwarfs, our models have moderate surface rotation velocities of 30-100 km s⁻¹. The properties of our models are not representative of many apparently isolated hot subdwarfs, suggesting that those objects may form via other channels or that our modelling is incomplete. However, a sub-population of hot subdwarfs are moderate-to-rapid rotators and/or have He-rich atmospheres. Our models help to connect the observed properties of these objects to their progenitor systems.

  14. 2MASS J06164006-6407194: The First Outer Halo L Subdwarf

    NARCIS (Netherlands)

    Cushing, Michael C.; Looper, Dagny; Burgasser, Adam J.; Kirkpatrick, J. Davy; Faherty, Jacqueline; Cruz, Kelle L.; Sweet, Anne; Sanderson, Robyn E.

    2009-01-01

    We present the serendipitous discovery of an L subdwarf in the Two Micron All Sky Survey (2MASS) J06164006-6407194, in a search of the 2MASS for T dwarfs. Its spectrum exhibits features indicative of both a cool and metal poor atmosphere including a heavily pressure-broadened K I resonant doublet,

  15. A search for new hot subdwarf stars by means of Virtual Observatory tools

    Science.gov (United States)

    Oreiro, R.; Rodríguez-López, C.; Solano, E.; Ulla, A.; Østensen, R.; García-Torres, M.

    2011-06-01

    Context. Recent massive sky surveys in different bandwidths are providing new opportunities to modern astronomy. The Virtual Observatory (VO) provides the adequate framework to handle the huge amount of information available and filter out data according to specific requirements. Aims: Hot subdwarf stars are faint, blue objects, and are the main contributors to the far-UV excess observed in elliptical galaxies. They offer an excellent laboratory to study close and wide binary systems, and to scrutinize their interiors through asteroseismology, since some of them undergo stellar oscillations. However, their origins are still uncertain, and increasing the number of detections is crucial to undertake statistical studies. In this work, we aim at defining a strategy to find new, uncatalogued hot subdwarfs. Methods: Making use of VO tools we thoroughly search stellar catalogues to retrieve multi-colour photometry and astrometric information of a known sample of blue objects, including hot subdwarfs, white dwarfs, cataclysmic variables and main-sequence OB stars. We define a procedure to distinguish among these spectral classes, which is particularly designed to obtain a hot subdwarf sample with a low contamination factor. To check the validity of the method, this procedure is then applied to two test sky regions: to the Kepler FoV and to a test region of 300 deg2 around (α:225, δ:5) deg. Results: As a result of the procedure we obtained 38 hot subdwarf candidates, 23 of which had already a spectral classification. We have acquired spectroscopy for three other targets, and four additional ones have an available SDSS spectrum, which we used to determine their spectral type. A temperature estimate is provided for the candidates based on their spectral energy distribution, considering two-atmospheres fit for objects with clear infrared excess as a signature of the presence of a cool companion. Eventually, out of 30 candidates with spectral classification, 26 objects were
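
    The selection procedure described above boils down to multi-colour cuts applied to a cross-matched catalogue. The snippet below is a purely illustrative reconstruction of such a photometric pre-selection; the column names and colour thresholds are assumptions chosen for the example, not the cuts used by the authors.

        import numpy as np

        # Hypothetical cross-matched catalogue: one row per source with
        # broadband magnitudes (column names are assumed for illustration).
        catalogue = np.array(
            [("obj1", 14.2, 14.5, 14.8, 15.1),
             ("obj2", 16.0, 15.7, 15.4, 15.2),
             ("obj3", 13.1, 13.5, 13.9, 14.3)],
            dtype=[("name", "U10"), ("u", "f4"), ("g", "f4"), ("r", "f4"), ("J", "f4")],
        )

        def is_hot_subdwarf_candidate(row, max_u_g=0.4, max_g_r=0.1, min_g_J=-1.0):
            """Very blue in the optical, no strong infrared excess.
            Thresholds are placeholders, not the published selection criteria."""
            u_g = row["u"] - row["g"]
            g_r = row["g"] - row["r"]
            g_J = row["g"] - row["J"]
            return (u_g < max_u_g) and (g_r < max_g_r) and (g_J > min_g_J)

        candidates = [row["name"] for row in catalogue if is_hot_subdwarf_candidate(row)]
        print("candidates:", candidates)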

  16. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  17. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  19. A NEAR-INFRARED SPECTROSCOPIC STUDY OF YOUNG FIELD ULTRACOOL DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Allers, K. N. [Department of Physics and Astronomy, Bucknell University, Lewisburg, PA 17837 (United States); Liu, Michael C., E-mail: k.allers@bucknell.edu [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States)

    2013-08-01

    We present a near-infrared (0.9-2.4 μm) spectroscopic study of 73 field ultracool dwarfs having spectroscopic and/or kinematic evidence of youth (≈10-300 Myr). Our sample is composed of 48 low-resolution (R ≈ 100) spectra and 41 moderate-resolution spectra (R ≳ 750-2000). First, we establish a method for spectral typing M5-L7 dwarfs at near-IR wavelengths that is independent of gravity. We find that both visual and index-based classification in the near-IR provides consistent spectral types with optical spectral types, though with a small systematic offset in the case of visual classification at J and K band. Second, we examine features in the spectra of ≈10 Myr ultracool dwarfs to define a set of gravity-sensitive indices based on FeH, VO, K I, Na I, and H-band continuum shape. We then create an index-based method for classifying the gravities of M6-L5 dwarfs that provides consistent results with gravity classifications from optical spectroscopy. Our index-based classification can distinguish between young and dusty objects. Guided by the resulting classifications, we propose a set of low-gravity spectral standards for the near-IR. Finally, we estimate the ages corresponding to our gravity classifications.
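
    A gravity-sensitive index of the kind described above is essentially a ratio of the mean flux in a feature band to the mean flux in a nearby continuum band. The snippet below is a generic illustration of that idea; the wavelength windows and the toy spectrum are placeholders chosen for the example, not the published index definitions.

        import numpy as np

        def band_index(wavelength, flux, feature_band, continuum_band):
            """Ratio of mean flux in a feature window to mean flux in a continuum window.
            Band limits are (min, max) wavelengths in the same units as `wavelength`."""
            in_feature = (wavelength >= feature_band[0]) & (wavelength <= feature_band[1])
            in_continuum = (wavelength >= continuum_band[0]) & (wavelength <= continuum_band[1])
            return np.mean(flux[in_feature]) / np.mean(flux[in_continuum])

        # Toy spectrum: wavelength in microns, arbitrary flux units, with a fake absorption dip.
        wl = np.linspace(0.9, 2.4, 3000)
        fl = np.ones_like(wl) - 0.3 * np.exp(-0.5 * ((wl - 1.20) / 0.01) ** 2)

        # Placeholder windows: feature near 1.20 um, continuum near 1.23 um.
        index = band_index(wl, fl, feature_band=(1.19, 1.21), continuum_band=(1.22, 1.24))
        print(f"feature/continuum index = {index:.3f}  (lower = deeper absorption)")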

  20. News on the X-ray emission from hot subdwarf stars

    Directory of Open Access Journals (Sweden)

    Palombara Nicola La

    2017-12-01

    Full Text Available In recent years, the high sensitivity of the instruments on board the XMM-Newton and Chandra satellites has allowed us to explore the properties of the X-ray emission from hot subdwarf stars. The small but growing sample of X-ray detected hot subdwarfs includes binary systems, in which the X-ray emission is due to wind accretion onto a compact companion (white dwarf or neutron star), as well as isolated sdO stars, in which X-rays are probably due to shock instabilities in the wind. X-ray observations of these low-mass stars provide information which can be useful for our understanding of the weak winds of this type of star and can lead to the discovery of particularly interesting binary systems. Here we report the most recent results we have obtained in this research area.

  1. New white dwarf and subdwarf stars in the Sloan Digital Sky Survey Data Release 12

    OpenAIRE

    Kepler, S. O.; Pelisoli, Ingrid; Koester, Detlev; Ourique, Gustavo; Romero, Alejandra Daniela; Reindl, Nicole; Kleinman, Scot J.; Eisenstein, Daniel J.; Valois, A. Dean M.; Amaral, Larissa A.

    2015-01-01

    We report the discovery of 6576 new spectroscopically confirmed white dwarf and subdwarf stars in the Sloan Digital Sky Survey Data Release 12. We obtain Teff, log g and mass for hydrogen atmosphere white dwarf stars (DAs) and helium atmosphere white dwarf stars (DBs), estimate the calcium/helium abundances for the white dwarf stars with metallic lines (DZs) and carbon/helium for carbon-dominated spectra (DQs). We found one central star of a planetary nebula, one ultracompact helium binary (AM ...

  2. New binaries among UV-selected, hot subdwarf stars and population properties

    Czech Academy of Sciences Publication Activity Database

    Kawka, Adela; Vennes, Stephane; O' Toole, S.; Nemeth, P.; Burton, D.; Kotze, E.; Buckley, D.A.H.

    2015-01-01

    Roč. 450, č. 4 (2015), s. 3514-3548 ISSN 0035-8711 R&D Projects: GA ČR GAP209/12/0217; GA ČR GA13-14581S; GA MŠk LG14013 Institutional support: RVO:67985815 Keywords : close binaries * spectroscopic * subdwarfs Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 4.952, year: 2015

  3. Far-ultraviolet spectrophotometry of two very hot O type subdwarfs

    Science.gov (United States)

    Drilling, J. S.; Holberg, J. B.; Schoenberner, D.

    1984-01-01

    As a result of a spectroscopic survey of stars classified as nonemission OB+, Drilling (1983) has detected 12 new subluminous O stars. It was found that these stars are the hottest known O type subdwarfs. The effective temperatures of the stars are 60,000 K or higher. It has been possible to observe two of these stars, LSE 21 and LS IV +10.9 deg, with Voyager 1. LSE 21 is one of the hottest of the new subdwarfs, with an effective temperature of at least 100,000 K. The optical spectrum indicates a hydrogen-rich atmosphere of high surface gravity. LS IV +10.9 deg is one of the cooler objects, with an effective temperature of 65,000 K. The optical spectrum indicates an extremely helium-rich atmosphere and a somewhat lower surface gravity than LSE 21. The Voyager 1 observations confirm the temperature scale set up by Schoenberner and Drilling (1984) for the hottest O type subdwarfs.

  4. GTC/OSIRIS SPECTROSCOPIC IDENTIFICATION OF A FAINT L SUBDWARF IN THE UKIRT INFRARED DEEP SKY SURVEY

    International Nuclear Information System (INIS)

    Lodieu, N.; Osorio, M. R. Zapatero; MartIn, E. L.; Solano, E.; Aberasturi, M.

    2010-01-01

    We present the discovery of an L subdwarf in 234 deg² common to the UK InfraRed Telescope (UKIRT) Infrared Deep Sky Survey Large Area Survey Data Release 2 and the Sloan Digital Sky Survey Data Release 3. This is the fifth L subdwarf announced to date, the first one identified in the UKIRT Infrared Deep Sky Survey, and the faintest known. The blue optical and near-infrared colors of ULAS J135058.86+081506.8 and its overall spectral energy distribution are similar to the known mid-L subdwarfs. Low-resolution optical (700-1000 nm) spectroscopy with the Optical System for Imaging and low Resolution Integrated Spectroscopy spectrograph on the 10.4 m Gran Telescopio de Canarias reveals that ULAS J135058.86+081506.8 exhibits a strong K I pressure-broadened line at 770 nm and a red slope longward of 800 nm, features characteristic of L-type dwarfs. From direct comparison with the four known L subdwarfs, we estimate its spectral type to be sdL4-sdL6 and derive a distance in the interval 94-170 pc. We provide a rough estimate of the space density for mid-L subdwarfs of 1.5 × 10⁻⁴ pc⁻³.

  5. The UV Spectrum of the Ultracool Dwarf LSR J1835+3259 Observed with the Hubble Space Telescope

    Science.gov (United States)

    Saur, Joachim; Fischer, Christian; Wennmacher, Alexandre; Feldman, Paul D.; Roth, Lorenz; Strobel, Darrell F.; Reiners, Ansgar

    2018-05-01

    An interesting question about ultracool dwarfs recently raised in the literature is whether their emission is purely internally driven or partially powered by external processes similar to the planetary aurorae known from the solar system. In this work, we present Hubble Space Telescope observations of the energy fluxes of the M8.5 ultracool dwarf LSR J1835+3259 throughout the ultraviolet (UV). The obtained spectra reveal that the object is generally UV-fainter compared with other earlier-type dwarfs. We detect the Mg II doublet at 2800 Å and constrain an average flux throughout the near-UV. In the far-UV without Lyα, the ultracool dwarf is extremely faint, with an energy output at least a factor of 250 smaller than expected from auroral emission physically similar to that on Jupiter. We also detect the red wing of the Lyα emission. Our overall finding is that the observed UV spectrum of LSR J1835+3259 resembles the spectrum of mid/late-type M-dwarf stars relatively well, but it is distinct from a spectrum expected from Jupiter-like auroral processes.

  6. Hydrogen in hot subdwarfs formed by double helium white dwarf mergers

    OpenAIRE

    Hall, Philip D.; Jeffery, C. Simon

    2016-01-01

    Isolated hot subdwarfs might be formed by the merging of two helium-core white dwarfs. Before merging, helium-core white dwarfs have hydrogen-rich envelopes and some of this hydrogen may survive the merger. We calculate the mass of hydrogen that is present at the start of such mergers and, with the assumption that hydrogen is mixed throughout the disrupted white dwarf in the merger process, estimate how much can survive. We find a hydrogen mass of up to about 2 × 10⁻³ M⊙ ...

  7. Probing the LHS Catalog. I. New Nearby Stars and the Coolest Subdwarf

    OpenAIRE

    Gizis, John E.; Reid, I. Neill

    1997-01-01

    We present moderate resolution spectroscopy of 112 cool dwarf stars to supplement the observations we have already presented in the Palomar/MSU Nearby-Star Spectroscopic Survey. The sample consists of 72 suspected nearby stars added to the The Preliminary Third Catalog of Nearby Stars since 1991 as well as 40 faint red stars selected from the LHS catalog. LHS 1826 is more metal-poor and cooler than the coolest previously known extreme subdwarf, LHS 1742a. LHS 2195 is a very late M dwarf of ty...

  8. TWO NEW LONG-PERIOD HOT SUBDWARF BINARIES WITH DWARF COMPANIONS

    Energy Technology Data Exchange (ETDEWEB)

    Barlow, Brad N.; Wade, Richard A. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Liss, Sandra E. [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904-4325 (United States); Green, Elizabeth M., E-mail: bbarlow@psu.edu [Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)

    2013-07-01

    Hot subdwarf stars with F-K main sequence binary companions have been known for decades, but the first orbital periods for such systems were published just recently. Current observations suggest that most have long periods, on the order of years, and that some are or once were hierarchical triple systems. As part of a survey with the Hobby-Eberly Telescope, we have been monitoring the radial velocities of several composite-spectra binaries since 2005 in order to determine their periods, velocities, and eccentricities. Here we present observations and orbital solutions for two of these systems, PG 1449+653 and PG 1701+359. Similar to the other sdB+F/G/K binaries with solved orbits, their periods are long, 909 and 734 days, respectively, and pose a challenge to current binary population synthesis models of hot subdwarf stars. Intrigued by their relatively large systemic velocities, we also present a kinematical analysis of both targets and find that neither is likely a member of the Galactic thin disk.

  9. A Universal Transition in Atmospheric Diffusion for Hot Subdwarfs Near 18,000 K

    Science.gov (United States)

    Brown, T. M.; Taylor, J. M.; Cassisi, S.; Sweigart, A. V.; Bellini, A.; Bedin, L. R.; Salaris, M.; Renzini, A.; Dalessandro, E.

    2017-12-01

    In the color–magnitude diagrams of globular clusters, when the locus of stars on the horizontal branch extends to hot temperatures, discontinuities are observed at colors corresponding to ∼12,000 and ∼18,000 K. The former is the “Grundahl jump” that is associated with the onset of radiative levitation in the atmospheres of hot subdwarfs. The latter is the “Momany jump” that has remained unexplained. Using the Space Telescope Imaging Spectrograph on the Hubble Space Telescope, we have obtained ultraviolet and blue spectroscopy of six hot subdwarfs straddling the Momany jump in the massive globular cluster ω Cen. By comparison to model atmospheres and synthetic spectra, we find that the feature is due primarily to a decrease in atmospheric Fe for stars hotter than the feature, amplified by the temperature dependence of the Fe absorption at these effective temperatures. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program GO-14759.

  10. O-C analysis of the pulsating subdwarf B star PG 1219 + 534

    Science.gov (United States)

    Otani, Tomomi; Stone-Martinez, Alexander; Oswalt, Terry D.; Morello, Claudia; Moss, Adam; Singh, Dana; Sampson, Kenneth; DeAbreu, Caila; Khan, Aliyah; Seepersad, Austin; Shaikh, Mehvesh; Wilson, Linda

    2017-01-01

    PG 1219+534 (KY UMa) is a subdwarf B pulsating star with multiple periodicities between 120 and 175 s. So far, the most promising theory for the origin of subdwarf B (sdB) stars is that they result from binary mass transfer near the helium flash stage of evolution. The observations of PG 1219+534 reported here are part of our program to constrain this evolutionary theory by searching for companions and determining orbital separations around sdB pulsators using the Observed-minus-Calculated (O-C) method. A star's position in space will wobble due to the gravitational forces of any companion or planet. If the star emits a periodic signal like pulsations, its orbital motion around the system's center of mass causes periodic changes in the light pulse arrival times. PG 1219+534 was monitored for 90 hours during 2010-1 and 2016 using the 0.9 m SARA-KP telescope at Kitt Peak National Observatory (KPNO), Arizona, and the 0.8 m Ortega telescope at Florida Institute of Technology in Melbourne. In this poster we present our time-series photometry and an O-C analysis of these data.
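
    The size of the O-C signal expected from a companion follows from the light travel time across the pulsator's orbit around the system barycentre. The short calculation below is an illustrative sketch; the orbital separation and inclination are assumed example values, not results from the record.

        import math

        C_LIGHT_M_S = 2.998e8
        AU_M = 1.496e11

        def oc_amplitude_seconds(a_pulsator_au, inclination_deg):
            """Semi-amplitude of the pulse arrival-time delay: a*sin(i)/c."""
            return a_pulsator_au * AU_M * math.sin(math.radians(inclination_deg)) / C_LIGHT_M_S

        # Assumed example: the pulsator orbits the barycentre with a semi-major
        # axis of 0.5 AU, seen at 60 degrees inclination.
        amp = oc_amplitude_seconds(0.5, 60.0)
        print(f"expected O-C semi-amplitude ~ {amp:.0f} s")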

  11. A Search for Rapidly Pulsating Hot Subdwarf Stars in the GALEX Survey

    Energy Technology Data Exchange (ETDEWEB)

    Boudreaux, Thomas M.; Barlow, Brad N.; Soto, Alan Vasquez [Department of Physics, High Point University, One University Parkway, High Point, NC 27268 (United States); Fleming, Scott W. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Million, Chase [Million Concepts LLC, P.O. Box 119, 141 Mary Street, Lemont, PA 16851 (United States); Reichart, Dan E.; Haislip, Josh B.; Moore, Justin P. [Department of Physics and Astronomy, University of North Carolina, Chapel Hill, NC 27599 (United States); Linder, Tyler R. [Department of Physics, Eastern Illinois University, 600 Lincoln Avenue, Charleston, IL 61920 (United States)

    2017-08-20

    NASA's Galaxy Evolution Explorer (GALEX) provided near- and far-UV observations for approximately 77% of the sky over a 10-year period; however, the data reduction pipeline initially only released single NUV and FUV images to the community. The recently released Python module gPhoton changes this, allowing calibrated time-series aperture photometry to be extracted easily from the raw GALEX data set. Here we use gPhoton to generate light curves for all hot subdwarf B (sdB) stars that were observed by GALEX, with the intention of identifying short-period, p-mode pulsations. We find that the spacecraft's short visit durations, uneven gaps between visits, and dither pattern make the detection of hot subdwarf pulsations difficult. Nonetheless, we detect UV variations in four previously known pulsating targets and report their UV pulsation amplitudes and frequencies. Additionally, we find that several other sdB targets not previously known to vary show promising signals in their periodograms. Using optical follow-up photometry with the Skynet Robotic Telescope Network, we confirm p-mode pulsations in one of these targets, LAMOST J082517.99+113106.3, and report it as the most recent addition to the sdBVr class of variable stars.
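
    Searching unevenly sampled photometry for short-period p modes of this kind is typically done with a Lomb-Scargle periodogram. The snippet below is a generic illustration of that step using astropy with synthetic data and a made-up 150 s period; it is not the authors' gPhoton pipeline.

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(42)

        # Synthetic, unevenly sampled light curve: a 150 s pulsation plus noise.
        t = np.sort(rng.uniform(0.0, 1800.0, 400))          # seconds
        flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / 150.0) + rng.normal(0.0, 0.005, t.size)

        frequency, power = LombScargle(t, flux).autopower(
            minimum_frequency=1.0 / 600.0,   # periods up to 600 s
            maximum_frequency=1.0 / 60.0,    # periods down to 60 s
        )

        best_period = 1.0 / frequency[np.argmax(power)]
        print(f"strongest periodogram peak at ~{best_period:.1f} s")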

  13. Two New Long-period Hot Subdwarf Binaries with Dwarf Companions

    Science.gov (United States)

    Barlow, Brad N.; Liss, Sandra E.; Wade, Richard A.; Green, Elizabeth M.

    2013-07-01

    Hot subdwarf stars with F-K main sequence binary companions have been known for decades, but the first orbital periods for such systems were published just recently. Current observations suggest that most have long periods, on the order of years, and that some are or once were hierarchical triple systems. As part of a survey with the Hobby-Eberly Telescope, we have been monitoring the radial velocities of several composite-spectra binaries since 2005 in order to determine their periods, velocities, and eccentricities. Here we present observations and orbital solutions for two of these systems, PG 1449+653 and PG 1701+359. Similar to the other sdB+F/G/K binaries with solved orbits, their periods are long, 909 and 734 days, respectively, and pose a challenge to current binary population synthesis models of hot subdwarf stars. Intrigued by their relatively large systemic velocities, we also present a kinematical analysis of both targets and find that neither is likely a member of the Galactic thin disk. Based on observations obtained with the Hobby-Eberly Telescope, which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen.

  14. High-Speed Ultracam Colorimetry of the Subdwarf B Star SDSS J171722.08+58055.8

    NARCIS (Netherlands)

    Aerts, C.C.; Jeffery, C.S.; Dhillon, V.S.; Marsh, T.R.; Groot, P.J.

    2006-01-01

    We present high-speed multicolour photometry of the faint sub-dwarf B star SDSS J171722.08+58055.8 (mB=16.7mag), which was recently discovered to be pulsating. The data were obtained during two consecutive nights in 2004 August using the three-channel photometer Ultracam attached to the

  15. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  16. PARALLAXES AND PROPER MOTIONS OF ULTRACOOL BROWN DWARFS OF SPECTRAL TYPES Y AND LATE T

    International Nuclear Information System (INIS)

    Marsh, Kenneth A.; Kirkpatrick, J. Davy; Gelino, Christopher R.; Griffith, Roger L.; Wright, Edward L.; Cushing, Michael C.; Skrutskie, Michael F.; Eisenhardt, Peter R.

    2013-01-01

    We present astrometric measurements of 11 nearby ultracool brown dwarfs of spectral types Y and late T, based on imaging observations from a variety of space-based and ground-based telescopes. These measurements have been used to estimate relative parallaxes and proper motions via maximum likelihood fitting of geometric model curves. To compensate for the modest statistical significance (∼ tan , assumed similar to that implied by previous observations of T dwarfs. Our estimated distances are therefore somewhat dependent on that assumption. Nevertheless, the results have yielded distances for five of our eight Y dwarfs and all three T dwarfs. Estimated distances in all cases are ≳3 pc. In addition, we have obtained significant estimates of V_tan for two of the Y dwarfs; both are –1 , consistent with membership in the thin disk population. Comparison of absolute magnitudes with model predictions as a function of color shows that the Y dwarfs are significantly redder in J – H than predicted by a cloud-free model.
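
    The parallax-plus-proper-motion fit described above amounts to fitting a geometric model to the measured positions at each epoch. The sketch below illustrates the idea in one coordinate with a simplified linear least-squares toy (the epochs, parallax factors and noise are made up, and the authors' actual analysis uses maximum-likelihood fitting of two-dimensional model curves).

        import numpy as np

        # Toy 1-D astrometric model: position(t) = pos0 + mu * t + parallax * P(t),
        # where P(t) is the (precomputed) parallax factor at each epoch.
        t = np.array([0.0, 0.3, 0.6, 1.1, 1.6, 2.0])                   # years (assumed epochs)
        parallax_factor = np.array([0.9, 0.1, -0.8, 0.7, -0.2, -0.9])  # assumed values

        true_pos0, true_mu, true_plx = 10.0, 250.0, 80.0               # mas, mas/yr, mas
        positions = true_pos0 + true_mu * t + true_plx * parallax_factor
        positions += np.random.default_rng(1).normal(0.0, 5.0, t.size) # 5 mas noise

        # Linear least-squares solution for (pos0, mu, parallax).
        design = np.column_stack([np.ones_like(t), t, parallax_factor])
        pos0, mu, plx = np.linalg.lstsq(design, positions, rcond=None)[0]
        print(f"fitted mu = {mu:.1f} mas/yr, parallax = {plx:.1f} mas -> d ~ {1e3 / plx:.1f} pc")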

  17. The Ultracool Typing Kit - An Open-Source, Qualitative Spectral Typing GUI for L Dwarfs

    Science.gov (United States)

    Schwab, Ellianna; Cruz, Kelle; Núñez, Alejandro; Burgasser, Adam J.; Rice, Emily; Reid, Neill; Faherty, Jacqueline K.; BDNYC

    2018-01-01

    The Ultracool Typing Kit (UTK) is an open-source graphical user interface for classifying the NIR spectral types of L dwarfs, including field and low-gravity dwarfs spanning L0-L9. The user is able to input an NIR spectrum and qualitatively compare the input spectrum to a full suite of spectral templates, including low-gravity beta and gamma templates. The user can choose to view the input spectrum as both a band-by-band comparison with the templates and a full bandwidth comparison with NIR spectral standards. Once an optimal qualitative comparison is selected, the user can save their spectral type selection both graphically and to a database. Using UTK to classify 78 previously typed L dwarfs, we show that a band-by-band classification method more accurately agrees with optical spectral typing systems than previous L dwarf NIR classification schemes. UTK is written in Python, released on Zenodo under a BSD 3-Clause license and publicly available on the BDNYC GitHub page.

  18. Discovery of Temperate Earth-Sized Planets Transiting a Nearby Ultracool Dwarf Star

    Science.gov (United States)

    Jehin, Emmanuel; Gillon, Michael; Lederer, Susan M.; Delrez, Laetitia; De Wit, Julien; Burdanov, Artem; Van Grootel, Valerie; Burgasser, Adam; Triaud, Amaury; Demory, Brice-Olivier

    2016-01-01

    We report the discovery of three short-period Earth-sized planets transiting a nearby ultracool dwarf star using data collected by the Liège TRAPPIST telescope, located at La Silla (Chile). TRAPPIST-1 is an isolated M8.0±0.5-type dwarf star at a distance of 12.0±0.4 parsecs as measured by its trigonometric parallax, with an age constrained to be > 500 Myr, and with a luminosity, mass, and radius of 0.05%, 8% and 11.5% those of the Sun, respectively. The small size of the host star, only slightly larger than Jupiter, translates into Earth-like radii for the three discovered planets, as deduced from their transit depths. The inner two planets receive four and two times the irradiation of Earth, respectively, placing them close to the inner edge of the habitable zone of the star. Several orbits remain possible for the third planet based on our current data. The infrared brightness of the host star combined with its Jupiter-like size offer the possibility of thoroughly characterizing the components of this nearby planetary system.

  19. THE SECOND ARECIBO SEARCH FOR 5 GHz RADIO FLARES FROM ULTRACOOL DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Route, Matthew; Wolszczan, Alexander, E-mail: alex@astro.psu.edu, E-mail: mroute@purdue.edu [Department of Astronomy and Astrophysics, the Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States)

    2016-10-20

    We describe our second installment of the 4.75 GHz survey of ultracool dwarfs (UCDs) conducted with the Arecibo radio telescope, which has observed 27 such objects and resulted in the detection of sporadic flaring from the T6 dwarf, WISEPC J112254.73+255021.5. We also present follow-up observations of the first radio-emitting T dwarf, 2MASS J10475385+2124234, a tentatively identified radio-emitting L1 dwarf, 2MASS J1439284+192915, and the known radio-flaring source, 2MASS J13142039+132011 AB. Our new data indicate that 2MASS J1439284+192915 is not a radio-flaring source. The overall detection rate of our unbiased survey for radio-flaring UCDs is ∼5% for new sources, with a detection rate for each spectral class of ∼5%–10%. Evidently, the radio luminosity of UCDs does not decline monotonically with spectral type from M7 dwarfs to giant planets, in contradiction with theories of magnetic field generation and the internal structure of these objects. Along with other, recently published results, our data exemplify the unique value of using radio surveys to reveal and study the properties of substellar magnetic activity.

  20. THE FIRST ULTRA-COOL BROWN DWARF DISCOVERED BY THE WIDE-FIELD INFRARED SURVEY EXPLORER

    International Nuclear Information System (INIS)

    Mainzer, A.; Cushing, Michael C.; Eisenhardt, P.; Skrutskie, M.; Beaton, R.; Gelino, C. R.; Kirkpatrick, J. Davy; Jarrett, T.; Masci, F.; Marsh, K.; Padgett, D.; Marley, Mark S.; Saumon, D.; Wright, E.; McLean, I.; Dietrich, M.; Garnavich, P.; Rueff, K.; Kuhn, O.; Leisawitz, D.

    2011-01-01

    We report the discovery of the first new ultra-cool brown dwarf (BDs) found with the Wide-field Infrared Survey Explorer (WISE). The object's preliminary designation is WISEPC J045853.90+643451.9. Follow-up spectroscopy with the LUCIFER instrument on the Large Binocular Telescope indicates that it is a very late-type T dwarf with a spectral type approximately equal to T9. Fits to an IRTF/SpeX 0.8-2.5 μm spectrum to the model atmospheres of Marley and Saumon indicate an effective temperature of approximately 600 K as well as the presence of vertical mixing in its atmosphere. The new BD is easily detected by WISE, with a signal-to-noise ratio of ∼36 at 4.6 μm. Current estimates place it at a distance of 6-10 pc. This object represents the first in what will likely be hundreds of nearby BDs found by WISE that will be suitable for follow-up observations, including those with the James Webb Space Telescope. One of the two primary scientific goals of the WISE mission is to find the coolest, closest stars to our Sun; the discovery of this new BD proves that WISE is capable of fulfilling this objective.

  1. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  2. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    Two articles give an overview of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of different competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors have contributed to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  3. K2 Campaign 5 observations of pulsating subdwarf B stars: binaries and super-Nyquist frequencies

    Science.gov (United States)

    Reed, M. D.; Armbrecht, E. L.; Telting, J. H.; Baran, A. S.; Østensen, R. H.; Blay, Pere; Kvammen, A.; Kuutma, Teet; Pursimo, T.; Ketzer, L.; Jeffery, C. S.

    2018-03-01

    We report the discovery of three pulsating subdwarf B stars in binary systems observed with the Kepler space telescope during Campaign 5 of K2. EPIC 211696659 (SDSS J083603.98+155216.4) is a g-mode pulsator with a white dwarf companion and a binary period of 3.16 d. EPICs 211823779 (SDSS J082003.35+173914.2) and 211938328 (LB 378) are both p-mode pulsators with main-sequence F companions. The orbit of EPIC 211938328 is long (635 ± 146 d) while we cannot constrain that of EPIC 211823779. The p modes are near the Nyquist frequency and so we investigate ways to discriminate super- from sub-Nyquist frequencies. We search for rotationally induced frequency multiplets and all three stars appear to be slow rotators with EPIC 211696659 subsynchronous to its orbit.
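
    Frequencies above the Nyquist limit are reflected back into the observable range, so a detected peak can correspond to either the true frequency or its alias. The snippet below is a minimal illustration of that folding relation; the cadence and test period are assumed example values, not measurements from the record.

        def nyquist_alias(true_frequency_uhz, cadence_s):
            """Apparent (folded) frequency of a signal sampled at the given cadence."""
            f_nyquist = 0.5e6 / cadence_s          # Nyquist frequency in microhertz
            folded = true_frequency_uhz % (2.0 * f_nyquist)
            return folded if folded <= f_nyquist else 2.0 * f_nyquist - folded

        # Kepler short cadence is ~58.85 s; a genuine p mode at 115 s (assumed example)
        # lies just above the Nyquist limit and is reflected just below it.
        cadence = 58.85
        print(f"Nyquist  ~ {0.5e6 / cadence:.0f} uHz")
        print(f"true     ~ {1e6 / 115.0:.0f} uHz")
        print(f"apparent ~ {nyquist_alias(1e6 / 115.0, cadence):.0f} uHz")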

  4. GASEOUS MEAN OPACITIES FOR GIANT PLANET AND ULTRACOOL DWARF ATMOSPHERES OVER A RANGE OF METALLICITIES AND TEMPERATURES

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Richard S. [SETI Institute, Mountain View, CA (United States); Lustig-Yaeger, Jacob [Department of Physics, University of California, Santa Cruz, CA 95064 (United States); Fortney, Jonathan J. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Lupu, Roxana E.; Marley, Mark S. [Space Science and Astrobiology Division, NASA Ames Research Center, Moffett Field, CA (United States); Lodders, Katharina, E-mail: Richard.S.Freedman@nasa.gov [Planetary Chemistry Laboratory, Washington University, St. Louis, MO (United States)

    2014-10-01

    We present new calculations of Rosseland and Planck gaseous mean opacities relevant to the atmospheres of giant planets and ultracool dwarfs. Such calculations are used in modeling the atmospheres, interiors, formation, and evolution of these objects. Our calculations are an expansion of those presented in Freedman et al. to include lower pressures, finer temperature resolution, and also the higher metallicities most relevant for giant planet atmospheres. Calculations span 1 μbar to 300 bar, and 75-4000 K, in a nearly square grid. Opacities at metallicities from solar to 50 times solar abundances are calculated. We also provide an analytic fit to the Rosseland mean opacities over the grid in pressure, temperature, and metallicity. In addition to computing mean opacities at these local temperatures, we also calculate them with weighting functions up to 7000 K, to simulate the mean opacities for incident stellar intensities, rather than locally thermally emitted intensities. The chemical equilibrium calculations account for the settling of condensates in a gravitational field and are applicable to cloud-free giant planet and ultracool dwarf atmospheres, but not circumstellar disks. We provide our extensive opacity tables for public use.
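
    For reference, the two mean opacities named above are weighted averages of the monochromatic opacity κ_ν over the Planck function. The expressions below are the standard textbook definitions, reproduced here for clarity rather than taken from the record itself.

        \frac{1}{\kappa_{\mathrm{R}}} =
            \frac{\int_0^{\infty} \kappa_{\nu}^{-1}\,
                  \frac{\partial B_{\nu}(T)}{\partial T}\, d\nu}
                 {\int_0^{\infty} \frac{\partial B_{\nu}(T)}{\partial T}\, d\nu},
        \qquad
        \kappa_{\mathrm{P}} =
            \frac{\int_0^{\infty} \kappa_{\nu}\, B_{\nu}(T)\, d\nu}
                 {\int_0^{\infty} B_{\nu}(T)\, d\nu}.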

  5. A likely candidate of type Ia supernova progenitors: the X-ray pulsating companion of the hot subdwarf HD 49798

    International Nuclear Information System (INIS)

    Wang Bo; Han Zhanwen

    2010-01-01

    HD 49798 is a hydrogen depleted subdwarf O6 star and has an X-ray pulsating companion (RX J0648.0-4418). The X-ray pulsating companion is a massive white dwarf. Employing Eggleton's stellar evolution code with the optically thick wind assumption, we find that the hot subdwarf HD 49798 and its X-ray pulsating companion could produce a type Ia supernova (SN Ia) in future evolution. This implies that the binary system is a likely candidate of an SN Ia progenitor. We also discuss the possibilities of some other WD + He star systems (e.g. V445 Pup and KPD 1930+2752) for producing SNe Ia. (research papers)

  6. A RADIAL VELOCITY STUDY OF COMPOSITE-SPECTRA HOT SUBDWARF STARS WITH THE HOBBY-EBERLY TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Barlow, Brad N.; Wade, Richard A.; Liss, Sandra E. [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Ostensen, Roy H.; Van Winckel, Hans [Instituut voor Sterrenkunde, K.U. Leuven, B-3001 Leuven (Belgium)

    2012-10-10

    Many hot subdwarf stars show composite spectral energy distributions indicative of cool main-sequence (MS) companions. Binary population synthesis (BPS) models demonstrate such systems can be formed via Roche lobe overflow or common envelope evolution but disagree on whether the resulting orbital periods will be long (years) or short (days). Few studies have been carried out to assess the orbital parameters of these spectroscopic composite binaries; current observations suggest the periods are long. To help address this problem, we selected 15 moderately bright (V ≈ 13) hot subdwarfs with F-K dwarf companions and monitored their radial velocities from 2005 January to 2008 July using the bench-mounted Medium Resolution Spectrograph on the Hobby-Eberly Telescope (HET). Here we describe the details of our observing, reduction, and analysis techniques, and present preliminary results for all targets. By combining the HET data with recent observations from the Mercator Telescope, we are able to calculate precise orbital solutions for three systems using more than six years of observations. We also present an up-to-date period histogram for all known hot subdwarf binaries, which suggests those with F-K MS companions tend to have orbital periods on the order of several years. Such long periods challenge the predictions of conventional BPS models, although a larger sample is needed for a thorough assessment of the models' predictive success. Lastly, one of our targets has an eccentric orbit, implying some composite-spectrum systems might have formerly been hierarchical triple systems, in which the inner binary merged to create the hot subdwarf.

  7. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  8. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  9. K2 Ultracool Dwarfs Survey. II. The White Light Flare Rate of Young Brown Dwarfs

    Science.gov (United States)

    Gizis, John E.; Paudel, Rishi R.; Mullan, Dermott; Schmidt, Sarah J.; Burgasser, Adam J.; Williams, Peter K. G.

    2017-08-01

    We use Kepler K2 Campaign 4 short-cadence (one-minute) photometry to measure white light flares in the young, moving group brown dwarfs 2MASS J03350208+2342356 (2M0335+23) and 2MASS J03552337+1133437 (2M0355+11), and report on long-cadence (thirty-minute) photometry of a superflare in the Pleiades M8 brown dwarf CFHT-PL-17. The rotation period (5.24 hr) and projected rotational velocity (45 km s⁻¹) confirm 2M0335+23 is inflated (R ≥ 0.20 R⊙) as predicted for a 0.06 M⊙, 24 Myr old brown dwarf β Pic moving group member. We detect 22 white light flares on 2M0335+23. The flare frequency distribution follows a power-law distribution with slope −α = −1.8 ± 0.2 over the range 10³¹ to 10³³ erg. This slope is similar to that observed in the Sun and warmer flare stars, and is consistent with lower-energy flares in previous work on M6-M8 very-low-mass stars; taking the two data sets together, the flare frequency distribution for ultracool dwarfs is a power law over 4.3 orders of magnitude. The superflare (2.6 × 10³⁴ erg) on CFHT-PL-17 shows higher-energy flares are possible. We detect no flares down to a limit of 2 × 10³⁰ erg in the nearby L5γ AB Dor moving group brown dwarf 2M0355+11, consistent with the view that fast magnetic reconnection is suppressed in cool atmospheres. We discuss two multi-peaked flares observed in 2M0335+23, and argue that these complex flares can be understood as sympathetic flares, in which fast-mode magnetohydrodynamic waves similar to extreme-ultraviolet waves in the Sun trigger magnetic reconnection in different active regions.
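
    The flare frequency distribution (FFD) analysis summarised above is, at its core, a power-law fit to flare energies. The sketch below is a minimal illustration of how such a slope can be estimated, assuming a list of flare energies and a campaign baseline that are placeholders rather than the published measurements; whether a quoted slope refers to the differential or the cumulative distribution is a convention that should be checked against the paper itself.

        import numpy as np

        # Hypothetical flare energies in erg (placeholders, not the published measurements).
        rng = np.random.default_rng(0)
        flare_energies_erg = 10.0 ** rng.uniform(31.0, 33.0, size=22)

        # Cumulative flare frequency distribution: rate of flares with energy >= E.
        baseline_days = 70.0  # rough K2 campaign length, for illustration only
        energies = np.sort(flare_energies_erg)[::-1]
        cumulative_rate = np.arange(1, energies.size + 1) / baseline_days  # flares/day above E

        # Straight-line fit in log-log space; the fitted slope characterises the power law.
        slope, intercept = np.polyfit(np.log10(energies), np.log10(cumulative_rate), 1)
        print(f"cumulative FFD slope ~ {slope:.2f}")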

  10. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of the published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  12. Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores

    Science.gov (United States)

    Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David

    2017-10-01

    Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches of convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately no consistent algorithm of convective mixing exists to solve the problem, introducing uncertainties to the estimates of stellar ages. Three dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In search for a consistent prescription of convection in one dimensional stellar evolution models, guidance from three dimensional simulations and asteroseismological results is indispensable.

  13. THE DISCOVERY OF DIFFERENTIAL RADIAL ROTATION IN THE PULSATING SUBDWARF B STAR KIC 3527751

    Energy Technology Data Exchange (ETDEWEB)

    Foster, H. M.; Reed, M. D. [Department of Physics, Astronomy, and Materials Science, Missouri State University, Springfield, MO 65897 (United States); Telting, J. H. [Nordic Optical Telescope, Rambla José Ana Fernández Pérez 7, E-38711 Breña Baja (Spain); Østensen, R. H. [Instituut voor Sterrenkunde, KU Leuven, Celestijnenlaan 200 D, B-3001 Leuven (Belgium); Baran, A. S. [Uniwersytet Pedagogiczny, Obserwatorium na Suhorze, ul. Podchorażych 2, 30-084 Kraków (Poland)

    2015-06-01

    We analyze 3 yr of nearly continuous Kepler spacecraft short cadence observations of the pulsating subdwarf B (sdB) star KIC 3527751. We detect a total of 251 periodicities, most in the g-mode domain, but some where p-modes occur, confirming that KIC 3527751 is a hybrid pulsator. We apply seismic tools to the periodicities to characterize the properties of KIC 3527751. Techniques to identify modes include asymptotic period spacing relationships, frequency multiplets, and the separation of multiplet splittings. These techniques allow for 189 (75%) of the 251 periods to be associated with pulsation modes. Included in these are three sets of ℓ = 4 multiplets and possibly an ℓ = 9 multiplet. Period spacing sequences indicate ℓ = 1 and 2 overtone spacings of 266.4 ± 0.2 and 153.2 ± 0.2 s, respectively. We also calculate reduced periods, from which we find evidence of trapped pulsations. Such mode trappings can be used to constrain the core/atmosphere transition layers. Interestingly, frequency multiplets in the g-mode region, which sample deep into the star, indicate a rotation period of 42.6 ± 3.4 days while p-mode multiplets, which sample the outer envelope, indicate a rotation period of 15.3 ± 0.7 days. We interpret this as differential rotation in the radial direction with the core rotating more slowly. This is the first example of differential rotation for a sdB star.
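
    Two of the seismic tools used above, asymptotic period spacings and rotational multiplet splittings, reduce to short formulae. The sketch below checks that the quoted ℓ = 1 and ℓ = 2 spacings are close to the asymptotic ratio √3 and shows how a g-mode frequency splitting maps to a rotation period; the splitting value used is a hypothetical placeholder, and the first-order relation shown is a simplification, not the authors' full analysis.

        import math

        # Quoted overtone period spacings (seconds) from the abstract above.
        dP_l1, dP_l2 = 266.4, 153.2

        # Asymptotic g-mode scaling: spacings for degree l go as 1/sqrt(l(l+1)),
        # so the l=1 to l=2 ratio should be close to sqrt(3) ~ 1.732.
        print("observed ratio  :", dP_l1 / dP_l2)
        print("asymptotic ratio:", math.sqrt((2 * 3) / (1 * 2)))

        def rotation_period_days(splitting_uHz, ell):
            # First-order rotational splitting: delta_nu = (1 - C_nl) / P_rot,
            # with the asymptotic g-mode Ledoux constant C_nl ~ 1 / (l(l+1)).
            ledoux = 1.0 / (ell * (ell + 1))
            p_rot_seconds = (1.0 - ledoux) / (splitting_uHz * 1e-6)
            return p_rot_seconds / 86400.0

        # Hypothetical dipole g-mode splitting (microhertz), not a value from the paper.
        print("implied rotation period [d]:", rotation_period_days(0.14, ell=1))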

  14. An Analysis of Pulsating Subdwarf B Star EPIC 203948264 Observed During Campaign 2 of K2

    Directory of Open Access Journals (Sweden)

    Ketzer Laura

    2017-01-01

    We present a preliminary analysis of the newly-discovered pulsating subdwarf B (sdB) star EPIC 203948264. The target was observed for 83 days in short cadence mode during Campaign 2 of K2, the two-gyro mission of the Kepler space telescope. A time-series analysis of the data revealed 22 independent pulsation frequencies in the g-mode region ranging from 100 to 600 μHz (0.5 to 2.8 hours). The main method we use to identify pulsation modes is asymptotic period spacing, and we were able to assign all but one of the pulsations to either ℓ = 1 or ℓ = 2. The average period spacings of both sequences are 261.34 ± 0.78 s and 151.18 ± 0.34 s, respectively. The pulsation amplitudes range from 0.77 ppt down to the detection limit at 0.212 ppt, and are not stable over the duration of the campaign. We detected one possible low-amplitude, ℓ = 2, rotationally split multiplet, which allowed us to constrain the rotation period to 46 days or longer. This makes EPIC 203948264 another slowly rotating sdB star.

  15. VizieR Online Data Catalog: Subdwarf A stars vs ELM WDs radial velocities (Brown+, 2017)

    Science.gov (United States)

    Brown, W. R.; Kilic, M.; Gianninas, A.

    2017-11-01

    Our sample is comprised of 11 subdwarf A-type (sdA) stars suspected of being eclipsing binaries (S. O. Kepler 2015, private communication) and 11 previously unpublished extremely low mass (ELM) white dwarf (WD) candidates that have sdA-like temperatures summarized in Table 1. We obtain time-series spectroscopy for all 22 objects and time-series optical photometry for 21 objects. We also obtain JHK infrared photometry for 6 objects. We obtain time-series spectroscopy for 20 of the 22 objects with the 6.5m MMT telescope. We obtain spectra for the two brightest objects with the 1.5m Tillinghast telescope at Fred Lawrence Whipple Observatory. We obtain additional spectra for six objects with the 4m Mayall telescope at Kitt Peak National Observatory. The spectra were mostly acquired in observing runs between 2014 December and 2016 December. We search the Catalina Surveys Data Release 2 (Drake+ 2009, J/ApJ/696/870) and find time-series V-band photometry for 21 of the 22 objects. Six objects show significant eclipses. (3 data files).

  16. The Solar Neighborhood. XLII. Parallax Results from the CTIOPI 0.9 m Program—Identifying New Nearby Subdwarfs Using Tangential Velocities and Locations on the H–R Diagram

    Science.gov (United States)

    Jao, Wei-Chun; Henry, Todd J.; Winters, Jennifer G.; Subasavage, John P.; Riedel, Adric R.; Silverstein, Michele L.; Ianna, Philip A.

    2017-11-01

    Parallaxes, proper motions, and optical photometry are presented for 51 systems consisting of 37 cool subdwarf and 14 additional high proper motion systems. Thirty-seven systems have parallaxes reported for the first time, 15 of which have proper motions of at least 1″ yr⁻¹. The sample includes 22 newly identified cool subdwarfs within 100 pc, of which three are within 25 pc, and an additional five subdwarfs from 100 to 160 pc. Two systems—LSR 1610-0040 AB and LHS 440 AB—are close binaries exhibiting clear astrometric perturbations that will ultimately provide important masses for cool subdwarfs. We use the accurate parallaxes and proper motions provided here, combined with additional data from our program and others, to determine that effectively all nearby stars with tangential velocities greater than 200 km s⁻¹ are subdwarfs. We compare a sample of 167 confirmed cool subdwarfs to nearby main sequence dwarfs and Pleiades members on an observational Hertzsprung–Russell diagram using M_V versus (V − K_s) to map trends of age and metallicity. We find that subdwarfs are clearly separated for spectral types K5–M5, indicating that the low metallicities of subdwarfs set them apart in the H–R diagram for (V − K_s) = 3–6. We then apply the tangential velocity cutoff and the subdwarf region of the H–R diagram to stars with parallaxes from Gaia Data Release 1 and the MEarth Project to identify a total of 29 new nearby subdwarf candidates that fall clearly below the main sequence.
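
    The tangential-velocity cutoff used above follows from a proper motion and a parallax through v_tan = 4.74 μ/π km s⁻¹, with μ in arcsec yr⁻¹ and π in arcsec. The snippet below is a minimal sketch of that conversion; the star in the example is invented, not one from the paper.

        def tangential_velocity_kms(pm_arcsec_per_yr, parallax_arcsec):
            # v_tan = 4.74 * mu / parallax, with mu in arcsec/yr and parallax in arcsec.
            return 4.74 * pm_arcsec_per_yr / parallax_arcsec

        # Hypothetical star: proper motion 1.2"/yr, parallax 25 mas (i.e. 40 pc).
        v_tan = tangential_velocity_kms(1.2, 0.025)
        print(f"v_tan = {v_tan:.0f} km/s -> {'subdwarf candidate' if v_tan > 200 else 'below the cutoff'}")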

  17. Finding ultracool brown dwarfs with MegaCam on CFHT: method and first results

    Science.gov (United States)

    Delorme, P.; Willott, C. J.; Forveille, T.; Delfosse, X.; Reylé, C.; Bertin, E.; Albert, L.; Artigau, E.; Robin, A. C.; Allard, F.; Doyon, R.; Hill, G. J.

    2008-06-01

    Aims: We present the first results of a wide field survey for cool brown dwarfs with the MegaCam camera on the CFHT telescope, the Canada-France Brown Dwarf Survey, hereafter CFBDS. Our objectives are to find ultracool brown dwarfs and to constrain the field-brown dwarf mass function thanks to a larger sample of L and T dwarfs. Methods: We identify candidates in CFHT/MegaCam i' and z' images using optimised psf-fitting within Source Extractor, and follow them up with pointed near-infrared imaging on several telescopes. Results: We have so far analysed over 350 square degrees and found 770 brown dwarf candidates brighter than z'_AB=22.5. We currently have J-band photometry for 220 of these candidates, which confirms 37% as potential L or T dwarfs. Some are among the reddest and farthest brown dwarfs currently known, including an independent identification of the recently published ULAS J003402.77-005206.7 and the discovery of a second brown dwarf later than T8, CFBDS J005910.83-011401.3. Infrared spectra of three T dwarf candidates confirm their nature, and validate the selection process. Conclusions: The completed survey will discover ~100 T dwarfs and ~500 L dwarfs or M dwarfs later than M8, approximately doubling the number of currently known brown dwarfs. The resulting sample will have a very well-defined selection function, and will therefore produce a very clean luminosity function. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. Based on observations made

  18. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  19. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is required to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  20. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  1. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  2. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  3. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  4. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit of performance, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been correlated with an industry benchmark program, namely SPECmark. (author)

  5. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    Science.gov (United States)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 K ≤ Teff ≤ 4000 K, 3.5 ≤ log g ≤ 5.5, and −4.0 ≤ [M/H] ≤ +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  6. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  10. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  11. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
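
    The first-tier screen described above is a simple comparison of measured concentrations against benchmark values. The snippet below is a schematic sketch of that logic under invented numbers; the chemical names and values are placeholders, not data from the report.

        # Hypothetical NOAEL-based benchmarks and measured concentrations (e.g. mg/L in water).
        benchmarks = {"chemical_A": 0.05, "chemical_B": 1.2, "chemical_C": 0.3}
        measured = {"chemical_A": 0.02, "chemical_B": 3.4, "chemical_C": 0.3}

        # Tier 1 screen: retain a chemical as a contaminant of potential concern (COPC)
        # only if its measured concentration exceeds the benchmark.
        copcs = [name for name, conc in measured.items()
                 if conc > benchmarks.get(name, float("inf"))]
        print("Retained as COPCs:", copcs)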

  12. First Kepler results on compact pulsators – VIII. Mode identifications via period spacings in g-mode pulsating subdwarf B stars

    DEFF Research Database (Denmark)

    Reed, M.D.; Baran, A.; Quint, A.C.

    2011-01-01

    We investigate the possibility of nearly equally spaced periods in 13 hot subdwarf B (sdB) stars observed with the Kepler spacecraft and one observed with CoRoT. Asymptotic limits for gravity (g-)mode pulsations provide relationships between equal-period spacings of modes with differing degrees ℓ...

  13. Hot subdwarf stars in close-up view. I. Rotational properties of subdwarf B stars in close binary systems and nature of their unseen companions

    Science.gov (United States)

    Geier, S.; Heber, U.; Podsiadlowski, Ph.; Edelmann, H.; Napiwotzki, R.; Kupfer, T.; Müller, S.

    2010-09-01

    The origin of hot subdwarf B stars (sdBs) is still unclear. About half of the known sdBs are in close binary systems for which common envelope ejection is the most likely formation channel. Little is known about this dynamic phase of binary evolution. Since most of the known sdB systems are single-lined spectroscopic binaries, it is difficult to derive masses and unravel the companions' nature, which is the aim of this paper. Due to the tidal influence of the companion in close binary systems, the rotation of the primary becomes synchronised to its orbital motion. In this case it is possible to constrain the mass of the companion if the primary mass, its projected rotational velocity, and its surface gravity are known. For the first time we measured the projected rotational velocities of a large sdB binary sample from high-resolution spectra. We analysed a sample of 51 sdB stars in close binaries, 40 of which have known orbital parameters, comprising half of all such systems known today. Synchronisation in sdB binaries is discussed both from the theoretical and the observational point of view. The masses and the nature of the unseen companions could be constrained in 31 cases. We found orbital synchronisation most likely to be established in binaries with orbital periods shorter than 1.2 d. Only in five cases was it impossible to decide whether the sdB's companion is a white dwarf or an M dwarf. The companions to seven sdBs could be clearly identified as late M stars. One binary may have a brown dwarf companion. The unseen companions of nine sdBs are white dwarfs with typical masses. The mass of one white dwarf companion is very low. In eight cases (including the well-known system KPD 1930+2752) the companion mass exceeds 0.9 M⊙, four of which even exceed the Chandrasekhar limit, indicating that they may be neutron stars. Even stellar-mass black holes are possible for the most massive companions. The distribution of the inclinations of the systems with low
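
    The chain of reasoning sketched above can be written down compactly. Assuming tidal synchronisation, the measured v sin i fixes the orbital inclination once the sdB radius (from log g and an assumed sdB mass) and the orbital period are known, and the inclination then turns the spectroscopic mass function into a companion mass. The snippet below is a rough illustration of that logic with invented numbers and SciPy's root finder; it is not the authors' method in detail, which also treats measurement errors and the validity of the synchronisation assumption itself.

        import math
        from scipy.optimize import brentq

        G, M_SUN, R_SUN = 6.674e-11, 1.989e30, 6.957e8

        # Hypothetical system (illustrative values only, not a star from the paper):
        P = 0.25 * 86400.0       # orbital period [s]
        K = 60e3                 # sdB radial-velocity semi-amplitude [m/s]
        vsini = 30e3             # projected rotational velocity [m/s]
        logg = 5.5               # sdB surface gravity [cgs]
        M1 = 0.47 * M_SUN        # canonical sdB mass

        # Radius from surface gravity, then inclination assuming tidally synchronised rotation.
        R1 = math.sqrt(G * M1 / (10 ** logg * 1e-2))   # convert log g (cgs) to m/s^2
        sin_i = vsini / (2.0 * math.pi * R1 / P)

        # Spectroscopic mass function f(M) = K^3 P / (2 pi G); solve for the companion mass M2.
        f_m = K ** 3 * P / (2.0 * math.pi * G)
        M2 = brentq(lambda m2: (m2 * sin_i) ** 3 / (M1 + m2) ** 2 - f_m,
                    1e-3 * M_SUN, 10.0 * M_SUN)
        print(f"R_sdB = {R1 / R_SUN:.2f} R_sun, sin i = {sin_i:.2f}, M2 = {M2 / M_SUN:.2f} M_sun")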

  14. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  15. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  16. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  17. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  18. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  19. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  20. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  1. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  2. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  3. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  4. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  5. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  6. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  7. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  8. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  9. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  10. First Kepler results on compact pulsators - V. Slowly pulsating subdwarf B stars in short-period binaries

    DEFF Research Database (Denmark)

    Kawaler, Stephen D.; Reed, Michael D.; Østensen, Roy H.

    2010-01-01

    The survey phase of the Kepler Mission includes a number of hot subdwarf B (sdB) stars to search for non-radial pulsations. We present our analysis of two sdB stars that are found to be g-mode pulsators of the V1093 Her class. These two stars also display the distinct irradiation effect typical of sdB stars with a close M-dwarf companion with orbital periods of less than half a day. Because the orbital period is so short, the stars should be in synchronous rotation, and if so, the rotation period should imprint itself on the multiplet structure of the pulsations. However, we do not find clear evidence for such rotational splitting. Though the stars do show some frequency spacings that are consistent with synchronous rotation, they also display multiplets with splittings that are much smaller. Longer-duration time series photometry will be needed to determine if those small splittings

  11. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
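
    The solver that HPCG times, a conjugate gradient iteration preconditioned with a symmetric Gauss-Seidel sweep, can be sketched in a few lines. The toy below uses a small dense 1-D Laplacian purely to show the structure of the algorithm; the actual HPCG reference code operates on a sparse 27-point stencil with MPI/OpenMP parallelism and prescribed data layouts, none of which is reproduced here.

        import numpy as np

        def sym_gauss_seidel_apply(A, r):
            # Apply one symmetric Gauss-Seidel sweep as a preconditioner: solve M z ~= r,
            # where M = (D + L) D^{-1} (D + U).
            lower = np.tril(A)                          # D + L
            upper = np.triu(A)                          # D + U
            y = np.linalg.solve(lower, r)               # forward sweep
            return np.linalg.solve(upper, np.diag(A) * y)  # backward sweep

        def pcg(A, b, tol=1e-8, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = sym_gauss_seidel_apply(A, r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = sym_gauss_seidel_apply(A, r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small SPD test problem (1-D Laplacian), standing in for HPCG's 3-D stencil.
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = pcg(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))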

  12. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  13. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  14. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to settle improvement processes by comparing the activities to quality standards. The proposed methodology is illustrated by benchmark business cases performed inside medical plants on some items like nosocomial diseases or organization of surgery facilities. Moreover, the authors have built a specific graphic tool, enhanced with balance score numbers and mappings, so that the comparison between different anesthesia-reanimation services, which are willing to start an improvement program, is easy and relevant. This ready-made application is even more accurate as far as detailed tariffs of activities are implemented.

  15. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP™ code and the associated neutron cross-section libraries are given. They confirm the idea for the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA Project: International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)

  16. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's or company's efficiency and effectiveness, and to know the success factors and competitiveness determinants, consequently determines which performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe and then propose a method to forecast and benchmark performance.

  17. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  18. The ultracool-field dwarf luminosity-function and space density from the Canada-France Brown Dwarf Survey

    Science.gov (United States)

    Reylé, C.; Delorme, P.; Willott, C. J.; Albert, L.; Delfosse, X.; Forveille, T.; Artigau, E.; Malo, L.; Hill, G. J.; Doyon, R.

    2010-11-01

    Context. Thanks to recent and ongoing large scale surveys, hundreds of brown dwarfs have been discovered in the last decade. The Canada-France Brown Dwarf Survey is a wide-field survey for cool brown dwarfs conducted with the MegaCam camera on the Canada-France-Hawaii Telescope. Aims: Our objectives are to find ultracool brown dwarfs and to constrain the field brown-dwarf luminosity function and the mass function from a large and homogeneous sample of L and T dwarfs. Methods: We identify candidates in CFHT/MegaCam i' and z' images and follow them up with pointed near infrared (NIR) imaging on several telescopes. Halfway through our survey we found ~50 T dwarfs and ~170 L or ultracool M dwarfs drawn from a larger sample of 1400 candidates with typical ultracool dwarf i'-z' colours, found in 780 square degrees. Results: We have currently completed the NIR follow-up on a large part of the survey for all candidates from mid-L dwarfs down to the latest T dwarfs known with ultracool dwarfs' colours. This allows us to draw on a complete and well-defined sample of 102 ultracool dwarfs to investigate the luminosity function and space density of field dwarfs. Conclusions: We found the density of late L5 to T0 dwarfs to be 2.0 (+0.8/−0.7) × 10⁻³ objects pc⁻³, the density of T0.5 to T5.5 dwarfs to be 1.4 (+0.3/−0.2) × 10⁻³ objects pc⁻³, and the density of T6 to T8 dwarfs to be 5.3 (+3.1/−2.2) × 10⁻³ objects pc⁻³. We found that these results agree better with a flat substellar mass function. The three latest dwarfs, at the boundary between T and Y dwarfs, give the higher density of 8.3 (+9.0/−5.1) × 10⁻³ objects pc⁻³. Although the uncertainties are very large, this suggests that many brown dwarfs should be found in this late spectral type range, as expected from the cooling of brown dwarfs, whatever their mass, down to very low temperature. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by
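
    Space densities like those quoted above come from counting objects in a magnitude-limited volume, typically with a 1/Vmax-style correction for how far each object could have been detected. The sketch below illustrates only that bookkeeping over a 780 deg² footprint; the maximum distances are invented placeholders, and the published densities include completeness and bias corrections not modelled here.

        import math

        def vmax_pc3(d_max_pc, area_sq_deg):
            # Volume out to d_max over a survey footprint of area_sq_deg square degrees.
            full_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2   # ~41253 deg^2
            return (area_sq_deg / full_sky_sq_deg) * (4 / 3) * math.pi * d_max_pc ** 3

        # Hypothetical dwarfs: maximum distance (pc) at which each would pass the survey cut.
        d_max_values = [28.0, 35.0, 41.0, 52.0]
        area = 780.0  # square degrees, the footprint quoted in the abstract

        density = sum(1.0 / vmax_pc3(d, area) for d in d_max_values)
        print(f"space density ~ {density:.2e} objects per pc^3")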

  19. BINARIES DISCOVERED BY THE MUCHFUSS PROJECT: SDSS J08205+0008-AN ECLIPSING SUBDWARF B BINARY WITH A BROWN DWARF COMPANION

    International Nuclear Information System (INIS)

    Geier, S.; Schaffenroth, V.; Drechsel, H.; Heber, U.; Kupfer, T.; Tillich, A.; Oestensen, R. H.; Smolders, K.; Degroote, P.; Maxted, P. F. L.; Barlow, B. N.; Gaensicke, B. T.; Marsh, T. R.; Napiwotzki, R.

    2011-01-01

    Hot subdwarf B stars (sdBs) are extreme horizontal branch stars believed to originate from close binary evolution. Indeed about half of the known sdB stars are found in close binaries with periods ranging from a few hours to a few days. The enormous mass loss required to remove the hydrogen envelope of the red-giant progenitor almost entirely can be explained by common envelope ejection. A rare subclass of these binaries are the eclipsing HW Vir binaries where the sdB is orbited by a dwarf M star. Here, we report the discovery of an HW Vir system in the course of the MUCHFUSS project. A most likely substellar object (≅0.068 M⊙) was found to orbit the hot subdwarf J08205+0008 with a period of 0.096 days. Since the eclipses are total, the system parameters are very well constrained. J08205+0008 has the lowest unambiguously measured companion mass yet found in a subdwarf B binary. This implies that the most likely substellar companion has not only survived the engulfment by the red-giant envelope, but also triggered its ejection and enabled the sdB star to form. The system provides evidence that brown dwarfs may indeed be able to significantly affect late stellar evolution.

  20. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as the starting point. The regulation of utility companies is then discussed, after which...

  1. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

    Date of Acceptance: 20/09/2014. How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups is: memory, processor, computa...

  2. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  3. K2 Ultracool Dwarfs Survey. III. White Light Flares Are Ubiquitous in M6-L0 Dwarfs

    Science.gov (United States)

    Paudel, Rishi R.; Gizis, John E.; Mullan, D. J.; Schmidt, Sarah J.; Burgasser, Adam J.; Williams, Peter K. G.; Berger, Edo

    2018-05-01

    We report the white light flare rates for 10 ultracool dwarfs using Kepler K2 short-cadence data. Among our sample stars, two have spectral type M6, three are M7, three are M8, and two are L0. Most of our targets are old low-mass stars. We identify a total of 283 flares in all of the stars in our sample, with Kepler energies in the range log E_Kp ∼ (29–33.5) erg. Using the maximum-likelihood method of line fitting, we find that the flare frequency distribution (FFD) for each star in our sample follows a power law with slope -α in the range -(1.3–2.0). We find that cooler objects tend to have shallower slopes. For some of our targets, the FFD follows either a broken power law, or a power law with an exponential cutoff. For the L0 dwarf 2MASS J12321827-0951502, we find a very shallow slope (-α = -1.3) in the Kepler energy range (0.82–130) × 10^30 erg: this L0 dwarf has flare rates which are comparable to those of high-energy flares in stars of earlier spectral types. In addition, we report photometry of two superflares: one on the L0 dwarf 2MASS J12321827-0951502 and another on the M7 dwarf 2MASS J08352366+1029318. In the case of 2MASS J12321827-0951502, we report a flare brightening by a factor of ∼144 relative to the quiescent photospheric level. Likewise, for 2MASS J08352366+1029318, we report a flare brightening by a factor of ∼60 relative to the quiescent photospheric level. These two superflares have bolometric (ultraviolet/optical/infrared) energies 3.6 × 10^33 erg and 8.9 × 10^33 erg respectively, while the full width half maximum timescales are very short, ∼2 min. We find that the M8 star TRAPPIST-1 is more active than the M8.5 dwarf 2M03264453+1919309, but less active than another M8 dwarf (2M12215066-0843197).
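
    As a sketch of the kind of estimator involved (not necessarily the exact procedure used in the paper), the standard maximum-likelihood estimator for the index of a differential power-law distribution dN/dE ∝ E^-α above a completeness limit E_min can be written as follows; the flare energies and E_min below are hypothetical.

      import numpy as np

      # Hypothetical flare energies (erg) above an assumed completeness limit E_min.
      energies = np.array([1.2e30, 3.4e30, 8.1e30, 2.2e31, 5.0e31, 1.3e32, 4.0e32])
      e_min = 1.0e30

      # MLE for the index of a differential power law dN/dE ~ E**(-alpha), E >= E_min.
      n = energies.size
      alpha = 1.0 + n / np.sum(np.log(energies / e_min))
      alpha_err = (alpha - 1.0) / np.sqrt(n)      # approximate 1-sigma uncertainty

      print(f"alpha = {alpha:.2f} +/- {alpha_err:.2f}")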

  4. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  5. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  6. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects...... takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  7. Benchmarking multimedia performance

    Science.gov (United States)

    Zandi, Ahmad; Sudharsanan, Subramania I.

    1998-03-01

    With the introduction of faster processors and special instruction sets tailored to multimedia, a number of exciting applications are now feasible on the desktop. Among these is DVD playback, consisting, among other things, of MPEG-2 video and Dolby Digital audio or MPEG-2 audio. Other multimedia applications such as video conferencing and speech recognition are also becoming popular on computer systems. In view of this tremendous interest in multimedia, a group of major computer companies has formed the Multimedia Benchmarks Committee as part of the Standard Performance Evaluation Corp. to address the performance issues of multimedia applications. The approach is multi-tiered, with three tiers of fidelity from minimal to fully compliant. In each case the fidelity of the bitstream reconstruction as well as the quality of the video or audio output are measured and the system is classified accordingly. In the next step the performance of the system is measured. Many multimedia applications, such as DVD playback, need to run at a specific rate; in this case the measurement of the excess processing power makes all the difference. All of this makes a system-level, application-based multimedia benchmark very challenging. Several ideas and methodologies for each aspect of the problem will be presented and analyzed.

  8. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations for the design of new fuel cycles for nuclear power installations require a calculational justification to be performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. The formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed, with a defined uncertainty, by certified precision codes of the MCU type or similar. The present level of international cooperation allows the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel to be enlarged. In particular, work on forming the list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented.

  9. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    Full Text Available A benchmark study for permeability measurement is presented. In the past, studies by other research groups which focused on the reproducibility of 1D permeability measurements showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimped fabric (NCF). These two material types were considered because of the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This holds both for the comparison of the repetition measurements and for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for a correct correlation of measured permeability values and fibre volume contents.
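
    As a minimal illustration of the kind of reproducibility comparison described above, the sketch below computes the relative standard deviation (coefficient of variation) of repeated permeability measurements at each site and a simple between-site offset; all numbers are hypothetical.

      import numpy as np

      # Hypothetical repeated unsaturated permeability measurements (m^2) at one fibre volume
      # content, five repetitions per site, mirroring the study design described above.
      site_a = np.array([2.1e-10, 2.0e-10, 2.2e-10, 2.1e-10, 2.0e-10])
      site_b = np.array([2.3e-10, 2.2e-10, 2.4e-10, 2.2e-10, 2.3e-10])

      for name, k in (("site A", site_a), ("site B", site_b)):
          cv = k.std(ddof=1) / k.mean() * 100.0   # relative standard deviation in %
          print(f"{name}: mean K = {k.mean():.2e} m^2, CV = {cv:.1f}%")

      # Between-site agreement: difference of the site means relative to the pooled mean.
      pooled = np.concatenate([site_a, site_b]).mean()
      print(f"between-site offset: {abs(site_a.mean() - site_b.mean()) / pooled * 100.0:.1f}%")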

  10. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
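
    The sketch below is not XMarq itself, but an illustrative micro-benchmark in the same spirit: it times a full scan, an aggregation and an index access over a small synthetic table using Python's built-in sqlite3 module. The table name, row count and queries are arbitrary choices.

      import sqlite3
      import time

      # Build a small synthetic table in memory.
      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE lineitem (id INTEGER PRIMARY KEY, qty REAL, price REAL)")
      con.executemany("INSERT INTO lineitem (qty, price) VALUES (?, ?)",
                      [(i % 50, (i % 100) * 1.5) for i in range(200_000)])
      con.commit()

      def timed(label, sql):
          # Time one query, fetching all rows so the work is actually done.
          t0 = time.perf_counter()
          con.execute(sql).fetchall()
          print(f"{label}: {time.perf_counter() - t0:.3f} s")

      timed("full scan   ", "SELECT * FROM lineitem")
      timed("aggregation ", "SELECT qty, SUM(price) FROM lineitem GROUP BY qty")
      timed("index access", "SELECT * FROM lineitem WHERE id = 123456")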

  11. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport) which ran from 2000 to 2003....

  12. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  13. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  14. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOx assemblies. SLB1 is a 900 MWe PWR, with 30% MOx fuel loading. The standard MOx assemblies, used in Saint-Laurent B1 reactor, include three zones with different plutonium enrichments, high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  15. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    Full Text Available The most common benchmarks for faculty productivity are derived from the Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  16. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  17. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
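
    A minimal sketch of the metric-computation step outlined above (not the guide's spreadsheet templates): compute a whole-building energy use intensity and compare it against assumed benchmark bands. All numbers and thresholds below are hypothetical.

      # Whole-building energy use intensity (EUI) and a simple comparison against
      # hypothetical benchmark bands, in the spirit of the metrics described above.
      annual_electricity_kwh = 1_250_000
      annual_gas_kwh         = 400_000          # fuel use converted to kWh-equivalent
      floor_area_m2          = 5_000

      eui = (annual_electricity_kwh + annual_gas_kwh) / floor_area_m2   # kWh/m^2/yr
      print(f"whole-building EUI: {eui:.0f} kWh/m2/yr")

      # Hypothetical benchmark bands for this building type.
      if eui < 250:
          print("better than the assumed good-practice benchmark")
      elif eui < 400:
          print("within the assumed typical range; review system-level metrics")
      else:
          print("above the assumed typical range; likely energy-saving opportunities")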

  18. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI, and the calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  19. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
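
    A minimal sketch of the first performance metric listed above, the centered root-mean-square error between a homogenized series and the true homogeneous series; the example anomaly values are hypothetical.

      import numpy as np

      def centered_rmse(homogenized, truth):
          # RMSE after removing each series' mean, so only the anomalies are compared.
          h = homogenized - np.mean(homogenized)
          t = truth - np.mean(truth)
          return np.sqrt(np.mean((h - t) ** 2))

      # Hypothetical monthly temperature anomalies (degC).
      truth       = np.array([0.1, -0.2, 0.3, 0.0, -0.1, 0.4, 0.2, -0.3])
      homogenized = np.array([0.2, -0.1, 0.3, 0.1, -0.2, 0.3, 0.2, -0.2])

      print(f"centered RMSE = {centered_rmse(homogenized, truth):.3f} degC")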

  20. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  1. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  2. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  3. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  4. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth
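
    For orientation, the sketch below illustrates the movement-preservation idea behind Denton-type benchmarking with a small numerical example: hypothetical quarterly indicator values are adjusted so that their annual sums equal hypothetical annual benchmarks while keeping the benchmark-to-indicator ratio as smooth as possible. It is a generic illustration, not the method proposed in the paper.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical quarterly indicator series and annual benchmark totals (two years, four quarters each).
      indicator  = np.array([98.0, 100.0, 102.0, 104.0, 103.0, 105.0, 107.0, 110.0])
      benchmarks = np.array([420.0, 450.0])

      def objective(x):
          # Proportional first-difference (Denton-type) criterion: keep the
          # benchmarked-to-indicator ratio as smooth as possible, preserving movement.
          r = x / indicator
          return np.sum(np.diff(r) ** 2)

      constraints = [{"type": "eq",
                      "fun": (lambda x, k=k: x[4 * k:4 * (k + 1)].sum() - benchmarks[k])}
                     for k in range(benchmarks.size)]

      x0 = indicator * benchmarks.sum() / indicator.sum()     # pro-rata starting point
      result = minimize(objective, x0, constraints=constraints)
      print(result.x.round(2))                                 # benchmarked quarterly series
      print(result.x.reshape(2, 4).sum(axis=1))                # check: annual sums match the benchmarks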

  5. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  6. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
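
    A minimal sketch of the benchmark dose idea (not EPA's BMDS itself): fit a simple dose-response model to hypothetical dichotomous data and solve for the dose at which extra risk over background reaches a 10% benchmark response. The model form, starting values and data are assumptions made here for illustration.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      # Hypothetical dichotomous dose-response data: dose and fraction of subjects responding.
      dose     = np.array([0.0, 5.0, 10.0, 25.0, 50.0])
      response = np.array([0.02, 0.05, 0.10, 0.30, 0.62])

      def log_logistic(d, background, slope, ed50):
          # Simple log-logistic dose-response: background + (1 - background) * f(d).
          d = np.asarray(d, dtype=float)
          f = 1.0 / (1.0 + (ed50 / np.maximum(d, 1e-12)) ** slope)
          return background + (1.0 - background) * np.where(d > 0, f, 0.0)

      params, _ = curve_fit(log_logistic, dose, response, p0=[0.02, 1.5, 30.0],
                            bounds=([0.0, 0.1, 1e-3], [1.0, 10.0, 1e3]))

      bmr = 0.10                                  # benchmark response: 10% extra risk
      background = params[0]
      target = background + bmr * (1.0 - background)
      bmd = brentq(lambda d: float(log_logistic(d, *params)) - target, 1e-6, dose.max())
      print(f"BMD at 10% extra risk: {bmd:.1f} (same units as dose)")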

  7. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  8. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, partially solves that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  9. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  10. An ancient F-type subdwarf from the halo crossing the Galactic plane

    Science.gov (United States)

    Scholz, R.-D.; Heber, U.; Heuser, C.; Ziegerer, E.; Geier, S.; Niederhofer, F.

    2015-02-01

    Aims: We selected the bluest object, WISE J0725-2351, from Luhman's new high proper motion (HPM) survey based on observations with the Wide-field Infrared Survey Explorer (WISE) for spectroscopic follow-up observations. Our aim was to unravel the nature of this relatively bright (V ~ 12, J ~ 11) HPM star (μ = 267 mas/yr). Methods: We obtained low- and medium-resolution spectra with the European Southern Observatory (ESO) New Technology Telescope (NTT)/EFOSC2 and Very Large Telescope (VLT)/X-Shooter instruments, investigated the radial velocity and performed a quantitative spectral analysis that allowed us to determine physical parameters. The fit of the spectral energy distribution based on the available photometry to low-metallicity model spectra and the similarity of our target to a metal-poor benchmark star (HD 84937) allowed us to estimate the distance and space velocity. Results: As in the case of HD 84937, we classified WISE J0725-2351 as sdF5: or a metal-poor turnoff star with [Fe/H] = -2.0 ± 0.2, Teff = 6250 ± 100 K, log g = 4.0 ± 0.2, and a possible age of about 12 Gyr. At an estimated distance of more than 400 pc, its proper motion translates to a tangential velocity of more than 500 km s^-1. Together with its constant (on timescales of hours, days, and months) and large radial velocity (about +240 km s^-1), the resulting Galactic restframe velocity is about 460 km s^-1, implying a bound retrograde orbit for this extreme halo object that currently crosses the Galactic plane at high speed. Based on observations at the La Silla-Paranal Observatory of the European Southern Observatory for programmes 092.D-0040(A) and 093.D-0127(A).
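
    As a worked check of the numbers quoted above, using the standard relation between proper motion, distance and tangential velocity (the 400 pc value is the lower limit on the distance given in the abstract):

      v_{\mathrm{tan}} = 4.74\,\mu\,[\mathrm{arcsec\,yr^{-1}}]\; d\,[\mathrm{pc}]\ \mathrm{km\,s^{-1}}
        \approx 4.74 \times 0.267 \times 400 \approx 506\ \mathrm{km\,s^{-1}},

    consistent with the quoted tangential velocity of more than 500 km s^-1 for a distance of more than 400 pc.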

  11. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. First Kepler results on compact pulsators - III. Subdwarf B stars with V1093 Her and hybrid (DW Lyn) type pulsations

    Science.gov (United States)

    Reed, M. D.; Kawaler, S. D.; Østensen, R. H.; Bloemen, S.; Baran, A.; Telting, J. H.; Silvotti, R.; Charpinet, S.; Quint, A. C.; Handler, G.; Gilliland, R. L.; Borucki, W. J.; Koch, D. G.; Kjeldsen, H.; Christensen-Dalsgaard, J.

    2010-12-01

    We present the discovery of non-radial pulsations in five hot subdwarf B (sdB) stars based on 27 d of nearly continuous time series photometry using the Kepler spacecraft. We find that every sdB star cooler than ≈27 500 K that Kepler has observed (seven so far) is a long-period pulsator of the V1093 Her (PG 1716) class or a hybrid star with both short and long periods. The apparently non-binary long-period and hybrid pulsators are described here. The V1093 Her periods range from 1 to 4.5 h and are associated with g-mode pulsations. Three stars also exhibit short periods indicative of p-modes with periods of 2-5 min and in addition, these stars exhibit periodicities between both classes from 15 to 45 min. We detect the coolest and longest-period V1093 Her-type pulsator to date, KIC010670103 (Teff≈ 20 900 K, Pmax≈ 4.5 h) as well as a suspected hybrid pulsator, KIC002697388, which is extremely cool (Teff≈ 23 900 K) and for the first time hybrid pulsators which have larger g-mode amplitudes than p-mode ones. All of these pulsators are quite rich with many frequencies and we are able to apply asymptotic relationships to associate periodicities with modes for KIC010670103. Kepler data are particularly well suited for these studies as they are long duration, extremely high duty cycle observations with well-behaved noise properties.

  13. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  14. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

    One of the main tasks an owner has is to keep its business competitive on the market while delivering its product. Owning a nuclear power plant carries the same (or an even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or profit was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, as a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  15. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
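
    A crude illustration of two of the metrics listed above (local memory bandwidth and disk write bandwidth); the idea is to run the same script on bare metal and inside the virtual machine and compare the numbers. Buffer sizes and the temporary file name are arbitrary choices.

      import os
      import time
      import numpy as np

      # Memory bandwidth: time a large in-memory array copy.
      a = np.ones(25_000_000)                      # ~200 MB of float64
      t0 = time.perf_counter()
      b = a.copy()
      mem_gbs = a.nbytes / (time.perf_counter() - t0) / 1e9
      del b

      # Disk write bandwidth: write 100 MB and force it to disk.
      data = os.urandom(100 * 1024 * 1024)
      t0 = time.perf_counter()
      with open("bench.tmp", "wb") as f:
          f.write(data)
          f.flush()
          os.fsync(f.fileno())
      disk_mbs = len(data) / (time.perf_counter() - t0) / 1e6
      os.remove("bench.tmp")

      print(f"memory copy: {mem_gbs:.1f} GB/s")
      print(f"disk write : {disk_mbs:.0f} MB/s")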

  16. AER benchmark specification sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the VVER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational Fluid Dynamics (CFD) codes and experiments can help to better understand these mixing processes and can provide information which supports a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D CFD modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the 23rd cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other has an inclined profile. (authors)

  17. AER Benchmark Specification Sheet

    International Nuclear Information System (INIS)

    Aszodi, A.; Toth, S.

    2009-01-01

    In the WWER-440/213 type reactors, the core outlet temperature field is monitored with in-core thermocouples, which are installed above 210 fuel assemblies. These measured temperatures are used in the determination of the fuel assembly powers and they have an important role in the reactor power limitation. For these reasons, correct interpretation of the thermocouple signals is an important question. In order to interpret the signals correctly, knowledge of the coolant mixing in the assembly heads is necessary. Computational fluid dynamics codes and experiments can help to better understand these mixing processes and can provide information which supports a more adequate interpretation of the thermocouple signals. This benchmark deals with the 3D computational fluid dynamics modeling of the coolant mixing in the heads of the profiled fuel assemblies with 12.2 mm rod pitch. Two assemblies of the twenty-third cycle of the Paks NPP's Unit 3 are investigated. One of them has a symmetrical pin power profile and the other has an inclined profile. (Authors)

  18. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen and driving on petrol or diesel were also included. Research and advancing insight show ever more often that transport fuels based on biomass sometimes cause just as many or even more greenhouse gases than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has compiled the current insights into the sustainability of fossil fuels, biofuels and electric driving. The effects of the fuels on three sustainability criteria were examined, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient use.

  19. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  20. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.

  1. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
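
    For illustration, comparing a building's energy use to a set of similar buildings can be reduced to a percentile ranking, as sketched below; the peer energy-use intensities are hypothetical and not drawn from the Cal-Arch database.

      import numpy as np

      # Hypothetical annual energy use intensities (kWh/m^2) of peer commercial buildings.
      peers = np.array([180, 210, 230, 250, 265, 280, 300, 320, 350, 410])
      my_building = 240

      percentile = (peers < my_building).mean() * 100.0
      print(f"this building uses less energy per unit area than {100 - percentile:.0f}% of its peers")
      print(f"(it sits at roughly the {percentile:.0f}th percentile of peer energy use)")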

  2. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy)
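
    For orientation, the isotropic-scattering special case of the one-group, one-dimensional slab problem described above can be written as follows (the benchmark itself accommodates general anisotropic scattering; this simplified form is shown only to fix ideas):

      \mu\,\frac{\partial \psi(x,\mu)}{\partial x} + \sigma_t\,\psi(x,\mu)
        = \frac{\sigma_s}{2}\int_{-1}^{1}\psi(x,\mu')\,d\mu', \qquad 0<x<L,\ -1\le\mu\le 1,
      \psi(0,\mu)=f(\mu)\ \ (\mu>0), \qquad \psi(L,\mu)=0\ \ (\mu<0),

    where f(μ) is the surface illumination and the exiting angular fluxes ψ(0,μ<0) and ψ(L,μ>0) are the quantities obtained from the coupled integral equations.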

  3. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from an analysis of performance and underlining the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  4. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  5. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  6. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  7. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology.Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  8. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  9. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  10. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  11. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of collaborative benchmarking for Czech higher-education programs in economics and management. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking – the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  12. Oscillation Mode Variability in Evolved Compact Pulsators from Kepler Photometry. I. The Hot B Subdwarf Star KIC 3527751

    Science.gov (United States)

    Zong, Weikai; Charpinet, Stéphane; Fu, Jian-Ning; Vauclair, Gérard; Niu, Jia-Shu; Su, Jie

    2018-02-01

    We present the first results of an ensemble and systematic survey of oscillation mode variability in pulsating hot B subdwarf (sdB) and white dwarf stars observed with the original Kepler mission. The satellite provides uninterrupted, high-quality photometric data on pulsating stars, with a time baseline that can reach up to 4 yr. This is a unique opportunity to characterize long-term behaviors of oscillation modes. A mode modulation in amplitude and frequency can be independently inferred from its fine structure in the Fourier spectrum, from a sliding Lomb-Scargle periodogram (sLSP), or with prewhitening methods applied to various parts of the light curve. We apply all these techniques to the sdB star KIC 3527751, a long-period-dominated hybrid pulsator. We find that all the detected modes with sufficiently large amplitudes to be thoroughly studied show amplitude and/or frequency variations. Components of three identified quintuplets around 92, 114, and 253 μHz show signatures that can be linked to nonlinear interactions according to the resonant mode coupling theory. This interpretation is further supported by the fact that many oscillation modes are found to have amplitudes and frequencies showing correlated or anticorrelated variations, a behavior that can be linked to the amplitude equation formalism, where nonlinear frequency corrections are determined by their amplitude variations. Our results suggest that oscillation modes varying with diverse patterns are a very common phenomenon in pulsating sdB stars. Close structures around main frequencies therefore need to be carefully interpreted in light of this finding to secure a robust identification of real eigenfrequencies, which is crucial for seismic modeling. The various modulation patterns uncovered should encourage further developments in the field of nonlinear stellar oscillation theory. It also raises a warning to any long-term project aiming at measuring the rate of period change of pulsations caused by stellar evolution, or at

  13. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  14. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  15. AEOLUS: A MARKOV CHAIN MONTE CARLO CODE FOR MAPPING ULTRACOOL ATMOSPHERES. AN APPLICATION ON JUPITER AND BROWN DWARF HST LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Karalidi, Theodora; Apai, Dániel; Schneider, Glenn; Hanson, Jake R. [Steward Observatory, Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Pasachoff, Jay M., E-mail: tkaralidi@email.arizona.edu [Hopkins Observatory, Williams College, 33 Lab Campus Drive, Williamstown, MA 01267 (United States)

    2015-11-20

    Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VizieR.
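
    As a rough illustration of the kind of retrieval described above, the sketch below fits a single bright spot to a synthetic rotational light curve with a simple Metropolis-Hastings sampler. It is a minimal toy model written for this summary, not the Aeolus code: the spot parameterisation, priors, rotation period and noise level are all assumptions made for the example.

      # Toy Metropolis-Hastings retrieval of one bright spot from a rotational
      # light curve. Illustrative only; not the Aeolus code. All parameter
      # names, priors and numbers below are assumptions for this sketch.
      import numpy as np

      rng = np.random.default_rng(42)
      PERIOD = 7.72  # assumed rotation period in hours

      def model_flux(t, coverage, contrast, phase):
          # Disc-integrated flux for one spot of fractional coverage `coverage`
          # and brightness `contrast` relative to the unspotted photosphere.
          visibility = np.clip(np.cos(2 * np.pi * t / PERIOD - phase), 0.0, None)
          return 1.0 + coverage * (contrast - 1.0) * visibility

      # Synthetic "observed" light curve standing in for real photometry.
      t_obs = np.linspace(0.0, 2 * PERIOD, 200)
      sigma = 0.002
      f_obs = model_flux(t_obs, 0.20, 1.10, 1.3) + rng.normal(0.0, sigma, t_obs.size)

      def log_posterior(p):
          coverage, contrast, phase = p
          if not (0 < coverage < 1 and 0.5 < contrast < 2 and 0 <= phase < 2 * np.pi):
              return -np.inf                      # flat priors with hard bounds
          resid = f_obs - model_flux(t_obs, coverage, contrast, phase)
          return -0.5 * np.sum((resid / sigma) ** 2)

      # Metropolis-Hastings sampling of (coverage, contrast, phase).
      chain, current = [], np.array([0.10, 1.05, 0.0])
      logp = log_posterior(current)
      for _ in range(20000):
          proposal = current + rng.normal(0.0, [0.01, 0.01, 0.05])
          logp_new = log_posterior(proposal)
          if np.log(rng.uniform()) < logp_new - logp:   # accept/reject step
              current, logp = proposal, logp_new
          chain.append(current.copy())

      burned = np.array(chain[5000:])                   # discard burn-in
      print("spot coverage = %.3f +/- %.3f" % (burned[:, 0].mean(), burned[:, 0].std()))

    With the settings above, the sampler should recover the 20% spot coverage used to build the synthetic data to within the quoted uncertainty.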

  16. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    This paper reviews the role of human resource management (HRM), which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  17. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  18. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  19. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of K-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  20. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
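
    For readers unfamiliar with the DEA method mentioned above, the sketch below computes input-oriented CCR efficiency scores by linear programming with SciPy. The handful of "utility" inputs and outputs are invented placeholders, not the US data used in the study; only the formulation is meant to be illustrative.

      # Minimal input-oriented CCR DEA, solved as a linear programme with SciPy.
      # The inputs/outputs below are made-up placeholders, not the paper's data.
      import numpy as np
      from scipy.optimize import linprog

      # rows = decision-making units (e.g. distribution utilities)
      inputs = np.array([[120.0, 35.0],    # e.g. opex, network length
                         [150.0, 40.0],
                         [100.0, 50.0],
                         [180.0, 60.0]])
      outputs = np.array([[900.0],         # e.g. energy delivered
                          [1000.0],
                          [950.0],
                          [1100.0]])

      n_dmu, n_in = inputs.shape
      n_out = outputs.shape[1]

      def ccr_efficiency(o):
          """Efficiency score theta for DMU index o (1.0 = on the frontier)."""
          # decision vector z = [theta, lambda_1, ..., lambda_n]
          c = np.zeros(1 + n_dmu)
          c[0] = 1.0                                   # minimise theta
          A_ub, b_ub = [], []
          for i in range(n_in):                        # sum_j lam_j * x_ij <= theta * x_io
              A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
              b_ub.append(0.0)
          for r in range(n_out):                       # sum_j lam_j * y_rj >= y_ro
              A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
              b_ub.append(-outputs[o, r])
          bounds = [(0, None)] * (1 + n_dmu)
          res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
          return res.x[0]

      for o in range(n_dmu):
          print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")

    Units with a score of 1.0 define the efficient frontier; strategic behaviour of the kind the paper discusses would show up as shifts in these scores when reported inputs or outputs are manipulated.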

  1. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected together into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the Final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  2. First Kepler results on compact pulsators - II. KIC 010139564, a new pulsating subdwarf B (V361 Hya) star with an additional low-frequency mode

    DEFF Research Database (Denmark)

    Kawaler, Stephen; Reed, M.D.; Quint, A.C.

    2010-01-01

    We present the discovery of non-radial pulsations in a hot subdwarf B star based on 30.5 d of nearly continuous time series photometry using the Kepler spacecraft. KIC 010139564 is found to be a short-period pulsator of the V361 Hya (EC 14026) class with more than 10 independent pulsation modes...... whose periods range from 130 to 190 s. It also shows one periodicity at a period of 3165 s. If this periodicity is a high-order g-mode, then this star may be the hottest member of the hybrid DW Lyn stars. In addition to the resolved pulsation frequencies, additional periodic variations in the light...... are independent stellar oscillation modes. We find that most of the identified periodicities are indeed stable in phase and amplitude, suggesting a rotation period of 2-3 weeks for this star, but further observations are needed to confirm this suspicion....

  3. A pulsation analysis of K2 observations of the subdwarf B star PG 1142-037 during Campaign 1: A subsynchronously rotating ellipsoidal variable

    DEFF Research Database (Denmark)

    Reed, M. D.; Baran, A. S.; Østensen, R. H.

    2016-01-01

    We report a new subdwarf B pulsator, PG 1142-037, discovered during the first full-length campaign of K2, the two-gyro mission of the Kepler space telescope. 14 periodicities have been detected between 0.9 and 2.5 hr with amplitudes below 0.35 parts-per-thousand. We have been able to associate all...... of the pulsations with low-degree, ℓ ≤ 2 modes. Follow-up spectroscopy of PG 1142 has revealed it to be in a binary with a period of 0.54 d. Phase-folding the K2 photometry reveals a two-component variation including both Doppler boosting and ellipsoidal deformation. Perhaps the most surprising and interesting...

  4. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.; Tyhurst, Janis

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  5. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  6. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  7. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  8. SIMULTANEOUS MULTIWAVELENGTH OBSERVATIONS OF MAGNETIC ACTIVITY IN ULTRACOOL DWARFS. IV. THE ACTIVE, YOUNG BINARY NLTT 33370 AB (= 2MASS J13142039+1320011)

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. K. G.; Berger, E.; Irwin, J.; Charbonneau, D. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Berta-Thompson, Z. K., E-mail: pwilliams@cfa.harvard.edu [MIT Kavli Institute, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2015-02-01

    We present multi-epoch simultaneous radio, optical, Hα, UV, and X-ray observations of the active, young, low-mass binary NLTT 33370 AB (blended spectral type M7e). This system is remarkable for its extreme levels of magnetic activity: it is the most radio-luminous ultracool dwarf (UCD) known, and here we show that it is also one of the most X-ray luminous UCDs known. We detect the system in all bands and find a complex phenomenology of both flaring and periodic variability. Analysis of the optical light curve reveals the simultaneous presence of two periodicities, 3.7859 ± 0.0001 and 3.7130 ± 0.0002 hr. While these differ by only ∼2%, studies of differential rotation in the UCD regime suggest that it cannot be responsible for the two signals. The system's radio emission consists of at least three components: rapid 100% polarized flares, bright emission modulating periodically in phase with the optical emission, and an additional periodic component that appears only in the 2013 observational campaign. We interpret the last of these as a gyrosynchrotron feature associated with large-scale magnetic fields and a cool, equatorial plasma torus. However, the persistent rapid flares at all rotational phases imply that small-scale magnetic loops are also present and reconnect nearly continuously. We present a spectral energy distribution of the blended system spanning more than 9 orders of magnitude in wavelength. The significant magnetism present in NLTT 33370 AB will affect its fundamental parameters, with the components' radii and temperatures potentially altered by ∼+20% and ∼–10%, respectively. Finally, we suggest spatially resolved observations that could clarify many aspects of this system's nature.

  9. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not the typical field where benchmarking methods are widely used, such approaches could be successfully applied. The paper focuses on key phases of the benchmarking process which lie in the search for suitable referencing partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  10. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  11. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  12. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  13. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  14. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  15. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  16. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  17. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
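
    The forward selection procedure benchmarked in such studies can be sketched as a greedy wrapper around a random forest, as below. The synthetic regression data stand in for QSAR descriptors, and the fold count, stopping rule and tree settings are assumptions made for this example rather than the authors' actual protocol.

      # Greedy forward variable selection wrapped around a random forest regressor.
      # Synthetic data stand in for QSAR descriptors; this is a generic sketch of
      # the technique, not the benchmarking pipeline used in the paper.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      # Synthetic "descriptor" matrix: 30 variables, only 8 of them informative.
      X, y = make_regression(n_samples=300, n_features=30, n_informative=8,
                             noise=0.5, random_state=0)

      def cv_score(columns):
          model = RandomForestRegressor(n_estimators=100, random_state=0)
          return cross_val_score(model, X[:, columns], y, cv=5, scoring="r2").mean()

      selected, remaining, best = [], list(range(X.shape[1])), -np.inf
      while remaining:
          # Score every remaining candidate variable added to the current subset.
          trial = {j: cv_score(selected + [j]) for j in remaining}
          j_best, s_best = max(trial.items(), key=lambda kv: kv[1])
          if s_best <= best + 1e-3:          # stop when the gain is negligible
              break
          selected.append(j_best)
          remaining.remove(j_best)
          best = s_best

      print(f"kept {len(selected)} of {X.shape[1]} variables, cross-validated R^2 = {best:.3f}")

    On data of this kind the wrapper typically keeps only the informative columns, illustrating how a large fraction of descriptors can be dropped with little or no loss in cross-validated performance.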

  18. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Closed-loop neuromorphic benchmarks. Terrence C. Stewart, Travis DeWolf and Chris Eliasmith (Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada) and Ashley Kleinhans (Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa). Submitted to: Frontiers in Neuroscience.

  19. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  20. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  1. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  2. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  4. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below followed by the contributors to the earlier editions of the benchmark book.

  5. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map (DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  6. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) a welcomed opportunity to provide feedback on working with NASA

  7. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then allowed to decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  8. Benchmark neutron porosity log calculations

    International Nuclear Information System (INIS)

    Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes

  9. The SpeX Prism Library for Ultracool Dwarfs: A Resource for Stellar, Exoplanet and Galactic Science and Student-Led Research

    Science.gov (United States)

    Burgasser, Adam

    The NASA Infrared Telescope Facility's (IRTF) SpeX spectrograph has been an essential tool in the discovery and characterization of ultracool dwarf (UCD) stars, brown dwarfs and exoplanets. Over ten years of SpeX data have been collected on these sources, and a repository of low-resolution (R ~ 100) SpeX prism spectra has been maintained by the PI at the SpeX Prism Spectral Libraries website since 2008. As the largest existing collection of NIR UCD spectra, this repository has facilitated a broad range of investigations in UCD, exoplanet, Galactic and extragalactic science, contributing to over 100 publications in the past 6 years. However, this repository remains highly incomplete, has not been uniformly calibrated, lacks sufficient contextual data for observations and sources, and most importantly provides no data visualization or analysis tools for the user. To fully realize the scientific potential of these data for community research, we propose a two-year program to (1) calibrate and expand existing repository and archival data, and make it virtual-observatory compliant; (2) serve the data through a searchable web archive with basic visualization tools; and (3) develop and distribute an open-source, Python-based analysis toolkit for users to analyze the data. These resources will be generated through an innovative, student-centered research model, with undergraduate and graduate students building and validating the analysis tools through carefully designed coding challenges and research validation activities. The resulting data archive, the SpeX Prism Library, will be a legacy resource for IRTF and SpeX, and will facilitate numerous investigations using current and future NASA capabilities. These include deep/wide surveys of UCDs to measure Galactic structure and chemical evolution, and probe UCD populations in satellite galaxies (e.g., JWST, WFIRST); characterization of directly imaged exoplanet spectra (e.g., FINESSE), and development of low
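
    One of the simplest analyses a toolkit of this kind supports is spectral-type estimation by chi-square comparison against spectral standards. The sketch below is a generic, hypothetical version of that comparison using made-up template and target arrays; it is not the API of the proposed toolkit, and the template shapes are placeholders.

      # Hypothetical chi-square comparison of a target near-IR spectrum against a
      # small set of spectral-standard templates; not the actual toolkit API.
      import numpy as np

      rng = np.random.default_rng(1)
      wave = np.linspace(0.9, 2.4, 300)            # microns, common wavelength grid

      # Placeholder "standard" spectra; real standards would be loaded from files.
      templates = {
          "M9": np.exp(-((wave - 1.25) / 0.45) ** 2),
          "L5": np.exp(-((wave - 1.30) / 0.35) ** 2),
          "T5": np.exp(-((wave - 1.27) / 0.20) ** 2),
      }
      target = templates["L5"] + rng.normal(0.0, 0.02, wave.size)   # noisy "observation"
      unc = np.full(wave.size, 0.02)

      def chi2(spec, tmpl, unc):
          # Optimal multiplicative scaling of the template before comparison.
          scale = np.sum(spec * tmpl / unc**2) / np.sum(tmpl**2 / unc**2)
          return np.sum(((spec - scale * tmpl) / unc) ** 2)

      scores = {name: chi2(target, tmpl, unc) for name, tmpl in templates.items()}
      best = min(scores, key=scores.get)
      print("best-matching standard:", best, "with chi^2 =", round(scores[best], 1))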

  10. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  11. SCWEB, Scientific Workstation Evaluation Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Raffenetti, R C [Computing Services-Support Services Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States)

    1988-06-16

    1 - Description of program or function: The SCWEB (Scientific Workstation Evaluation Benchmark) software includes 16 programs which are executed in a well-defined scenario to measure the following performance capabilities of a scientific workstation: implementation of FORTRAN77, processor speed, memory management, disk I/O, monitor (or display) output, scheduling of processing (multiprocessing), and scheduling of print tasks (spooling). 2 - Method of solution: The benchmark programs are: DK1, DK2, and DK3, which do Fourier series fitting based on spline techniques; JC1, which checks the FORTRAN function routines which produce numerical results; JD1 and JD2, which solve dense systems of linear equations in double- and single-precision, respectively; JD3 and JD4, which perform matrix multiplication in single- and double-precision, respectively; RB1, RB2, and RB3, which perform substantial amounts of I/O processing on files other than the input and output files; RR1, which does intense single-precision floating-point multiplication in a tight loop; RR2, which initializes a 512x512 integer matrix in a manner which skips around in the address space rather than initializing each consecutive memory cell in turn; RR3, which writes alternating text buffers to the output file; RR4, which evaluates the timer routines and demonstrates that they conform to the specification; and RR5, which determines whether the workstation is capable of executing a 4-megabyte program

  12. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
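
    A much-simplified way to see the loading pattern Pynamic emulates is to generate a large number of small modules and time their dynamic import, as in the sketch below. It uses plain Python modules rather than the genuine compiled shared libraries Pynamic builds, so it only mimics the pattern; the module count and names are arbitrary.

      # Generate many small Python modules and time how long it takes to import
      # them all. This only mimics the dynamic-loading pattern Pynamic emulates
      # (Pynamic itself builds real shared libraries); names and counts are arbitrary.
      import importlib
      import sys
      import tempfile
      import time
      from pathlib import Path

      N_MODULES = 500
      workdir = Path(tempfile.mkdtemp(prefix="fake_dll_"))

      # Write N trivial modules, each exporting one function.
      for i in range(N_MODULES):
          (workdir / f"fake_mod_{i}.py").write_text(
              f"def entry():\n    return {i}\n"
          )

      sys.path.insert(0, str(workdir))

      start = time.perf_counter()
      modules = [importlib.import_module(f"fake_mod_{i}") for i in range(N_MODULES)]
      elapsed = time.perf_counter() - start

      total = sum(m.entry() for m in modules)   # touch every module once
      print(f"imported {len(modules)} modules in {elapsed:.2f} s (checksum {total})")

    Scaling the module count up, or repeating the run on a parallel file system, gives a rough feel for the loader and file-system stress that the real benchmark exercises with compiled libraries.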

  13. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: html"target="_blank">http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  14. THE FAST-ROTATING, LOW-GRAVITY SUBDWARF B STAR EC 22081-1916: REMNANT OF A COMMON ENVELOPE MERGER EVENT

    International Nuclear Information System (INIS)

    Geier, S.; Classen, L.; Heber, U.

    2011-01-01

    Hot subdwarf B stars (sdBs) are evolved core helium-burning stars with very thin hydrogen envelopes. In order to form an sdB, the progenitor has to lose almost all of its hydrogen envelope right at the tip of the red-giant branch. In binary systems, mass transfer to the companion provides the extraordinary mass loss required for their formation. However, apparently single sdBs exist as well and their formation has been unclear for decades. The merger of helium white dwarfs (He-WDs) leading to an ignition of core helium burning or the merger of a helium core and a low-mass star during the common envelope phase have been proposed as processes leading to sdB formation. Here we report the discovery of EC 22081-1916 as a fast-rotating, single sdB star of low gravity. Its atmospheric parameters indicate that the hydrogen envelope must be unusually thick, which is at variance with the He-WD merger scenario, but consistent with a common envelope merger of a low-mass, possibly substellar object with a red-giant core.

  15. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  16. Benchmarking in external financial reporting and auditing

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    on an ongoing basis in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can justifiably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the area of auditing. The final section of the chapter sums up...... the considerations on benchmarking in relation to both areas....

  17. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  18. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  19. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  20. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for an HEP application system. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we found that the results from these two suites are not consistent, and that the result from the industry benchmark does not agree with either of them. In addition, we describe a comparison of benchmark results obtained with the EGS4 Monte Carlo simulation program against those from the two HEP benchmark suites, and found that the EGS4 result is not consistent with the other two. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. An EGS4 benchmark suite should also be developed for users of applications in fields such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  1. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  2. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  3. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exists. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return of investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and

  4. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on "Improved Evaluations and Integral Data Testing for FENDL" held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on "Experimental and Calculational Benchmarks on Fusion Neutronics for ITER" recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  5. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  6. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  7. ENVIRONMENTAL BENCHMARKING FOR LOCAL AUTHORITIES

    Directory of Open Access Journals (Sweden)

    Marinela GHEREŞ

    2010-01-01

    Full Text Available This paper is an attempt to clarify and present the many definitions of benchmarking. It also attempts to explain the basic steps of benchmarking, to show how this tool can be applied by local authorities as well as to discuss its potential benefits and limitations. It is our strong belief that if cities use indicators and progressively introduce targets to improve management and related urban life quality, and to measure progress towards more sustainable development, we will also create a new type of competition among cities and foster innovation. This is seen to be important because local authorities' actions play a vital role in responding to the challenges of enhancing the state of the environment not only in policy-making, but also in the provision of services and in the planning process. Local communities therefore need to be aware of their own sustainability performance levels and should be able to engage in exchange of best practices to respond effectively to the eco-economical challenges of the century.

  8. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F_N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to solve cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F_N method was developed. An improved scheme to evaluate certain integrals relevant to the method is presented, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method is given. The methods used to solve the linear algebraic equations encountered are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  9. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  10. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  11. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  12. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  13. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  14. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price

  15. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  16. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
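
    The normalization step described in this record can be sketched in a few lines: fit a regression of EUI on the explanatory factors, remove the explained deviance, and rank the normalized EUI against the empirical distribution. The Python sketch below is illustrative only; the data, variable names and the single explanatory factor (weekly operating hours) are assumptions, not values from the paper.

        import numpy as np

        # Observed energy-use intensities (EUI) and one explanatory factor (invented data).
        eui = np.array([310.0, 420.0, 395.0, 280.0, 510.0, 450.0])   # e.g. kWh/m2/year
        hours = np.array([70.0, 105.0, 98.0, 60.0, 120.0, 110.0])    # weekly operating hours

        # Ordinary least-squares fit: EUI = b0 + b1 * hours
        X = np.column_stack([np.ones_like(hours), hours])
        coef, *_ = np.linalg.lstsq(X, eui, rcond=None)

        # Normalize by removing the effect of deviance from the mean operating hours.
        normalized_eui = eui - coef[1] * (hours - hours.mean())

        # The empirical cumulative distribution gives the percentile (benchmark) of an observed EUI.
        def percentile_of(value, sample):
            return 100.0 * np.mean(np.sort(sample) <= value)

        print("benchmark percentile of first building:",
              percentile_of(normalized_eui[0], normalized_eui))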

  17. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  18. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  19. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
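
    To give a flavour of the kind of workload SP2Bench exercises, the sketch below runs a simple SPARQL query over a small RDF graph with rdflib. The data, the example.org namespace and the query are invented stand-ins for the DBLP-like documents produced by the benchmark's data generator; they are not actual SP2Bench queries or schema.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        DC = Namespace("http://purl.org/dc/elements/1.1/")
        EX = Namespace("http://example.org/")   # placeholder namespace, not the SP2Bench schema

        g = Graph()
        article = URIRef("http://example.org/article/1")
        g.add((article, RDF.type, EX.Article))
        g.add((article, DC.title, Literal("A sample DBLP-like article")))
        g.add((article, DC.creator, Literal("Jane Doe")))

        # A toy query in the spirit of the benchmark: list articles with their authors.
        query = """
            SELECT ?title ?author WHERE {
                ?a a <http://example.org/Article> ;
                   <http://purl.org/dc/elements/1.1/title>   ?title ;
                   <http://purl.org/dc/elements/1.1/creator> ?author .
            }
        """
        for title, author in g.query(query):
            print(title, "-", author)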

  20. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  1. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  2. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics systems yield steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rates. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at a time scale less than one second. This is probably due to moving low thermally conductive materials further away from the heat source and enhancing the heat spreading effect from the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
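
    The junction-to-liquid thermal resistance referred to in this record is, in essence, a temperature rise divided by a heat load, optionally normalized by area to give the mm²·K/W figure quoted above. A minimal sketch of that arithmetic follows; all numbers are made up for illustration and are not measurements from the benchmarking study.

        # Junction-to-liquid thermal resistance: R_th = (T_junction - T_coolant) / P_loss
        def thermal_resistance(t_junction_c, t_coolant_c, power_loss_w):
            return (t_junction_c - t_coolant_c) / power_loss_w

        # Illustrative values only (not data from the report).
        r_th = thermal_resistance(t_junction_c=95.0, t_coolant_c=65.0, power_loss_w=600.0)
        print(f"junction-to-liquid thermal resistance: {r_th * 1000:.1f} mK/W")

        # Area-specific resistance (the mm^2*K/W form quoted in the record) for an assumed active area.
        active_area_mm2 = 900.0
        print(f"specific thermal resistance: {r_th * active_area_mm2:.1f} mm^2*K/W")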

  3. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
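
    The quantity r discussed in this record is extracted by fitting the observed survival probabilities to a single exponential in the sequence length. A minimal illustration of that fit, using synthetic data rather than anything from the paper, is sketched below; for a d-dimensional system the conventional conversion from the fitted decay f is r = (d - 1)(1 - f)/d.

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, a, f, b):
            """Standard RB model: survival probability A*f^m + B versus sequence length m."""
            return a * f**m + b

        # Synthetic survival probabilities for a single qubit (d = 2), illustrative only.
        lengths = np.array([2, 4, 8, 16, 32, 64, 128])
        true_f = 0.995
        probs = 0.5 * true_f**lengths + 0.5 \
                + np.random.default_rng(0).normal(0, 0.003, lengths.size)

        (a, f, b), _ = curve_fit(rb_decay, lengths, probs, p0=[0.5, 0.99, 0.5])

        d = 2
        r = (d - 1) * (1 - f) / d   # RB error rate
        print(f"fitted decay f = {f:.5f}, RB number r = {r:.2e}")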

  4. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
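
    The headline statistic in this study is a median, over ligands, of the minimum RMSD between the protein-bound conformation and any member of the generated ensemble. The sketch below computes that statistic from plain coordinate arrays with NumPy; it assumes the conformers are already superimposed on the reference (the study itself uses proper alignment), and all coordinates are invented.

        import numpy as np

        def rmsd(a, b):
            """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays."""
            return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

        def min_rmsd_to_ensemble(reference, ensemble):
            """Smallest RMSD between the bound reference pose and any generated conformer."""
            return min(rmsd(reference, conformer) for conformer in ensemble)

        rng = np.random.default_rng(1)
        n_ligands, n_conformers, n_atoms = 5, 250, 30

        per_ligand_minima = []
        for _ in range(n_ligands):
            reference = rng.normal(size=(n_atoms, 3))                 # bound pose (invented)
            ensemble = reference + rng.normal(scale=0.4, size=(n_conformers, n_atoms, 3))
            per_ligand_minima.append(min_rmsd_to_ensemble(reference, ensemble))

        print(f"median minimum RMSD over ligands: {np.median(per_ligand_minima):.2f}")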

  5. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was shown that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There is a discrepancy of 0.9% in the k-eff values between the Pu and U cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which are probably caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  6. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  7. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  8. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  9. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  10. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
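
    A simple way to see why identity-composed circuits make sensitive benchmarks is to simulate one with a small coherent error. The toy statevector sketch below applies n pairs of X gates (ideally the identity) to a single qubit, with each gate over-rotated by a small angle, and prints the survival probability of |0>; it is an invented illustration of the idea, not the protocol used in the paper or the IBM Quantum Experience interface.

        import numpy as np

        def rx(theta):
            """Single-qubit rotation about X by angle theta."""
            c, s = np.cos(theta / 2), np.sin(theta / 2)
            return np.array([[c, -1j * s], [-1j * s, c]])

        over_rotation = 0.02                  # small coherent error per gate (radians), illustrative
        noisy_x = rx(np.pi + over_rotation)   # an imperfect X gate

        state0 = np.array([1.0 + 0j, 0.0 + 0j])
        for n_pairs in (1, 10, 50, 100):
            state = state0.copy()
            for _ in range(2 * n_pairs):      # X applied an even number of times is ideally the identity
                state = noisy_x @ state
            survival = abs(state[0]) ** 2
            print(f"{n_pairs:4d} XX pairs -> survival probability {survival:.4f}")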

  11. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  12. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  13. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    Full Text Available The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  14. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  15. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  16. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  17. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6 Li, 7 Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D 2 O, H 2 O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  18. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  19. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...

  20. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  1. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO 2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  2. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  3. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  4. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  5. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
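
    Point (3) of the framework, metrics that score data-model mismatch, can be illustrated with a tiny example: compute a normalized error for each benchmark variable and combine the per-variable scores with weights. Everything below (the variables, weights and scoring rule) is an invented illustration of the idea, not the framework's actual metric.

        import numpy as np

        def skill_score(model, obs):
            """Map a relative RMSE into a 0-1 score (1 = perfect agreement)."""
            rel_rmse = np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)
            return float(np.exp(-rel_rmse))

        rng = np.random.default_rng(2)
        benchmarks = {                            # observations for two illustrative variables
            "gpp": rng.normal(6.0, 1.5, 120),     # gross primary productivity
            "lh":  rng.normal(80.0, 20.0, 120),   # latent heat flux
        }
        model_output = {k: v + rng.normal(0, 0.5 * v.std(), v.size) for k, v in benchmarks.items()}
        weights = {"gpp": 0.6, "lh": 0.4}

        overall = sum(weights[k] * skill_score(model_output[k], benchmarks[k]) for k in benchmarks)
        for k in benchmarks:
            print(f"{k}: score = {skill_score(model_output[k], benchmarks[k]):.2f}")
        print(f"weighted overall score = {overall:.2f}")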

  6. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
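
    The benchmark quantity here, a depletion reactivity decrement, is the change in reactivity between fresh and depleted fuel at a given burnup and cooling time. A minimal sketch of that arithmetic follows; the k-effective values are invented for illustration and are not taken from the EPRI benchmarks.

        def reactivity_pcm(k_eff):
            """Reactivity in pcm from a multiplication factor."""
            return (k_eff - 1.0) / k_eff * 1.0e5

        # Illustrative k-eff values for fresh and depleted fuel in the same storage configuration.
        k_fresh, k_depleted = 1.1200, 0.9850
        decrement_pcm = reactivity_pcm(k_fresh) - reactivity_pcm(k_depleted)
        print(f"depletion reactivity decrement: {decrement_pcm:.0f} pcm")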

  7. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  8. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
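
    The infrastructure computes performance metrics over annotations; the essence of such a metric is a comparison of predicted versus gold-standard mutation mentions. The sketch below computes precision, recall and F1 in plain Python; the RDF/OWL/SPARQL machinery of the actual infrastructure is not reproduced here, and the example annotations are invented.

        # Gold-standard and predicted mutation annotations, keyed by (document id, mutation string).
        gold = {("doc1", "p.V600E"), ("doc1", "c.1799T>A"), ("doc2", "p.G12D")}
        predicted = {("doc1", "p.V600E"), ("doc2", "p.G12D"), ("doc2", "p.G13D")}

        true_positives = len(gold & predicted)
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

        print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")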

  9. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  10. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking features in many ways in the management practice of both private and public organisations. In management accounting, benchmark-based indicators (or key figures) are used, for example when setting targets in performance contracts or when specifying the desired level of certain key figures in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different uses of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then discusses the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up.

  11. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
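
    As a minimal illustration of the C/M comparison described above, the snippet below averages a set of calculated-to-measured ratios; the values are made-up placeholders, not the report's dosimeter data.

```python
# Illustrative only: placeholder C/M ratios, one per dosimeter.
from statistics import mean, stdev

cm_ratios = [0.91, 0.95, 0.92, 0.96, 0.90, 0.94]  # calculated / measured

print(f"average C/M = {mean(cm_ratios):.2f} +/- {stdev(cm_ratios):.2f}")
```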

  12. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  13. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  14. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
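
    The sketch below illustrates one of the risk-adjustment methods named above, indirect standardization: expected infections are summed from a fitted logistic model and compared with the observed count. The model coefficients and patient data are entirely hypothetical.

```python
# Minimal sketch of indirect standardization (standardized infection ratio).
import math

def predicted_risk(device_days, icu_patient):
    # Hypothetical fitted logistic model for the probability of an HAI per patient.
    log_odds = -2.0 + 0.02 * device_days + 0.8 * icu_patient
    return 1.0 / (1.0 + math.exp(-log_odds))

patients = [(10, 1), (3, 0), (21, 1), (5, 0)]   # (device_days, in_ICU), hypothetical
expected = sum(predicted_risk(d, i) for d, i in patients)
observed = 2                                     # HAIs actually observed locally

sir = observed / expected                        # standardized infection ratio
print(f"SIR = {sir:.2f}  (>1 means worse than the benchmark population)")
```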

  15. Benchmarking - a validation of UTDefect

    International Nuclear Information System (INIS)

    Niklasson, Jonas; Bostroem, Anders; Wirdelius, Haakan

    2006-06-01

    New and stronger demands on the reliability of used NDE/NDT procedures and methods have stimulated the development of simulation tools for NDT. Modelling of ultrasonic non-destructive testing is useful for a number of reasons, e.g. physical understanding, parametric studies and the qualification of procedures and personnel. The traditional way of qualifying a procedure is to generate a technical justification by employing experimental verification of the chosen technique. The manufacturing of test pieces is often very expensive and time consuming. It also tends to introduce a number of possible misalignments between the actual NDT situation and the proposed experimental simulation. The UTDefect computer code (SUNDT/simSUNDT) has been developed, together with the Dept. of Mechanics at Chalmers Univ. of Technology, during a decade and simulates the entire ultrasonic testing situation. A thoroughly validated model has the ability to be an alternative and a complement to the experimental work in order to reduce the extensive cost. The validation can be accomplished by comparisons with other models, but ultimately by comparisons with experiments. This project addresses the last alternative but provides an opportunity to, in a later stage, compare with other software when all data are made public and available. The comparison has been made with experimental data from an international benchmark study initiated by the World Federation of NDE Centers. The experiments have been conducted with planar and spherically focused immersion transducers. The defects considered are side-drilled holes, flat-bottomed holes, and a spherical cavity. The data from the experiments are a reference signal used for calibration (the signal from the front surface of the test block at normal incidence) and the raw output from the scattering experiment. In all, more than forty cases have been compared. The agreement between UTDefect and the experiments was in general good (deviation less than 2 dB) when the

  16. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  17. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  18. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  19. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  20. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  1. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  2. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking, and of the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, and hence to uncertainty in understanding its effects. The paper presents two perceptions of benchmarking: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind of benchmarking.

  3. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  4. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  5. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report

  6. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  7. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole core configurations. In this paper we have created a numerical benchmark problem in 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  8. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 "core" configurations including different assembly types (17 x 17 pins, "uranium", "absorber" or "MOX" assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparison of all the results received.

  9. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR Mox pin cell benchmark, representing typical fuel for plutonium recycling, one corresponding to a first cycle, the second for a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 are the results from cycle 1 of the multiple recycling benchmarks

  10. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  11. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  12. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping
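
    One plausible reading of "a model based on linear programming" is a DEA-style efficiency score, sketched below with scipy on made-up data; the actual model and the secure multiparty computation layer of the paper are not reproduced here.

```python
# Minimal sketch: input-oriented, constant-returns DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one input (debt) and one output (earnings) per customer.
x = np.array([[120.0, 80.0, 200.0, 150.0]])   # inputs,  shape (m, n)
y = np.array([[30.0, 25.0, 45.0, 20.0]])      # outputs, shape (s, n)
m, n = x.shape
s = y.shape[0]

def efficiency(o):
    # Variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-x[:, [o]], x])          # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -y])  # sum_j lam_j * y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(n):
    print(f"customer {o}: efficiency score {efficiency(o):.2f}")
```

    A score of 1.0 marks a customer on the efficient frontier; lower scores indicate how much the inputs could be scaled down while still matching the benchmark set.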

  13. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  14. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Wernen

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  15. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
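
    A minimal sketch of how such an energy benchmark is applied in practice: specific consumption per population equivalent is compared against a target value for the plant's size class. The numbers below are hypothetical placeholders, not the German benchmark figures used in the paper.

```python
# Illustrative only: hypothetical plant data and benchmark value.
def specific_energy(kwh_per_year, population_equivalents):
    """Specific energy consumption in kWh per population equivalent and year."""
    return kwh_per_year / population_equivalents

plant_kwh = 2_400_000          # hypothetical annual consumption
plant_pe = 60_000              # hypothetical load in population equivalents
benchmark_kwh_per_pe = 30.0    # hypothetical target for this size class

sec = specific_energy(plant_kwh, plant_pe)
print(f"plant: {sec:.1f} kWh/(PE*a), benchmark: {benchmark_kwh_per_pe} kWh/(PE*a)")
print("optimisation potential identified" if sec > benchmark_kwh_per_pe else "within benchmark")
```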

  16. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  17. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
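
    A minimal sketch of the band-rating idea follows; the thresholds and demand figures are hypothetical placeholders, not the bands proposed in the paper or in the CSH.

```python
# Hypothetical band thresholds in litres per person per day.
def water_band(litres_per_person_per_day):
    bands = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
    for threshold, band in bands:
        if litres_per_person_per_day <= threshold:
            return band
    return "F"

# A higher band can be reached either by technology (e.g. an RWH supply offset)
# or by behaviour change; both simply reduce the benchmarked demand.
mains_demand = 150.0        # litres per person per day, hypothetical
rwh_offset = 25.0           # demand met by rainwater harvesting, hypothetical
print(water_band(mains_demand), "->", water_band(mains_demand - rwh_offset))
```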

  18. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  19. Benchmarking af kommunernes førtidspensionspraksis

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Every year the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Together with the annual statistics, results are published from a benchmarking model in which the number of awards in each municipality is compared with the expected number of awards had the municipality applied the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used so far is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a

  20. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets.

  1. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  2. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  3. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  4. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
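
    The improvement arithmetic quoted above can be reproduced directly from the figures given in the abstract:

```python
# Values taken from the abstract; the computation itself is a simple check.
national_benchmark = 13.5      # minutes between cases, 1992 best-practices study
baseline = 19.9                # St. Joseph's initial turnaround time
after_improvement = 16.3       # average after the team's solutions

improvement = (baseline - after_improvement) / baseline
print(f"improvement: {improvement:.0%}")                           # ~18%
print(f"gap to benchmark: {after_improvement - national_benchmark:.1f} minutes")
```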

  5. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  6. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance

  7. Developing Benchmarking Criteria for CO2 Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Neelis, M.; Worrell, E.; Mueller, N.; Angelini, T. [Ecofys, Utrecht (Netherlands); Cremer, C.; Schleich, J.; Eichhammer, W. [The Fraunhofer Institute for Systems and Innovation research, Karlsruhe (Germany)

    2009-02-15

    A European Union (EU) wide greenhouse gas (GHG) allowance trading scheme (EU ETS) was implemented in the EU in 2005. In the first two trading periods of the scheme (running up to 2012), free allocation based on historical emissions was the main methodology for allocation of allowances to existing installations. For the third trading period (2013 - 2020), the European Commission proposed in January 2008 a more important role of auctioning of allowances rather than free allocation. (Transitional) free allocation of allowances to industrial sectors will be determined via harmonized allocation rules, where feasible based on benchmarking. In general terms, a benchmark based method allocates allowances based on a certain amount of emissions per unit of productive output (i.e. the benchmark). This study aims to derive criteria for an allocation methodology for the EU Emission Trading Scheme based on benchmarking for the period 2013 - 2020. To test the feasibility of the criteria, we apply them to four example product groups: iron and steel, pulp and paper, lime and glass. The basis for this study is the Commission proposal for a revised ETS directive put forward on 23 January 2008 and does not take into account any changes to this proposal in the co-decision procedure that resulted in the adoption of the Energy and Climate change package in December 2008.
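
    A minimal sketch of benchmark-based free allocation as described above: allowances equal the product benchmark (tCO2 per tonne of product) times an activity level. The benchmark values below are hypothetical placeholders, not results of the study.

```python
# Hypothetical product benchmarks in tCO2 per tonne of product.
benchmarks = {"steel": 1.3, "pulp_and_paper": 0.3, "lime": 0.95, "glass": 0.45}

def free_allocation(product, historical_output_tonnes):
    """Free allowances = benchmark * activity level (historical output)."""
    return benchmarks[product] * historical_output_tonnes

print(f"{free_allocation('steel', 2_000_000):,.0f} allowances (tCO2) for 2 Mt of steel")
```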

  8. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    Benchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  9. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6 km at the Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using the least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...

  10. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference: Public Health Care: Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  11. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  12. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  13. 2010 Recruiting Benchmarks Survey. Research Brief

    Science.gov (United States)

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  14. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors.

  15. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene

  16. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections

  17. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  18. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.
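
    For context, tracking benchmarks of this kind typically score trackers by bounding-box overlap; the sketch below computes a success rate from intersection-over-union and is a generic illustration, not code from the benchmark's released toolkit.

```python
# Minimal sketch of an overlap-based tracking metric.
def iou(a, b):
    # Boxes given as (x, y, w, h).
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames whose predicted box overlaps ground truth above a threshold."""
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(o > threshold for o in overlaps) / len(overlaps)

print(success_rate([(10, 10, 50, 50), (12, 11, 50, 50)],
                   [(11, 10, 50, 50), (40, 40, 50, 50)]))
```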

  19. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Vol. 2006, No. 64 (2006), pp. 67-68. ISSN 0926-4981. R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302. Institutional research plan: CEZ:AV0Z10750506. Keywords: image segmentation * texture * benchmark * web. Subject RIV: BD - Theory of Information

  20. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  1. Choice Complexity, Benchmarks and Costly Information

    NARCIS (Netherlands)

    Harms, Job; Rosenkranz, S.; Sanders, M.W.J.L.

    In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices. In our experiment subjects made a series of incentivized choices between four hypothetical financial

  2. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 from the Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations.

  3. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04 demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  4. Benchmarking Academic Libraries: An Australian Case Study.

    Science.gov (United States)

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  5. Calculus of a reactor VVER-1000 benchmark

    International Nuclear Information System (INIS)

    Dourougie, C.

    1998-01-01

    In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low-enriched uranium (LEU) and mixed-oxide (MOX) fuels. The calculations are done for a wide range of temperatures and soluble boron concentrations, under accident conditions. (A.L.B.)

  6. Benchmarking transaction and analytical processing systems the creation of a mixed workload benchmark and its application

    CERN Document Server

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  7. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  8. Suggested benchmarks for shape optimization for minimum stress concentration

    DEFF Research Database (Denmark)

    Pedersen, Pauli

    2008-01-01

    Shape optimization for minimum stress concentration is vital, important, and difficult. New formulations and numerical procedures imply the need for good benchmarks. The available analytical shape solutions rely on assumptions that are seldom satisfied, so here, we suggest alternative benchmarks...

  9. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possible applications of benchmarking in the telecommunications sphere. It examines the essence of benchmarking by generalising the approaches of different scholars to defining the notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine an operator's success in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying it out at a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  10. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmarking methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  11. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008; this version: October 7, 2008. The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...
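
    As a sketch of what such a linear relation can look like (our notation and zero-beta formulation, not necessarily the paper's exact statement), take an arbitrary MV frontier portfolio p (other than the global minimum-variance portfolio) with zero-beta expected return E[R_z(p)], a benchmark b with nonzero beta against p, and any asset i:

    ```latex
    % Illustrative zero-beta form of a benchmark CAPM relation (assumed notation):
    E[R_i] - E[R_{z(p)}] = \frac{\beta_{ip}}{\beta_{bp}}\,\bigl(E[R_b] - E[R_{z(p)}]\bigr),
    \qquad
    \beta_{xp} = \frac{\operatorname{Cov}(R_x, R_p)}{\operatorname{Var}(R_p)} .
    ```

    Dividing the standard frontier pricing equations for i and for b eliminates E[R_p], so the expected returns of the asset, the benchmark and the frontier portfolio (through its zero-beta counterpart and the two betas) sit in one linear relation even when b is off the frontier.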

  12. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  13. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable actually keeps researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  14. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  15. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  16. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  17. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  18. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  19. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
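
    STREAM measures sustainable memory bandwidth with four simple vector kernels (copy, scale, add, triad). The sketch below is an illustrative NumPy version of the triad measurement only; the reference benchmark is a compiled C/Fortran code, so the array size, timing loop and resulting figures here are assumptions and would not be comparable to official STREAM numbers.

    ```python
    import time
    import numpy as np

    N = 10_000_000                       # elements per array (~80 MB per float64 array)
    a = np.zeros(N)
    b = np.random.rand(N)
    c = np.random.rand(N)
    scalar = 3.0

    best = float("inf")
    for _ in range(5):                   # take the best of several repetitions
        t0 = time.perf_counter()
        a = b + scalar * c               # TRIAD kernel: a[i] = b[i] + q * c[i]
        best = min(best, time.perf_counter() - t0)

    bytes_moved = 3 * N * 8              # two reads plus one write of float64 per element
    print(f"Triad bandwidth: {bytes_moved / best / 1e9:.2f} GB/s")
    ```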

  1. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, and to think and re-think our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity - a VUCA world in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolio, with performance as the final goal, under the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether using benchmarks is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that the benchmark is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  2. The Benchmark Test Results of QNX RTOS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. Benchmarking provides a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform.

  3. The Benchmark Test Results of QNX RTOS

    International Nuclear Information System (INIS)

    Kim, Jang Yeol; Lee, Young Jun; Cheon, Se Woo; Lee, Jang Soo; Kwon, Kee Choon

    2010-01-01

    A Real-Time Operating System (RTOS) is an Operating System (OS) intended for real-time applications. Benchmarking provides a point of reference by which something can be measured. QNX is an RTOS developed by QSSL (QNX Software Systems Ltd.) in Canada. ELMSYS is the brand name of a commercially available Personal Computer (PC) for applications such as the Cabinet Operator Module (COM) of the Digital Plant Protection System (DPPS) and the COM of the Digital Engineered Safety Features Actuation System (DESFAS). The ELMSYS PC hardware is being qualified by KTL (Korea Testing Lab.) for use as a Cabinet Operator Module (COM). The QNX RTOS is being dedicated by the Korea Atomic Energy Research Institute (KAERI). This paper describes the outline and benchmarking test results on context switching, message passing, synchronization and deadline violation of the QNX RTOS on the ELMSYS PC platform.

  4. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  5. Benchmark West Texas Intermediate crude assayed

    International Nuclear Information System (INIS)

    Rhodes, A.K.

    1994-01-01

    The paper gives an assay of West Texas Intermediate, one of the world's marker crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world. The WTI price is used as a benchmark for pricing all other US crude oils. The 41° API, <0.34 wt % sulfur crude is gathered in West Texas and moved to Cushing, Okla., for distribution. The WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other US crudes are priced. The spot price is the negotiated price for short-term trades of the crude. And the New York Mercantile Exchange, or Nymex, price is a futures price for barrels delivered at Cushing

  6. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking implies a continuous process of performance improvement in organizations, aimed at gaining superiority over those competitors perceived as market leaders. This superiority can always be questioned, its relativity originating in the fast-moving evolution of the economic environment. The approach supports innovation relative to traditional methods and is driven by managers who want to determine limits and seek excellence. The end of the twentieth century was the period of benchmarking's broad adoption in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  7. BENCHMARKING - PRACTICAL TOOLS IDENTIFY KEY SUCCESS FACTORS

    Directory of Open Access Journals (Sweden)

    Olga Ju. Malinina

    2016-01-01

    Full Text Available The article gives a practical example of the application of benchmarking techniques. The object of study is the fashion store of the company «H & M Hennes & Mauritz», located in the shopping centre «Gallery», Krasnodar. The purpose of this article is to identify the best ways to develop the Hennes & Mauritz fashion clothing store on the basis of benchmarking techniques. On the basis of the market research conducted, a comparative analysis of the data is made from different perspectives. The result of the author's study is a generalization of the findings and the development of key success factors that will allow successful trading activities to be planned in the future, based on the best experience of competitors.

  8. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 experimental cases have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases range widely between 0.955 and 1.045 due to the wide range of system variables. (author)

  9. Summary of ACCSIM and ORBIT Benchmarking Simulations

    CERN Document Server

    AIBA, M

    2009-01-01

    We have performed a benchmarking study of ORBIT and ACCSIM, two accelerator tracking codes with routines to evaluate space-charge effects. The study is motivated by the need to predict and understand beam behaviour in the CERN Proton Synchrotron Booster (PSB), in which direct space charge is expected to be the dominant performance limitation. Historically at CERN, ACCSIM has been employed for space charge simulation studies. A benchmark study using ORBIT has been started to confirm the results from ACCSIM and to profit from the advantages of ORBIT, such as the capability of parallel processing. We observed a fair agreement in emittance evolution in the horizontal plane but not in the vertical one. This may be partly due to the fact that the algorithm to compute the space charge field differs between the two codes.

  10. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  11. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    1977-06-01

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  12. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative way to validate it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  13. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  14. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  15. Benchmarking and validation activities within JEFF project

    OpenAIRE

    Cabellos O.; Alvarez-Velarde F.; Angelone M.; Diez C.J.; Dyrda J.; Fiorito L.; Fischer U.; Fleming M.; Haeck W.; Hill I.; Ichou R.; Kim D. H.; Klix A.; Kodeli I.; Leconte P.

    2017-01-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient be...

  16. Healthy Foodservice Benchmarking and Leading Practices

    Science.gov (United States)

    2012-07-01

    cafeterias, managed by the Military and companies such as ARAMARK (Rolfsen, 2010) and ... machine, a cafeteria line, a table where a patron gives his or her selection to a waiter, a cashier's counter, a drive-thru window, a phone where orders ... Nutrition and Weight Management Center at Boston Medical Center, the Medical Director of the Obesity Consult Center at Tufts University School of

  17. The challenge of benchmarking health systems

    OpenAIRE

    Lapão, Luís Velez

    2015-01-01

    WOS:000359623300001 PMID: 26301085 The article by Catan et al. presents a benchmarking exercise comparing Israel and Portugal on the implementation of Information and Communication Technologies in the healthcare sector. Special attention was given to e-Health and m-Health. The authors collected information via a set of interviews with key stakeholders. They compared two different cultures and societies, which have reached slightly different implementation outcomes. Although the comparison ...

  18. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  19. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  20. Reactor calculation benchmark PCA blind test results

    Energy Technology Data Exchange (ETDEWEB)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  1. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated with the MCNP4C code for the benchmark assembly, and comparisons with experimental results are given. (author)

  2. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they typically lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries in the food and drink industries successfully tested a package of interactive instruments which offers such a facilitated approach. A number of pilot companies show a profit increase of 3 up to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for audits and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to benchmark their energy situation anonymously against others in the same sector. SMEs can participate in a unique web-based benchmarking system to benchmark interactively in a way which fully guarantees the confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach to support the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs from various sectors is currently being developed.
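
    Anonymous sector benchmarking of the kind described reduces to placing one company's energy performance indicator within the distribution reported by its peers. The sketch below is illustrative only; the indicator, figures and sector sample are hypothetical, not data from the BESS project.

    ```python
    from statistics import median

    # Hypothetical specific energy consumption (kWh per tonne of product) reported
    # anonymously by food-and-drink SMEs in one sector, plus one company's own value.
    sector_values = [310, 275, 402, 350, 290, 330, 365, 280, 415, 300]
    own_value = 320

    share_better = sum(v < own_value for v in sector_values) / len(sector_values)
    print(f"{share_better:.0%} of peers use less energy per tonne; "
          f"sector median is {median(sector_values):.0f} kWh/t")
    ```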

  3. FENDL-2 and associated benchmark calculations

    International Nuclear Information System (INIS)

    Pashchenko, A.B.; Muir, D.W.

    1992-03-01

    The present Report contains the Summary of the IAEA Advisory Group Meeting on ''The FENDL-2 and Associated Benchmark Calculations'' convened on 18-22 November 1991, at the IAEA Headquarters in Vienna, Austria, by the IAEA Nuclear Data Section. The Advisory Group Meeting Conclusions and Recommendations and the Report on the Strategy for the Future Development of the FENDL and on Future Work towards establishing FENDL-2 are also included in this Summary Report. (author). 1 ref., 4 tabs

  4. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  5. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.
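
    A code-to-code eigenvalue comparison of this kind is usually tabulated as the multiplication-factor and reactivity difference at each burnup step. The sketch below uses hypothetical k-infinity values, not the study's actual MOCUP or CASMO-4 results.

    ```python
    # Hypothetical k-infinity values versus burnup (MWd/kgHM) from two lattice codes.
    burnup   = [0.0, 10.0, 30.0, 60.0, 100.0]
    k_code_a = [1.3802, 1.2511, 1.1308, 1.0125, 0.9240]
    k_code_b = [1.3785, 1.2490, 1.1295, 1.0138, 0.9261]

    for bu, ka, kb in zip(burnup, k_code_a, k_code_b):
        dk_percent = 100.0 * (ka - kb) / kb          # relative eigenvalue difference
        d_rho_pcm = 1.0e5 * (1.0 / kb - 1.0 / ka)    # reactivity difference in pcm
        print(f"{bu:6.1f} MWd/kgHM  dk = {dk_percent:+.2f} %   d_rho = {d_rho_pcm:+6.0f} pcm")
    ```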

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
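
    A simple scalar diagnostic of the kind mentioned for the Taylor-Green vortex is the volume-averaged kinetic energy, whose decay in time can be compared directly between codes. The sketch below only evaluates it for the classical initial condition on a periodic grid; it is a standalone illustration, not MUSIC's implementation.

    ```python
    import numpy as np

    # Classical Taylor-Green initial velocity field on a periodic [0, 2*pi]^3 grid.
    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    u =  np.sin(X) * np.cos(Y) * np.cos(Z)
    v = -np.cos(X) * np.sin(Y) * np.cos(Z)
    w =  np.zeros_like(u)

    # Volume-averaged kinetic energy: the scalar followed in time to compare codes.
    ke = 0.5 * np.mean(u**2 + v**2 + w**2)
    print(f"initial mean kinetic energy: {ke:.4f}")   # analytic value is 1/8 = 0.125
    ```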

  7. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods by comparing with a sample of existing plants. This paper presents a model based approach for benchmarking of energy intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, and heat loss equations for the different zones and empirical equations based on operating practices. The model is checked with field data from end fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%
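
    The model-based benchmark rests on an energy balance: fuel input per kilogram of glass equals the useful heat carried by the glass plus the losses from each zone. The sketch below is illustrative; the loss breakdown is assumed and chosen only so that the totals reproduce the reported 3830 kJ/kg and roughly 53% useful heat fraction.

    ```python
    # Hypothetical daily energy balance for a 100 TPD end-fired glass furnace.
    pull_rate_kg_per_day = 100_000
    useful_heat_kj_per_kg = 2_030          # heat carried by the molten glass (assumed)
    losses_kj_per_kg = {                   # assumed zone losses per kg of glass
        "flue gas": 1_120,
        "structure (walls, crown)": 450,
        "regenerator and openings": 230,
    }

    fuel_input_kj_per_kg = useful_heat_kj_per_kg + sum(losses_kj_per_kg.values())
    print(f"specific energy consumption: {fuel_input_kj_per_kg} kJ/kg of glass")
    print(f"useful heat fraction: {useful_heat_kj_per_kg / fuel_input_kj_per_kg:.0%}")
    print(f"fuel energy per day: {fuel_input_kj_per_kg * pull_rate_kg_per_day / 1e6:.0f} GJ")
    ```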

  8. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  9. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2005-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts as well as for current applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for coupling core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present report is the second in a series of four and summarises the results of the first benchmark exercise, which identifies the key parameters and important issues concerning the thermalhydraulic system modelling of the transient, with specified core average axial power distribution and fission power time transient history. The transient addressed is a turbine trip in a boiling water reactor, involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the Peach Bottom 2 reactor (a GE-designed BWR/4) make the present benchmark particularly valuable. (author)

  10. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for the purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  11. Benchmarking for On-Scalp MEG Sensors.

    Science.gov (United States)

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  12. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and distributed for some benchmark analyses recently. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with those based on ENDF/B6. In most thermal and fast uranium criticality benchmarks, the keff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3. This is due to the re-evaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium

  13. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    Science.gov (United States)

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
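
    Under the EqP approach, the interstitial-water concentration estimated from the measured sediment concentration, the organic-carbon fraction and the partition coefficient is compared with a water-only effects concentration as a toxic unit. The sketch below uses hypothetical values, not the benchmarks tabulated in the document.

    ```python
    # Hypothetical inputs for a single nonionic organic chemical in one sediment sample.
    c_sediment_ug_per_kg = 1_200.0   # dry-weight sediment concentration
    f_oc = 0.02                      # fraction organic carbon in the sediment
    log_koc = 4.5                    # organic carbon-water partition coefficient (L/kg OC)
    fcv_ug_per_l = 1.8               # water-only effects concentration (e.g. a chronic value)

    # EqP estimate of the interstitial-water concentration, then toxic units.
    c_interstitial = c_sediment_ug_per_kg / (f_oc * 10**log_koc)   # ug/L
    toxic_units = c_interstitial / fcv_ug_per_l
    print(f"interstitial water: {c_interstitial:.2f} ug/L -> {toxic_units:.2f} toxic units")
    # A value above 1 toxic unit would suggest likely adverse effects on benthic organisms.
    ```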

  14. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  15. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  16. Benchmarking Organisational Capability using The 20 Keys

    Directory of Open Access Journals (Sweden)

    Dino Petrarolo

    2012-01-01

    Full Text Available Organisations have over the years implemented many improvement initiatives, many of which were applied individually with no real, lasting improvement. Approaches such as quality control, team activities, setup reduction and many more seldom changed the fundamental constitution or capability of an organisation. Leading companies in the world have come to realise that an integrated approach is required which focuses on improving more than one factor at the same time - by recognising the importance of synergy between different improvement efforts and the need for commitment at all levels of the company to achieve total system-wide improvement.

    The 20 Keys approach offers a way to look at the strength of organisations and to systemically improve it, one step at a time, by focusing on 20 different but interrelated aspects. One feature of the approach is the benchmarking system, which forms the main focus of this paper. The benchmarking system is introduced as an important part of the 20 Keys philosophy in measuring organisational strength. Benchmarking results from selected South African companies are provided, as well as one company's results achieved through the adoption of the 20 Keys philosophy.

  17. A Pan-STARRS1 Proper-Motion Survey for Young Brown Dwarfs in the Nearest Star-Forming Regions and a Reddening-Free Classification Method for Ultracool Dwarfs

    Science.gov (United States)

    Zhang, Zhoujian; Liu, Michael C.; Best, William M. J.; Magnier, Eugene; Aller, Kimberly

    2018-01-01

    Young brown dwarfs are of prime importance to investigate the universality of the initial mass function (IMF). Based on photometry and proper motions from the Pan-STARRS1 (PS1) 3π survey, we are conducting the widest and deepest brown dwarf survey in the nearby star-forming regions, Taurus–Auriga (Taurus) and Upper Scorpius (USco). Our work is the first to measure proper motions, a robust proxy of membership, for brown dwarf candidates in Taurus and USco over such a large area and long time baseline (≈ 15 year) with such high precision (≈ 4 mas yr-1). Since extinction complicates spectral classification, we have developed a new approach to quantitatively determine reddening-free spectral types, extinctions, and gravity classifications for mid-M to late-L ultracool dwarfs (≈ 100–5 MJup), using low-resolution near-infrared spectra. So far, our IRTF/SpeX spectroscopic follow-up has increased the substellar and planetary-mass census of Taurus by ≈ 50% and almost doubled the substellar census of USco, constituting the largest single increases of brown dwarfs and free-floating planets found in both regions to date. Most notably, our new discoveries reveal an older (> 10 Myr) low-mass population in Taurus, in accord with recent studies of the higher-mass stellar members. In addition, the mass function appears to differ between the younger and older Taurus populations, possibly due to incompleteness of the older stellar members or different star formation processes. Upon completion, our survey will establish the most complete substellar and planetary-mass census in both Taurus and USco associations, make a significant addition to the low-mass IMF in both regions, and deliver more comprehensive pictures of star formation histories.

  18. The Pan-STARRS1 Proper-motion Survey for Young Brown Dwarfs in Nearby Star-forming Regions. I. Taurus Discoveries and a Reddening-free Classification Method for Ultracool Dwarfs

    Science.gov (United States)

    Zhang, Zhoujian; Liu, Michael C.; Best, William M. J.; Magnier, Eugene A.; Aller, Kimberly M.; Chambers, K. C.; Draper, P. W.; Flewelling, H.; Hodapp, K. W.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Wainscoat, R. J.; Waters, C.

    2018-05-01

    We are conducting a proper-motion survey for young brown dwarfs in the Taurus-Auriga molecular cloud based on the Pan-STARRS1 3π Survey. Our search uses multi-band photometry and astrometry to select candidates, and is wider (370 deg2) and deeper (down to ≈3 M Jup) than previous searches. We present here our search methods and spectroscopic follow-up of our high-priority candidates. Since extinction complicates spectral classification, we have developed a new approach using low-resolution (R ≈ 100) near-infrared spectra to quantify reddening-free spectral types, extinctions, and gravity classifications for mid-M to late-L ultracool dwarfs (≲100–3 M Jup in Taurus). We have discovered 25 low-gravity (VL-G) and the first 11 intermediate-gravity (INT-G) substellar (M6–L1) members of Taurus, constituting the largest single increase of Taurus brown dwarfs to date. We have also discovered 1 new Pleiades member and 13 new members of the Perseus OB2 association, including a candidate very wide separation (58 kau) binary. We homogeneously reclassify the spectral types and extinctions of all previously known Taurus brown dwarfs. Altogether our discoveries have thus far increased the substellar census in Taurus by ≈40% and added three more L-type members (≲5–10 M Jup). Most notably, our discoveries reveal an older (>10 Myr) low-mass population in Taurus, in accord with recent studies of the higher-mass stellar members. The mass function appears to differ between the younger and older Taurus populations, possibly due to incompleteness of the older stellar members or different star formation processes.

  19. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  20. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

    Full Text Available Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  1. MANAGING BENCHMARKING IN A CORPORATE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    D.M. Mouton

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Most new generation organisations have management models and processes for measuring and managing organisational performance. However, the application of these models and the direction the company needs to take are not always clearly established. Benchmarking can be defined as the search for industry best practices that lead to superior performance. The emphasis is on “best” and “superior”. There are no limitations on the search; the more creative the thinking, the greater the potential reward. Unlike traditional competitive analysis that focuses on outputs, benchmarking is applied to key operational processes within the business. Processes are compared and the best process is adapted into the organisation. Benchmarking is not guaranteed to be successful, though; it needs to be managed and nurtured in the organisation and allowed to grow throughout the organisation to finally become a way of life. It also needs to be integrated into key business processes in order to ensure that the benefits can be reaped into the distant future. This paper provides guidelines for creating, managing and sustaining a benchmarking capability in a corporation.

    AFRIKAANSE OPSOMMING: The new generation of enterprises has management models and processes that facilitate the measurement and management of organisational performance. The way in which these models are applied, and how the enterprise should shape its decisions, has not yet been thoroughly worked out. Benchmarking ("praktykvergelyking") is described as the search for best operational practices that lead to superior performance. The emphasis is placed on the words "best" and "superior". The search is in no way limited; the more creative the approach, the greater the potential reward. Where traditional competitive analysis examines enterprise outputs, benchmarking is applied to key processes in the operation of the enterprise. Processes are compared with one another

  2. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  3. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

    In order to properly manage the risk of a nuclear criticality accident, it is important to establish the conditions for which such an accident becomes possible for any activity involving fissile material. Only when this information is known is it possible to establish the likelihood of actually achieving such conditions. It is therefore important that criticality safety analysts have confidence in the accuracy of their calculations. Confidence in analytical results can only be gained through comparison of those results with experimental data. The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the US Department of Energy. The project was managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involved nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994 and the project became what is currently known as the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Kazakhstan, Korea, Slovenia, Yugoslavia, Spain, and Israel are now participating on the project. In December of 1994, the ICSBEP became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency's (OECD-NEA) Nuclear Science Committee. The United States currently remains the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: (1) identify and evaluate a comprehensive set of critical benchmark data; (2) verify the data, to the extent possible, by reviewing original and subsequently revised documentation, and by talking with the

  4. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several...... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters...... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
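
    The kind of measurement such a tool automates can be sketched in a few lines: time an approximate index against exact brute-force search and report recall@k and queries per second. The "approximate" method below (searching a random 10% subset of the data) is a deliberately naive stand-in for illustration, not one of the algorithms the tool actually wraps, and the data are synthetic.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((20_000, 32)).astype(np.float32)   # indexed vectors
queries = rng.standard_normal((200, 32)).astype(np.float32)
k = 10

def exact_knn(q, pts, k):
    d = np.linalg.norm(pts - q, axis=1)
    return np.argsort(d)[:k]

# Naive "approximate" search: only look at a random 10% subset of the data.
subset = rng.choice(len(data), size=len(data) // 10, replace=False)

def approx_knn(q, k):
    d = np.linalg.norm(data[subset] - q, axis=1)
    return subset[np.argsort(d)[:k]]

truth = [exact_knn(q, data, k) for q in queries]

t0 = time.perf_counter()
approx = [approx_knn(q, k) for q in queries]
elapsed = time.perf_counter() - t0

recall = np.mean([len(set(a) & set(t)) / k for a, t in zip(approx, truth)])
print(f"recall@{k} = {recall:.3f}, {len(queries) / elapsed:.0f} queries/s")
```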

  5. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulations of sheet metal forming processes have been a very challenging topic in industry. Many computer codes and modeling techniques exist today. However, there are many unknowns affecting the prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracy of each parameter of interest are discussed in this report

  6. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks: what is the ‘best’ way to measure urban energy efficiency? There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight into the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, the ability to identify outliers and the ability to provide consistent rankings. Efficient areas are diverse but are notably found in low-income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
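
    Two of the three measures compared in the study, simple ratios and regression residuals, can be contrasted with a toy calculation: rank areas by energy per capita, then rank them by the residual from a regression of energy use on population (a fuller model would also control for climate, income and industrial structure). The data and the single-covariate model below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.uniform(50_000, 1_000_000, size=8)
# Synthetic energy use: roughly proportional to population, plus noise (toy units).
energy = 25.0 * population * rng.uniform(0.8, 1.2, size=8)

# 1) Ratio measure: energy per capita (lower = more "efficient").
ratio = energy / population

# 2) Regression residual: fit energy ~ population, rank by residual
#    (negative residual = uses less energy than the model predicts).
X = np.column_stack([np.ones_like(population), population])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
residual = energy - X @ coef

for name, score in [("per-capita ratio", ratio), ("regression residual", residual)]:
    order = np.argsort(score)
    print(name, "ranking (best to worst):", order.tolist())
```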

  7. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced by using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235 U were calculated also from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results by JENDL-2 are considerably improved in comparison with those by ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only 235 U) data. (2) Lattice cell parameters: For the rho 28 (the ratio of epithermal to thermal 238 U captures) and C* (the ratio of 238 U captures to 235 U fissions), the values calculated by JENDL-2 are in good agreement with the experimental values. The delta 28 values (the ratio of 238 U to 235 U fissions) are overestimated, as also found for the fast reactor benchmarks. The rho 02 values (the ratio of epithermal to thermal 232 Th captures) calculated by JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description will be given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  8. Regional restoration benchmarks for Acropora cervicornis

    Science.gov (United States)

    Schopmeyer, Stephanie A.; Lirman, Diego; Bartels, Erich; Gilliam, David S.; Goergen, Elizabeth A.; Griffin, Sean P.; Johnson, Meaghan E.; Lustic, Caitlin; Maxwell, Kerry; Walter, Cory S.

    2017-12-01

    Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals have allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis. Here, we compiled an extensive dataset collected by restoration practitioners to document early (1-2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals during normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just like wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr-1 for nursery corals and 4.8 cm yr-1 for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the
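
    The stop-light framework and first-year benchmarks quoted above translate directly into a simple screening rule: compare a program's reported metrics against the proposed thresholds and flag each one green or red. The thresholds below are the ones stated in the abstract; the 5-percentage-point "yellow" near-miss band is an added assumption for illustration.

```python
# First-year A. cervicornis benchmarks from the abstract (percent).
BENCHMARKS = {
    "donor_live_tissue": 75.0,      # >75% live tissue cover on donor colonies
    "nursery_survivorship": 80.0,   # >80% survivorship of nursery corals
    "outplant_survivorship": 70.0,  # >70% survivorship of outplanted corals
}
NEAR_MISS_MARGIN = 5.0  # assumed width of the "yellow" band, in percentage points

def stoplight(program_metrics):
    """Return a green/yellow/red rating for each reported metric."""
    report = {}
    for metric, threshold in BENCHMARKS.items():
        value = program_metrics.get(metric)
        if value is None:
            report[metric] = "not reported"
        elif value >= threshold:
            report[metric] = "green"
        elif value >= threshold - NEAR_MISS_MARGIN:
            report[metric] = "yellow"
        else:
            report[metric] = "red"
    return report

example = {"donor_live_tissue": 82.0, "nursery_survivorship": 77.0, "outplant_survivorship": 64.0}
print(stoplight(example))   # e.g. {'donor_live_tissue': 'green', 'nursery_survivorship': 'yellow', ...}
```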

  9. Human factors reliability benchmark exercise: a review

    International Nuclear Information System (INIS)

    Humphreys, P.

    1990-01-01

    The Human Factors Reliability Benchmark Exercise has addressed the issues of identification, analysis, representation and quantification of Human Error in order to identify the strengths and weaknesses of available techniques. Using a German PWR nuclear powerplant as the basis for the studies, fifteen teams undertook evaluations of a routine functional Test and Maintenance procedure plus an analysis of human actions during an operational transient. The techniques employed by the teams are discussed and reviewed on a comparative basis. The qualitative assessments performed by each team compare well, but at the quantification stage there is much less agreement. (author)

  10. Benchmarks of Global Clean Energy Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  11. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  12. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  13. ABM11 parton distributions and benchmarks

    International Nuclear Information System (INIS)

    Alekhin, Sergey; Bluemlein, Johannes; Moch, Sven-Olaf

    2012-08-01

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

  14. ABM11 parton distributions and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, Sergey [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institut Fiziki Vysokikh Ehnergij, Protvino (Russian Federation); Bluemlein, Johannes; Moch, Sven-Olaf [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-08-15

    We present a determination of the nucleon parton distribution functions (PDFs) and of the strong coupling constant α_s at next-to-next-to-leading order (NNLO) in QCD based on the world data for deep-inelastic scattering and the fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS scheme for α_s and the heavy quark masses. The fit results are compared with other PDFs and used to compute the benchmark cross sections at hadron colliders to the NNLO accuracy.

  15. Air Quality Monitoring System and Benchmarking

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts

    2017-01-01

    Air quality monitoring has become an integral part of smart city solutions. This paper presents an air quality monitoring system based on Internet of Things (IoT) technologies, and establishes a cloud-based platform to address the challenges related to IoT data management and processing capabilities, including data collection, storage, analysis, and visualization. In addition, this paper also benchmarks four state-of-the-art database systems to investigate the appropriate technologies for managing large-scale IoT datasets....
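
    The database benchmarking mentioned here boils down to timing the same ingest and query workload against each candidate backend. A minimal harness of that shape is sketched below, with SQLite standing in for the systems actually compared in the paper (which are not named in this record); the table layout and row counts are invented.

```python
import sqlite3
import time

def benchmark_sqlite(n_rows=50_000):
    """Time a bulk insert and a simple aggregate query on a toy air-quality table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (sensor_id INTEGER, ts INTEGER, pm25 REAL)")

    rows = [(i % 100, i, 10.0 + (i % 50)) for i in range(n_rows)]
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    conn.commit()
    insert_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    conn.execute("SELECT sensor_id, AVG(pm25) FROM readings GROUP BY sensor_id").fetchall()
    query_s = time.perf_counter() - t0
    conn.close()

    return {"insert_rows_per_s": n_rows / insert_s, "aggregate_query_s": query_s}

print(benchmark_sqlite())
```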

  16. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich

    2015-01-01

    In multicomponent electrolyte solutions, the tendency of ions to diffuse at different rates results in a charge imbalance that is counteracted by the electrostatic coupling between charged species leading to a process called “electrochemical migration” or “electromigration.” Although not commonly...... not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes...
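
    For context, the electromigration coupling that these benchmark problems exercise is usually written via the Nernst–Planck flux, in which the electrostatic potential gradient couples the diffusing ions. The form below is the standard textbook expression, given here only as orientation, not as the exact formulation used in the benchmark specifications.

```latex
% Nernst-Planck flux for ionic species i: Fickian diffusion plus the
% electromigration term driven by the electrostatic potential \phi.
J_i \;=\; -D_i\,\nabla c_i \;-\; \frac{z_i F D_i c_i}{R T}\,\nabla\phi ,
\qquad \text{with the null-current condition} \quad \sum_i z_i J_i = 0 .
```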

  17. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should allow to get confidence in computer codes which are supposed to provide a realistic prediction of the LMFBR component behaviour. The calculations started on static analysis of typical structures made of non linear materials stressed by cyclic loads. The fluid structure interaction analysis is also being considered. Reasons and details of the different benchmark calculations are described, results obtained are commented and future computational exercise indicated

  18. International Benchmarking of Electricity Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2014-01-01

    Electricity transmission system operators (TSO) in Europe are increasing subject to high-powered performance-based regulation, such as revenue-cap regimes. The determination of the parameters in such regimes is challenging for national regulatory authorities (NRA), since there is normally a single...... TSO operating in each jurisdiction. The solution for European regulators has been found in international regulatory benchmarking, organized in collaboration with the Council of European Energy Regulators (CEER) in 2008 and 2012 for 22 and 23 TSOs, respectively. The frontier study provides static cost...... weight restrictions and a correction method for opening balances....
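
    The frontier benchmarking referred to here is typically a data envelopment analysis (DEA); an input-oriented, constant-returns-to-scale (CCR) efficiency score for one operator can be computed with a small linear program. The cost and output figures below are made up, and the sketch omits the weight restrictions and opening-balance corrections an actual regulatory study would add.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: one input (total cost) and two outputs (grid length, peak load) per TSO.
inputs = np.array([[100.0], [120.0], [90.0], [150.0]])                     # (n_dmu, n_in)
outputs = np.array([[300.0, 50.0], [320.0, 55.0], [310.0, 48.0], [330.0, 60.0]])

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of unit o: minimise theta over (theta, lambda_1..n)."""
    n = len(inputs)
    c = np.zeros(n + 1)
    c[0] = 1.0                                         # objective: minimise theta
    A_ub, b_ub = [], []
    for i in range(inputs.shape[1]):                   # sum_j lam_j x_ji <= theta * x_oi
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(outputs.shape[1]):                  # sum_j lam_j y_jr >= y_or
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    bounds = [(None, None)] + [(0.0, None)] * n        # theta free, lambdas non-negative
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(len(inputs)):
    print(f"TSO {o}: efficiency = {ccr_efficiency(o):.3f}")
```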

  19. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates performance upon aggregation. The model is tested upon seven cases from Japan and Denmark. Japanese...... hospitals' focus on productivity and reducing errors provides operational benefits, which are primarily achieved through a high degree of overwork among staff. Danish hospitals, on the contrary, pay the price in productivity, focusing instead on meeting the caring needs of patients and limiting overwork among employees....

  20. 2D 1/2 graphical benchmark

    International Nuclear Information System (INIS)

    Brochard, P.; Colin De Verdiere, G.; Nomine, J.P.; Perros, J.P.

    1993-01-01

    Within the framework of the development of a new version of the Psyche software, the author reports a benchmark study of different graphical libraries and systems and of the Psyche application. The author outlines the current context of graphical tool development, which still lacks standardisation. This makes the comparison somewhat limited and ultimately tied to the envisaged applications. The author presents the various systems and libraries, the test principles, and the characteristics of the machines. Results and interpretations are then presented with reference to the problems encountered

  1. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k_eff for 68 fast critical assemblies of 233,235 U and 239 Pu with reflectors of various materials and thicknesses

  2. Benchmark simulations of ICRF antenna coupling

    International Nuclear Information System (INIS)

    Louche, F.; Lamalle, P. U.; Messiaen, A. M.; Compernolle, B. van; Milanesio, D.; Maggiora, R.

    2007-01-01

    The paper reports on ongoing benchmark numerical simulations of antenna input impedance parameters in the ion cyclotron range of frequencies with different coupling codes: CST Microwave Studio, TOPICA and ANTITER 2. In particular we study the validity of the approximation of a magnetized plasma slab by a dielectric medium of suitably chosen permittivity. Different antenna models are considered: a single-strap antenna, a 4-strap antenna and the 24-strap ITER antenna array. Whilst the diagonal impedances are mostly in good agreement, some differences between the mutual terms predicted by Microwave Studio and TOPICA have yet to be resolved

  3. Calculation of a VVER-1000 reactor benchmark (Calcul d'un benchmark de reacteur VVER-1000)

    Energy Technology Data Exchange (ETDEWEB)

    Dourougie, C

    1998-07-01

    In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low-enriched uranium (LEU) and mixed-oxide (MOX) fuels. The calculations are done for a wide range of temperatures and soluble boron concentrations, including accident conditions. (A.L.B.)

  4. Parallel Ada benchmarks for the SVMS

    Science.gov (United States)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible, so as to constitute a suitable benchmark. Additionally, a set of programs was developed that would measure Ada tasking efficiency on parallel architectures as well as determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools in the development of the SVMS architecture.

  5. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
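
    The "simple mathematical law" in this line of work is reported as a power law; the expressions below give its generic form (a power-law severity distribution and a progress-curve escalation of event timing) for orientation only, with the exponents left symbolic rather than quoting values from the paper.

```latex
% Generic form of the power-law benchmark discussed in this line of work.
% P(s) is the probability that an event has severity s; tau_n is the interval
% between the n-th and (n+1)-th event by a given perpetrator; alpha and b are
% empirically fitted exponents.
P(s) \;\propto\; s^{-\alpha},
\qquad
\tau_n \;=\; \tau_1\, n^{-b}.
```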

  6. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  7. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  8. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  9. Multiscale benchmarking of drug delivery vectors.

    Science.gov (United States)

    Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul

    2016-10-01

    Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in-vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule polyethylene imine (PEI), delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and silica microparticles. The exposure metrics are converted to a delivered dose, with the transport properties of the different scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron-sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    Science.gov (United States)

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  11. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations such as identifying overlapping/non-overlapping regions or nearest gene annotations are common research needs. The data can be saved in a database system for easy management, however, there is no comprehensive database built-in algorithm at present to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
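
    The core operation being benchmarked, finding overlapping genomic regions, reduces to a standard interval-intersection predicate in SQL. The snippet below shows that predicate on SQLite purely for illustration; it is not the RegMap algorithm, nor the schema or databases used in the study.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE peaks (chrom TEXT, start_pos INTEGER, end_pos INTEGER, name TEXT);
CREATE TABLE genes (chrom TEXT, start_pos INTEGER, end_pos INTEGER, name TEXT);
""")
conn.executemany("INSERT INTO peaks VALUES (?,?,?,?)",
                 [("chr1", 100, 200, "peak1"), ("chr1", 500, 600, "peak2")])
conn.executemany("INSERT INTO genes VALUES (?,?,?,?)",
                 [("chr1", 150, 400, "geneA"), ("chr1", 700, 900, "geneB")])

# Two half-open intervals [start, end) on the same chromosome overlap when
# each one starts before the other ends.
overlaps = conn.execute("""
    SELECT p.name, g.name
    FROM peaks p JOIN genes g
      ON p.chrom = g.chrom
     AND p.start_pos < g.end_pos
     AND g.start_pos < p.end_pos
""").fetchall()
print(overlaps)   # [('peak1', 'geneA')]
```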

  12. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing ethos,’ which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using a business simulation with Bachelor of Business Year 3 learners in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  13. Benchmarking the internal combustion engine and hydrogen

    International Nuclear Information System (INIS)

    Wallace, J.S.

    2006-01-01

    The internal combustion engine is a cost-effective and highly reliable energy conversion technology. Exhaust emission regulations introduced in the 1970's triggered extensive research and development that has significantly improved in-use fuel efficiency and dramatically reduced exhaust emissions. The current level of gasoline vehicle engine development is highlighted and representative emissions and efficiency data are presented as benchmarks. The use of hydrogen fueling for IC engines has been investigated over many decades and the benefits and challenges arising are well-known. The current state of hydrogen-fueled engine development will be reviewed and evaluated against gasoline-fueled benchmarks. The prospects for further improvements to hydrogen-fueled IC engines will be examined. While fuel cells are projected to offer greater energy efficiency than IC engines and zero emissions, the availability of fuel cells in quantity at reasonable cost is a barrier to their widespread adaptation for the near future. In their current state of development, hydrogen fueled IC engines are an effective technology to create demand for hydrogen fueling infrastructure until fuel cells become available in commercial quantities. During this transition period, hydrogen fueled IC engines can achieve PZEV/ULSLEV emissions. (author)

  14. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
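
    For context, the one-dimensional advective transport equation that both codes approximate is conventionally written, with a longitudinal dispersion term and first-order decay, as below. This is the standard textbook form rather than the exact formulation used in STREAM or WASP5.

```latex
% One-dimensional advection-dispersion equation with first-order decay for the
% cross-section-averaged concentration C(x, t); u is the stream velocity, D the
% longitudinal dispersion coefficient and k a first-order loss rate.
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
  \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}} \;-\; k\,C .
```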

  15. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

    LWR core transient (LWRCT) benchmarks, based on well defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal - hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though there remain some unresolved issues. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  16. The Medical Library Association Benchmarking Network: results.

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd

    2006-04-01

    This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be used to compare to current surveys or look for trends by comparing the data to past surveys. The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries.

  17. The Medical Library Association Benchmarking Network: results*

    Science.gov (United States)

    Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd

    2006-01-01

    Objective: This article presents some limited results from the Medical Library Association (MLA) Benchmarking Network survey conducted in 2002. Other uses of the data are also presented. Methods: After several years of development and testing, a Web-based survey opened for data input in December 2001. Three hundred eighty-five MLA members entered data on the size of their institutions and the activities of their libraries. The data from 344 hospital libraries were edited and selected for reporting in aggregate tables and on an interactive site in the Members-Only area of MLANET. The data represent a 16% to 23% return rate and have a 95% confidence level. Results: Specific questions can be answered using the reports. The data can be used to review internal processes, perform outcomes benchmarking, retest a hypothesis, refute previous survey findings, or develop library standards. The data can be used to compare to current surveys or look for trends by comparing the data to past surveys. Conclusions: The impact of this project on MLA will reach into areas of research and advocacy. The data will be useful in the everyday working of small health sciences libraries as well as provide concrete data on the current practices of health sciences libraries. PMID:16636703

  18. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in the last years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under the circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  19. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (eg, varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  20. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database

  1. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.
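
    A regional comparison of the kind Cal-Arch supports amounts to ranking a building's energy use intensity against a peer distribution drawn from survey data. The peer values and building figures below are invented for illustration; an actual tool would draw the distribution from CEUS or CBECS records filtered by building type and climate zone.

```python
import numpy as np

# Invented peer distribution of energy use intensity (kWh per square foot per
# year) for one building type; a real tool would use survey records instead.
peer_eui = np.array([38.0, 42.5, 47.0, 51.0, 55.5, 60.0, 63.0, 70.0, 78.0, 95.0])

def benchmark_building(annual_kwh, floor_area_sqft):
    """Return the building's EUI and the share of peers that use less energy."""
    eui = annual_kwh / floor_area_sqft
    percentile = 100.0 * np.mean(peer_eui < eui)
    return eui, percentile

eui, pct = benchmark_building(annual_kwh=2_600_000, floor_area_sqft=50_000)
print(f"EUI = {eui:.1f} kWh/ft2/yr, higher than {pct:.0f}% of peer buildings")
```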

  2. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burn-up benchmarks, post-irradiation examination studies, comparison of basic evaluated data files and analysis of selected benchmarks for the Th-U fuel cycle

  3. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
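
    The IMB point-to-point tests referred to here follow the classic ping-pong pattern: bounce a message of increasing size between two ranks and report latency and bandwidth. A stripped-down version using mpi4py is sketched below (an assumption for illustration; the benchmarks themselves are C/MPI codes), run with e.g. `mpiexec -n 2 python pingpong.py` where `pingpong.py` is a hypothetical file containing this script.

```python
# Minimal ping-pong sketch in the spirit of the IMB PingPong test (mpi4py assumed).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
repeats = 100

for nbytes in (1 << 10, 1 << 16, 1 << 20):          # 1 KiB, 64 KiB, 1 MiB
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(repeats):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - t0
    if rank == 0:
        latency = elapsed / (2 * repeats)            # one-way time per message
        print(f"{nbytes:>8} B: {latency * 1e6:8.1f} us, {nbytes / latency / 1e6:8.1f} MB/s")
```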

  4. Calculation of the 5th AER dynamic benchmark with APROS

    International Nuclear Information System (INIS)

    Puska, E.K.; Kontio, H.

    1998-01-01

    The model used for calculation of the 5th AER dynamic benchmark with APROS code is presented. In the calculation of the 5th AER dynamic benchmark the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark and each six identical fuel assemblies were placed into one one-dimensional thermal hydraulic channel. The five-equation thermal hydraulic model was used in the benchmark. The plant process and automation was described with a generic VVER-440 plant model created by IVO PE. (author)

  5. Benchmarking Best Practices in Transformation for Sea Enterprise

    National Research Council Canada - National Science Library

    Brook, Douglas A; Hudgens, Bryan; Nguyen, Nam; Walsh, Katherine

    2006-01-01

    ... applied to reinvestment and recapitalization. Sea Enterprise contracted the Center for Defense Management Reform to research transformation and benchmarking best practices in the private sector...

  6. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem is comprised of a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.

  7. U.S. integral and benchmark experiments

    International Nuclear Information System (INIS)

    Maienschein, F.C.

    1978-01-01

    Verification of methods for analysis of radiation-transport (shielding) problems in Liquid-Metal Fast Breeder Reactors has required a series of experiments that can be classified as benchmark, parametric, or design-confirmation experiments. These experiments, performed at the Oak Ridge Tower Shielding Facility, have included measurements of neutron transport in bulk shields of sodium, steel, and inconel and in configurations that simulate lower axial shields, pipe chases, and top-head shields. They have also included measurements of the effects of fuel stored within the reactor vessel and of gamma-ray energy deposition (heating). The paper consists of brief comments on these experiments, and also on a recent experiment in which neutron streaming problems in a Gas-Cooled Fast Breeder Reactor were studied. The need for additional experiments for a few areas of LMFBR shielding is also cited

  8. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika; Malliaras, George G.; Rivnay, Jonathan

    2017-01-01

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
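
    For context, the figure of merit described here enters the commonly used expression for OECT steady-state transconductance, in which the product µC* multiplies a purely geometric factor; the form below follows the usual convention in the OECT literature and is given only as orientation, not as a result from this record.

```latex
% Steady-state OECT transconductance in saturation (commonly used form):
% the material figure of merit \mu C^{*} multiplies a purely geometric factor.
g_m \;=\; \frac{W\,d}{L}\;\mu C^{*}\,\bigl(V_{\mathrm{Th}} - V_{G}\bigr),
% with W, L, d the channel width, length and thickness, \mu the electronic
% mobility, C^{*} the volumetric capacitance, and V_Th, V_G the threshold and
% gate voltages.
```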

  9. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.

  10. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  11. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
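
    As an illustration of the kind of measurement such a toolkit automates (this is not the NASA toolkit itself), the sketch below times index construction and single-term query latency over a toy in-memory inverted index; the corpus is invented for the example.

```python
# Hedged sketch of a benchmark harness for indexing and search:
# time index construction and query latency over a toy inverted index.
import time
from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    return index.get(term.lower(), set())

# synthetic corpus; a real harness would read a document collection from disk
docs = {i: f"document {i} about benchmark indexing and search engines" for i in range(10_000)}

t0 = time.perf_counter()
index = build_index(docs)
t1 = time.perf_counter()
hits = search(index, "benchmark")
t2 = time.perf_counter()

print(f"index build: {t1 - t0:.3f} s, query: {(t2 - t1) * 1e6:.1f} us, hits: {len(hits)}")
```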

  12. Heavy nucleus resonant absorption calculation benchmarks

    International Nuclear Information System (INIS)

    Tellier, H.; Coste, H.; Raepsaet, C.; Van der Gucht, C.

    1993-01-01

    The calculation of the space and energy dependence of the heavy nucleus resonant absorption in a heterogeneous lattice is one of the hardest tasks in reactor physics. Because of the computer time and memory needed, it is impossible to represent finely the cross-section behavior in the resonance energy range for everyday computations. Consequently, reactor physicists use a simplified formalism, the self-shielding formalism. As no clean and detailed experimental results are available to validate the self-shielding calculations, Monte Carlo computations are used as a reference. These results, which were obtained with the TRIPOLI continuous-energy Monte Carlo code, constitute a set of numerical benchmarks that can be used to evaluate the accuracy of the techniques or formalisms that are included in any reactor physics codes. Examples of such evaluations, for the new assembly code APOLLO2 and the slowing-down code SECOL, are given for cases of 238U and 232Th fuel elements

  13. Development of an MPI benchmark program library

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, Hitoshi

    2001-03-01

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale and high performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. The MBL measures the performance of major MPI functions, such as point-to-point and collective communications, and the performance of major communication patterns often found in application programs. In this report, the description of the MBL and the performance analysis of the MPI/SX measured on the SX-4 are presented. (author)
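
    For readers unfamiliar with what such a library measures, the following hedged sketch shows the classic point-to-point ping-pong pattern that MPI benchmark suites automate, written here with mpi4py rather than MBL itself; the message size and repetition count are arbitrary.

```python
# Hedged sketch of a ping-pong latency/bandwidth measurement (not MBL itself).
# Run with, e.g.:  mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

msg = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB payload
reps = 100

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    elif rank == 1:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    rtt = (t1 - t0) / reps
    print(f"round trip: {rtt * 1e6:.1f} us, bandwidth: {2 * msg.nbytes / rtt / 1e9:.2f} GB/s")
```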

  14. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course of an undergraduate business school education. This paper presents case analysis of the research-oriented participatory education curriculum developed at Copenhagen Business School because it appears uniquely suited, by a curious mix of Danish education tradition and deliberate innovation, to offer an educational experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  15. Gatemon Benchmarking and Two-Qubit Operation

    Science.gov (United States)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
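
    As a hedged illustration of how randomized-benchmarking data of this kind are typically reduced (this is not the authors' analysis code), the sketch below fits synthetic average sequence fidelities to F(m) = A·p^m + B and converts the decay parameter to an error per Clifford.

```python
# Hedged sketch: reduce single-qubit randomized-benchmarking data by fitting
# F(m) = A * p**m + B and converting p to an error per Clifford,
# r = (1 - p) * (d - 1) / d with d = 2. The data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

lengths = np.array([2, 5, 10, 20, 50, 100, 200])
rng = np.random.default_rng(0)
true_p = 0.99                    # corresponds to ~0.5% error per Clifford
fidelity = 0.5 * true_p**lengths + 0.5 + rng.normal(0, 0.005, lengths.size)

(A, p, B), _ = curve_fit(rb_decay, lengths, fidelity, p0=[0.5, 0.98, 0.5])
error_per_clifford = (1 - p) / 2
print(f"p = {p:.4f}, error per Clifford = {error_per_clifford:.4%}")
```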

  16. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.

  17. Development of solutions to benchmark piping problems

    Energy Technology Data Exchange (ETDEWEB)

    Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M

    1977-12-01

    Benchmark problems and their solutions are presented. The problems consist in calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.

  18. HELIOS calculations for UO2 lattice benchmarks

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1998-01-01

    Calculations for the ANS UO2 lattice benchmark have been performed with the HELIOS lattice-physics code and six of its cross-section libraries. The results obtained from the different libraries permit conclusions to be drawn regarding the adequacy of the energy group structures and of the ENDF/B-VI evaluation for 238U. Scandpower A/S, the developer of HELIOS, provided Los Alamos National Laboratory with six different cross section libraries. Three of the libraries were derived directly from Release 3 of ENDF/B-VI (ENDF/B-VI.3) and differ only in the number of groups (34, 89 or 190). The other three libraries are identical to the first three except for a modification to the cross sections for 238U in the resonance range

  19. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  20. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
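
    For concreteness, the sketch below (an illustration, not the HOME validation code) computes two of the metrics named above, the centered root mean square error and the linear-trend error, on synthetic placeholder series.

```python
# Hedged sketch of two homogenization-validation metrics applied to a
# homogenized series versus the known true homogeneous series.
import numpy as np

def centered_rmse(homogenized, truth):
    a = homogenized - homogenized.mean()
    b = truth - truth.mean()
    return np.sqrt(np.mean((a - b) ** 2))

def trend_error(homogenized, truth, years):
    slope_h = np.polyfit(years, homogenized, 1)[0]
    slope_t = np.polyfit(years, truth, 1)[0]
    return slope_h - slope_t

# synthetic placeholder series (degrees C anomalies over 60 years)
rng = np.random.default_rng(1)
years = np.arange(1950, 2010)
truth = 0.02 * (years - years[0]) + rng.normal(0, 0.3, years.size)
homog = truth + rng.normal(0, 0.1, years.size)

print(centered_rmse(homog, truth), trend_error(homog, truth, years))
```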

  1. Common Nearest Neighbor Clustering—A Benchmark

    Directory of Open Access Journals (Sweden)

    Oliver Lemke

    2018-02-01

    Full Text Available Cluster analyses are often conducted with the goal to characterize an underlying probability density, for which the data-point density serves as an estimate for this probability density. We here test and benchmark the common nearest neighbor (CNN) cluster algorithm. This algorithm assigns a spherical neighborhood R to each data point and estimates the data-point density between two data points as the number of data points N in the overlapping region of their neighborhoods (step 1). The main principle in the CNN cluster algorithm is cluster growing. This grows the clusters by sequentially adding data points and thereby effectively positions the border of the clusters along an iso-surface of the underlying probability density. This yields a strict partitioning with outliers, for which the cluster represents peaks in the underlying probability density—termed core sets (step 2). The removal of the outliers on the basis of a threshold criterion is optional (step 3). The benchmark datasets address a series of typical challenges, including datasets with a very high dimensional state space and datasets in which the cluster centroids are aligned along an underlying structure (Birch sets). The performance of the CNN algorithm is evaluated with respect to these challenges. The results indicate that the CNN cluster algorithm can be useful in a wide range of settings. Cluster algorithms are particularly important for the analysis of molecular dynamics (MD) simulations. We demonstrate how the CNN cluster results can be used as a discretization of the molecular state space for the construction of a core-set model of the MD, improving the accuracy compared to conventional full-partitioning models. The software for the CNN clustering is available on GitHub.
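
    The following is a simplified reference implementation of the shared-neighborhood idea described above, not the authors' published code: two points are joined into the same cluster when their R-neighborhoods share at least N data points. The optional outlier-removal step is omitted.

```python
# Hedged sketch of common-nearest-neighbor (CNN) clustering. Simplifications:
# the shared-neighbor count here includes the two points themselves, and the
# optional outlier-removal step (e.g. discarding very small clusters) is omitted.
import numpy as np
from scipy.spatial import cKDTree

def cnn_cluster(X, R, N):
    tree = cKDTree(X)
    neighbors = [set(tree.query_ball_point(x, R)) for x in X]   # R-neighborhood of each point
    labels = np.full(len(X), -1)                                 # -1 = not yet assigned
    current = 0
    for seed in range(len(X)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:                                             # cluster growing
            i = stack.pop()
            for j in neighbors[i]:
                if labels[j] == -1 and len(neighbors[i] & neighbors[j]) >= N:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels

X = np.random.default_rng(0).normal(size=(500, 2))
print(np.bincount(cnn_cluster(X, R=0.3, N=5)))   # cluster sizes
```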

  2. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated results of water temperature distribution along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multi-channels, the calculated mass velocities at channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  3. Benchmarking in Thoracic Surgery. Third Edition.

    Science.gov (United States)

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  4. Benchmarking on the management of radioactive waste; Benchmarking sobre la gestion de los residuos radiactivos

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Gomez, M. a.; Gonzalez Gandal, R.; Gomez Castano, N.

    2013-09-01

    In this project, an evaluation of the waste management practices at the Spanish nuclear power plants was carried out following the benchmarking methodology. This process allowed the identification of opportunities to improve waste treatment processes, to reduce the volume of waste, to reduce management costs, and to establish management routes for waste streams that do not yet have one. (Author)

  5. Evaluation of PWR and BWR pin cell benchmark results

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J.; Hoogenboom, J.E.; Leege, P.F.A. de; Voet, J. van der; Verhagen, F.C.M.

    1991-12-01

    Benchmark results of the Dutch PINK working group on PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: a part of the results is satisfactory, some other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs

  6. Evaluation of PWR and BWR pin cell benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Gruppelaar, H.; Janssen, A.J. (Netherlands Energy Research Foundation (ECN), Petten (Netherlands)); Hoogenboom, J.E.; Leege, P.F.A. de (Interuniversitair Reactor Inst., Delft (Netherlands)); Voet, J. van der (Gemeenschappelijke Kernenergiecentrale Nederland NV, Dodewaard (Netherlands)); Verhagen, F.C.M. (Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands))

    1991-12-01

    Benchmark results of the Dutch PINK working group on PWR and BWR pin cell calculational benchmark as defined by EPRI are presented and evaluated. The observed discrepancies are problem dependent: a part of the results is satisfactory, some other results require further analysis. A brief overview is given of the different code packages used in this analysis. (author). 14 refs., 9 figs., 30 tabs.

  7. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    The paper gave a general overview of benchmarking and its novel application to library practice with a view to achieve organizational change and improved performance. Based on literature, the paper took an analytic, descriptive and qualitative overview of benchmarking practices vis a vis services in law libraries generally ...

  8. What Are the ACT College Readiness Benchmarks? Information Brief

    Science.gov (United States)

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  9. Presidential Address 1997--Benchmarks for the Next Millennium.

    Science.gov (United States)

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  10. Case mix classification and a benchmark set for surgery scheduling

    NARCIS (Netherlands)

    Leeftink, Gréanne; Hans, Erwin W.

    Numerous benchmark sets exist for combinatorial optimization problems. However, in healthcare scheduling, only a few benchmark sets are known, mainly focused on nurse rostering. One of the most studied topics in the healthcare scheduling literature is surgery scheduling, for which there is no widely

  11. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  12. Benchmarking with the BLASST Sessional Staff Standards Framework

    Science.gov (United States)

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  13. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Science.gov (United States)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  14. Marking Closely or on the Bench?: An Australian's Benchmark Statement.

    Science.gov (United States)

    Jones, Roy

    2000-01-01

    Reviews the benchmark statements of the Quality Assurance Agency for Higher Education in the United Kingdom. Examines the various sections within the benchmark. States that in terms of emphasizing the positive attributes of the geography discipline the statements have wide utility and applicability. (CMK)

  15. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and

  16. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different

  17. The benchmark testing of 9Be of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2002-01-01

    CENDL-3, the latest version of the China Evaluated Nuclear Data Library, has been completed. The data for 9Be were updated and recently distributed for benchmark analysis. The calculated results are presented and compared with the experimental data and with results based on other evaluated nuclear data libraries. The results show that CENDL-3 performs better than the other libraries for most benchmarks

  18. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we ...

  19. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  20. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to describe how we taught the use of the benchmark strategy for comparing fractions to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in southern Taiwan were selected to join this study. Results of this case study showed that students made considerable progress in the use of the benchmark strategy when comparing fractions…

  1. Quality indicators for international benchmarking of mental health care

    DEFF Research Database (Denmark)

    Hermann, Richard C; Mattke, Soeren; Somekh, David

    2006-01-01

    To identify quality measures for international benchmarking of mental health care that assess important processes and outcomes of care, are scientifically sound, and are feasible to construct from preexisting data.

  2. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
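
    As a hedged illustration of the physics behind the first benchmark problem, the sketch below advances a minimal 2-D Cahn-Hilliard (spinodal decomposition) model with an explicit periodic finite-difference update; the parameters are illustrative and are not the CHiMaD/NIST benchmark specification.

```python
# Hedged sketch: minimal 2-D Cahn-Hilliard (spinodal decomposition) model with
# periodic boundaries and an explicit update. Illustrative parameters only.
import numpy as np

def laplacian(f, dx):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def cahn_hilliard_step(c, dx, dt, M=1.0, kappa=1.0):
    mu = c**3 - c - kappa * laplacian(c, dx)   # chemical potential of a double-well free energy
    return c + dt * M * laplacian(mu, dx)

rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal((128, 128))     # small fluctuations about the critical composition
for _ in range(2000):
    c = cahn_hilliard_step(c, dx=1.0, dt=0.01) # dt kept small for explicit-scheme stability
print(float(c.min()), float(c.max()))          # phases separate toward c = -1 and c = +1
```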

  3. Benchmarking in health care: using the Internet to identify resources.

    Science.gov (United States)

    Lingle, V A

    1996-01-01

    Benchmarking is a quality improvement tool that is increasingly being applied to the health care field and to the libraries within that field. Using mostly resources accessible at no charge through the Internet, a collection of information was compiled on benchmarking and its applications. Sources could be identified in several formats including books, journals and articles, multi-media materials, and organizations.

  4. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

    Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor keff, the natural boron concentration Cβ, and the relative power distribution Kq, obtained with the code package, are presented. (authors). 5 refs., 3 tabs

  5. Benchmarking in pathology: development of a benchmarking complexity unit and associated key performance indicators.

    Science.gov (United States)

    Neil, Amanda; Pfeffer, Sally; Burnett, Leslie

    2013-01-01

    This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
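
    A minimal sketch of the construction summarised above, with an entirely hypothetical data frame and an assumed wage-ratio weighting: weight medical against technical staff time per test and site, compute a per-site complexity value, and take the median across sites as the test's benchmark complexity.

```python
# Hedged sketch of a BCU-style calculation. The column names, numbers and the
# weighting value are hypothetical; the paper derives its weighting from
# participant site data rather than assuming it.
import pandas as pd

data = pd.DataFrame({
    "site":      ["A", "A", "B", "B", "C", "C"],
    "test":      ["FBC", "HbA1c", "FBC", "HbA1c", "FBC", "HbA1c"],
    "tech_time": [2.0, 3.0, 2.5, 3.5, 2.2, 3.1],   # minutes of technical staff time per test
    "med_time":  [0.1, 0.4, 0.2, 0.5, 0.1, 0.6],   # minutes of medical staff time per test
})

medical_weight = 3.0   # assumed wage-ratio weighting (placeholder)

data["complexity"] = data["tech_time"] + medical_weight * data["med_time"]
bcu = data.groupby("test")["complexity"].median()   # median across sites per test
print(bcu)
```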

  6. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.

  7. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  8. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-world domain applications such as gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e., data schema, workload, and performance metrics. The benchmark is tested on memory-based, file-based, relational-database, and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, end users (i.e., domain experts) are able to select a KBS appropriate for their domain.

  9. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes.

  10. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  11. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low level solid waste) is required. In

  12. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process count. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm, while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware dependent.
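
    The byte/flop argument above is essentially a roofline estimate; the sketch below reproduces that reasoning with round, illustrative hardware numbers and arithmetic intensities (none taken from the study).

```python
# Hedged sketch of roofline-style reasoning: attainable flop/s is bounded by
# min(peak flop/s, arithmetic intensity * memory bandwidth). Round illustrative
# numbers only, not vendor specifications or results from the study.
def attainable_gflops(ai_flops_per_byte, peak_gflops, bandwidth_gb_s):
    return min(peak_gflops, ai_flops_per_byte * bandwidth_gb_s)

peak, bw = 1000.0, 200.0   # a machine with a ~0.2 byte/flop balance
for name, ai in [("SpMV", 0.25), ("FFT", 1.7), ("DGEMM", 30.0), ("FMM kernel", 100.0)]:
    print(f"{name:10s} AI={ai:6.2f} flop/byte -> {attainable_gflops(ai, peak, bw):7.1f} Gflop/s")
```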

  13. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    International Nuclear Information System (INIS)

    Marshall, M.A.; Bess, J.D.

    2011-01-01

    In October and November of 1981 thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacing of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter

  14. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.

  15. LHC benchmarks from flavored gauge mediation

    Energy Technology Data Exchange (ETDEWEB)

    Ierushalmi, N.; Iwamoto, S.; Lee, G.; Nepomnyashy, V.; Shadmi, Y. [Physics Department, Technion - Israel Institute of Technology,Haifa 32000 (Israel)

    2016-07-12

    We present benchmark points for LHC searches from flavored gauge mediation models, in which messenger-matter couplings give flavor-dependent squark masses. Our examples include spectra in which a single squark — stop, scharm, or sup — is much lighter than all other colored superpartners, motivating improved quark flavor tagging at the LHC. Many examples feature flavor mixing; in particular, large stop-scharm mixing is possible. The correct Higgs mass is obtained in some examples by virtue of the large stop A-term. We also revisit the general flavor and CP structure of the models. Even though the A-terms can be substantial, their contributions to EDM’s are very suppressed, because of the particular dependence of the A-terms on the messenger coupling. This holds regardless of the messenger-coupling texture. More generally, the special structure of the soft terms often leads to stronger suppression of flavor- and CP-violating processes, compared to naive estimates.

  16. Benchmarking homogenization algorithms for monthly data

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  17. One dimensional benchmark calculations using diffusion theory

    International Nuclear Information System (INIS)

    Ustun, G.; Turgut, M.H.

    1986-01-01

    This is a comparative study using different one-dimensional diffusion codes that are available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries by using the 'forward elimination - backward substitution' technique. The DIFFUSE code calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes as well. It is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added. Some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good for results such as k-eff and the flux distributions in the two case studies. (author)
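
    To illustrate the kind of calculation being benchmarked (this is not DIFFUSE or BBD), the sketch below solves a one-group, one-dimensional slab diffusion problem by finite differences and power iteration, with purely illustrative cross sections.

```python
# Hedged sketch: one-group 1-D slab diffusion, -D phi'' + Sig_a phi = (1/k) nuSig_f phi,
# discretised with central differences (zero flux at the slab boundaries) and
# solved for k-eff by power iteration. Cross sections are illustrative only.
import numpy as np

def slab_keff(width, n, D, sig_a, nu_sig_f):
    dx = width / (n + 1)
    # interior-node Laplacian with zero-flux boundary nodes
    A = (np.diag(np.full(n, 2.0)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1))
    M = D * A / dx**2 + sig_a * np.eye(n)      # leakage + absorption operator
    phi, k = np.ones(n), 1.0
    for _ in range(200):                        # power iteration
        src = nu_sig_f * phi / k
        phi_new = np.linalg.solve(M, src)
        k *= phi_new.sum() / phi.sum()
        phi = phi_new
    return k, phi / phi.max()

k, phi = slab_keff(width=100.0, n=200, D=1.0, sig_a=0.01, nu_sig_f=0.011)
print(f"k-eff ~ {k:.5f}")
```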

  18. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of many researches in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2000 Brazilian license plates consisting of 14000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
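
    For reference, the sketch below computes the standard Jaccard (intersection-over-union) coefficient between two bounding boxes; the paper's Jaccard-centroid variant additionally accounts for the location of the bounding box relative to the ground-truth annotation, and its exact form is not reproduced here.

```python
# Hedged sketch: standard Jaccard (IoU) coefficient between two axis-aligned
# bounding boxes, the baseline measure that the paper's Jaccard-centroid
# coefficient refines.
def jaccard(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(jaccard((0, 0, 10, 20), (2, 0, 12, 20)))   # 160 / 240 = 0.667
```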

  19. A NEW BENCHMARK FOR PLANTWIDE PROCESS CONTROL

    Directory of Open Access Journals (Sweden)

    N. Klafke

    Full Text Available The hydrodealkylation process of toluene (HDA) has been used as a case study in a large number of control studies. However, in terms of industrial application, this process has become obsolete and is nowadays superseded by new technologies capable of processing heavy aromatic compounds, which increase the added value of the raw materials, such as the process of transalkylation and disproportionation of toluene (TADP). TADP also presents more complex feed and product streams and more challenging operational characteristics, in both the reactor and separator sections, than HDA. This work is aimed at proposing the TADP process as a new benchmark for plantwide control studies in lieu of the HDA process. For this purpose, a rigorous nonlinear dynamic model for the TADP process was developed using Aspen Plus™ and Aspen Dynamics™ and industrial conditions. Plantwide control structures (oriented to control and to the process) were adapted and applied for the first time to this process. The results show that, even though both strategies are similar in terms of control performance, the optimization of economic factors must still be sought.

  20. Benchmark of systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hannaman, G.W.; Moieni, P.

    1986-01-01

    Probabilistic risk assessment (PRA) methodology has emerged as one of the most promising tools for assessing the impact of human interactions on plant safety and understanding the importance of the man/machine interface. Human interactions were considered to be one of the key elements in the quantification of accident sequences in a PRA. The approach to quantification of human interactions in past PRAs has not been very systematic. The Electric Power Research Institute sponsored the development of SHARP to aid analysts in developing a systematic approach for the evaluation and quantification of human interactions in a PRA. The SHARP process has been extensively peer reviewed and has been adopted by the Institute of Electrical and Electronics Engineers as the basis of a draft guide for the industry. By carrying out a benchmark process, in which SHARP is an essential ingredient, however, it appears possible to assess the strengths and weaknesses of SHARP to aid human reliability analysts in carrying out human reliability analysis as part of a PRA