WorldWideScience

Sample records for benchmark definition updated

  1. Building America Research Benchmark Definition: Updated August 15, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  2. Building America Research Benchmark Definition: Updated December 20, 2007

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  3. Building America Research Benchmark Definition, Updated December 15, 2006

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  4. Building America Research Benchmark Definition: Updated December 2009

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.; Engebrecht, C.

    2010-01-01

    The Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without chasing a 'moving target.'

  5. Building America Research Benchmark Definition, Updated December 2009

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Engebrecht, Cheryn [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2010-01-01

    To track progress toward aggressive multi-year, whole-house energy savings goals of 40%–70% and on-site power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America (BA) Research Benchmark in consultation with the Building America industry teams.

  6. Building America Research Benchmark Definition, Updated December 19, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2008-12-19

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  7. Building America Research Benchmark Definition: Updated December 19, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Hendron, R.

    2008-12-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  8. Dukovany NPP fuel cycle benchmark definition

    International Nuclear Information System (INIS)

    A new benchmark based on the operating history of Dukovany NPP Unit 2 is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculations in the organisations interested in this task. All necessary data are described in this paper, or references are given where this information can be obtained. Input data are presented in tables, and the requested output data format for automatic processing is described. (Authors)

  9. Core periphery power tilt benchmark for WWER-440 definition

    International Nuclear Information System (INIS)

    The inconsistent accuracy of power predictions for peripheral fuel pins and the availability of operational data from WWER-440 reactors are the main motivations for this benchmark definition. Second-generation fuel assemblies with a mean enrichment of 4.25% in a 5-year cycle at Unit 4 of NPP Bohunice are analyzed, with emphasis on the last cycle. The starting point, calculated period and results are characterized. SCORPIO data will be used for comparison. (Authors)

  10. Advanced fuel cycles options for LWRs and IMF benchmark definition

    International Nuclear Information System (INIS)

    In the paper, different advanced nuclear fuel cycles, including thorium-based fuel and inert-matrix fuel, are examined and compared under light water reactor conditions, especially for the VVER-440. The two investigated thorium-based fuels are a purely plutonium-thorium fuel and a plutonium-thorium fuel with an initial uranium content. Both are used to carry and burn or transmute plutonium created in the classical UOX cycle. The inert-matrix fuel consists of plutonium and minor actinides separated from spent UOX fuel, fixed in an yttria-stabilised zirconia matrix. The article presents the analysed fuel cycles with short descriptions. The conclusion focuses on the rate of Pu transmutation, the accumulation of Pu and minor actinides in the spent advanced thorium fuel, and a comparison with the open UOX fuel cycle. A definition of the IMF benchmark based on the presented scenario is given. (authors)

  11. Preeclampsia: Updates in Pathogenesis, Definitions, and Guidelines.

    Science.gov (United States)

    Phipps, Elizabeth; Prasanna, Devika; Brima, Wunnie; Jim, Belinda

    2016-06-01

    Preeclampsia is becoming an increasingly common diagnosis in the developed world and remains a high cause of maternal and fetal morbidity and mortality in the developing world. Delay in childbearing in the developed world feeds into the risk factors associated with preeclampsia, which include older maternal age, obesity, and/or vascular diseases. Inadequate prenatal care partially explains the persistent high prevalence in the developing world. In this review, we begin by presenting the most recent concepts in the pathogenesis of preeclampsia. Upstream triggers of the well described angiogenic pathways, such as the heme oxygenase and hydrogen sulfide pathways, as well as the roles of autoantibodies, misfolded proteins, nitric oxide, and oxidative stress will be described. We also detail updated definitions, classification schema, and treatment targets of hypertensive disorders of pregnancy put forth by obstetric and hypertensive societies throughout the world. The shift has been made to view preeclampsia as a systemic disease with widespread endothelial damage and the potential to affect future cardiovascular diseases rather than a self-limited occurrence. At the very least, we now know that preeclampsia does not end with delivery of the placenta. We conclude by summarizing the latest strategies for prevention and treatment of preeclampsia. A better understanding of this entity will help in the care of at-risk women before delivery and for decades after. PMID:27094609

  12. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Two benchmark problems for the WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Benchmark results for the effective multiplication factor (Keff), natural boron concentration (Cβ) and relative power distribution (Kq) obtained with the code package are presented. (authors). 5 refs., 3 tabs
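    In benchmark intercomparisons of this kind, the deviation between a calculated and a reference multiplication factor is conventionally quoted as a reactivity difference in pcm. A minimal sketch of that standard conversion (the numerical values are illustrative, not results from this benchmark):

```python
def reactivity_pcm(k_calc: float, k_ref: float) -> float:
    """Reactivity difference between a calculated and a reference
    multiplication factor, expressed in pcm (1 pcm = 1e-5 in reactivity).

    Uses the standard definition rho = (k - 1) / k, so the difference is
    1/k_ref - 1/k_calc.
    """
    return (1.0 / k_ref - 1.0 / k_calc) * 1e5

# Illustrative values only -- not results from the WWER-440 benchmark.
print(round(reactivity_pcm(k_calc=1.0025, k_ref=1.0000), 1))  # -> 249.4
```

    A 250 pcm discrepancy of this kind is the sort of quantity typically tabulated when comparing code results against a reference solution.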

  13. Update of KASHIL-E6 library for shielding analysis and benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. H.; Kil, C. S.; Jang, J. H. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    For various shielding and reactor pressure vessel dosimetry applications, a pseudo-problem-independent neutron-photon coupled MATXS-format library based on the last release of ENDF/B-VI has been generated as part of the update program for KASHIL-E6, which was based on ENDF/B-VI.5. It has the VITAMIN-B6 neutron and photon energy group structures, i.e., 199 groups for neutrons and 42 groups for photons. The neutron and photon weighting functions and the Legendre order of scattering are the same as in KASHIL-E6. The library has been validated through several benchmarks: the PCA-REPLICA and NESDIP-2 experiments as LWR pressure vessel facility benchmarks, the Winfrith Iron88 experiment for validation of iron data, and the Winfrith Graphite experiment for validation of graphite data. These calculations were performed with the TRANSX/DANTSYS code system. In addition, the substitution of JENDL-3.3 and JEFF-3.0 data for Fe, Cr, Cu and Ni, which are very important nuclides for shielding analyses, was investigated to estimate the effects on the benchmark calculation results.

  14. Analysis of the burnup credit benchmark with an updated WIMS-D Library

    International Nuclear Information System (INIS)

    The OECD/NEA Burnup Credit Benchmark was analyzed with the WIMSD5B code using a fully updated library based on ENDF/B-VI Revision 5 data. Parts 1A and 1B were considered. The criticality prediction tested in Part 1A was in very good agreement with the reference result. A slight trend to overestimate the absorption rate of the fission products was noted, which can be explained by spectral effects resulting from the coarseness of the WIMS-D 69-group energy grid. The isotopic composition prediction tested in Part 1B was within the uncertainty interval of the reference results, except for 109Ag at lower burnup and 155Gd in all cases. For 109Ag the cause of the discrepancy was the use of old fission yield data in generating the reference solution; similarly, for 155Gd the difference was due to old 155Eu capture cross sections. Compared to the measurements, a serious underprediction of the Sm isotopes is observed, which could be due to problems in the measured values or in the nuclear data of the Sm precursors. We conclude that our processing methods do not introduce significant errors into the basic nuclear data. Care should be taken in the interpretation of the reference average benchmark solution due to a possible bias towards the ENDF/B-V evaluated nuclear data files.

  15. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  16. Definition and analysis of heavy water reactor benchmarks for testing new multigroup libraries

    International Nuclear Information System (INIS)

    A set of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries. The libraries were constructed using data from the ENDF/B-VI Release 7, JENDL-3.2 and JEF-2.2 evaluated nuclear data files. The benchmarks cover a wide variety of reactor types and conditions, from fresh fuel to high burnup, and natural and enriched uranium as well as Th-U fuels. The main parameters compared are the effective multiplication factor and other integral parameters, and the isotopic composition of actinides in the burnup cases. In addition, further investigations are included concerning the energy spectra used in the preparation of WIMS-D libraries when applied to heavy water reactor calculations. Most of the benchmarks show good agreement between experimental measurements and calculated values for all libraries. One exception is the Th-232 benchmark, where it is found that a library with JENDL-3.2 Th-232 data produces better results than the ENDF/B-VI Release 7 and JEF-2.2 Th-232 data. Results are slightly improved when heavy water reactor spectra are used as the weighting function to prepare the multigroup cross sections. This work is part of the International Atomic Energy Agency's Coordinated Research Project on the 'Final Stage of the WIMS-D Library Update Project'. (author)

  17. Analysis of the Rowlands uranium oxide pin-cell benchmark with an updated WIMS-D library

    International Nuclear Information System (INIS)

    The Rowlands uranium oxide light water reactor pin-cell numerical benchmark results from the literature were analysed to obtain a self-consistent set that can be used as a reference. The materials relevant to the benchmark were processed from the JEF-2.2 evaluated nuclear data file with the NJOY code, and the WIMS-D multigroup library was updated. An input for WIMSD-5A was prepared. Integral parameters, which include reaction rates and multiplication factors for the pin cell at different temperatures, moderator densities and leakages, were calculated. The results were compared to the previously defined reference values.

  18. IAEA CRP on HTGR Uncertainty Analysis: Benchmark Definition and Test Cases

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Frederik Reitsma; Hans Gougar; Bismark Tyobeka; Kostadin Ivanov

    2012-11-01

    Uncertainty and sensitivity studies are essential elements of the reactor simulation code verification and validation process. Although several international uncertainty quantification activities have been launched in recent years in the LWR, BWR and VVER domains (e.g. the OECD/NEA BEMUSE program [1], from which the current OECD/NEA LWR Uncertainty Analysis in Modelling (UAM) benchmark [2] effort was derived), the systematic propagation of uncertainties in cross-section, manufacturing and model parameters for high temperature gas-cooled reactor (HTGR) designs has not been attempted yet. This paper summarises the scope, objectives and exercise definitions of the IAEA Coordinated Research Project (CRP) on HTGR UAM [3]. Note that no results will be included here, as the HTGR UAM benchmark was only launched formally in April 2012, and the specification is currently still under development.

  19. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries

    International Nuclear Information System (INIS)

    This work is part of the IAEA WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. In addition, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library from Winfrith, UK, which is included with the WIMSD5B package and uses adjusted data, are also included to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are that the library based on JEF data and the DSN method give the best results, which on average are acceptable.

  20. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Abstract. Background: Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results: We demonstrate that updated probe set definitions provide both better precision and better accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion: Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.
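    The precision gain from larger probe sets follows from simple averaging: the standard error of a probe-set mean shrinks roughly with the square root of the probe count. A hypothetical illustration with simulated probe intensities (not microarray data, and not the authors' method):

```python
import random
import statistics

random.seed(42)

def se_of_mean(n_probes: int, n_trials: int = 2000, noise_sd: float = 1.0) -> float:
    """Empirical standard error of a probe-set mean built from n_probes
    noisy probe intensities scattered around a common true expression level."""
    true_level = 10.0
    means = [
        statistics.fmean(random.gauss(true_level, noise_sd) for _ in range(n_probes))
        for _ in range(n_trials)
    ]
    return statistics.stdev(means)

# More probes per set -> smaller spread of the estimate (roughly 1/sqrt(n)).
print(se_of_mean(4) > se_of_mean(16) > se_of_mean(64))  # -> True
```

    This is only the averaging effect; the accuracy improvements reported in the abstract come from the better probe-to-gene mapping itself.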

  1. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of the Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions, and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow
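    A code-to-code comparison of the kind described checks each mechanism in isolation. As a minimal sketch of one such check, radioactive decay: an analytic solution against an explicit finite-difference integration standing in for a second code (hypothetical nuclide and parameters, not values from the PA):

```python
import math

def decay_analytic(n0: float, half_life: float, t: float) -> float:
    """Closed-form inventory remaining after time t, N(t) = N0 * exp(-lambda*t)."""
    lam = math.log(2) / half_life
    return n0 * math.exp(-lam * t)

def decay_numeric(n0: float, half_life: float, t: float, steps: int = 100_000) -> float:
    """Explicit-Euler integration of dN/dt = -lambda*N, standing in for a
    second code implementing the same decay mechanism."""
    lam = math.log(2) / half_life
    dt = t / steps
    n = n0
    for _ in range(steps):
        n -= lam * n * dt
    return n

# Hypothetical nuclide: 30-year half-life, simulated over 3 half-lives.
a = decay_analytic(1.0, 30.0, 90.0)   # exactly 1/8 of the initial inventory
b = decay_numeric(1.0, 30.0, 90.0)
print(abs(a - b) / a < 1e-3)  # -> True: the two "codes" agree to < 0.1%
```

    Partitioning, boundary conditions, and solubility controls would each get an analogous isolated check before the full models are compared.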

  2. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, with some examples. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  3. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    Science.gov (United States)

    Kodeli, I.; Sartori, E.; Kirk, B.

    2006-06-01

    SINBAD is an internationally established set of radiation shielding and dosimetry data from experiments relevant to reactor shielding, fusion blanket neutronics and accelerator shielding. In addition to the characterization of the radiation source, it describes the shielding materials, instrumentation and relevant detectors. The experimental results, be they doses, reaction rates or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The set of primary documents used for the benchmark compilation and evaluation is provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.

  4. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I. [IAEA Representative at the OECD/NEA Data Bank, 12 bd des Iles, 92130 Issy-les-Moulineaux (France)]. E-mail: ivo.kodeli@oecd.org; Sartori, E. [OECD Nuclear Energy Agency, 12 bd des Iles, 92130 Issy-les-Moulineaux (France)]; Kirk, B. [RSICC, Oak Ridge National Laboratory, POB 2008, Oak Ridge, TN 37831-6362 (United States)]

    2006-06-23

    SINBAD is an internationally established set of radiation shielding and dosimetry data from experiments relevant to reactor shielding, fusion blanket neutronics and accelerator shielding. In addition to the characterization of the radiation source, it describes the shielding materials, instrumentation and relevant detectors. The experimental results, be they doses, reaction rates or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The set of primary documents used for the benchmark compilation and evaluation is provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.

  5. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    International Nuclear Information System (INIS)

    SINBAD is an internationally established set of radiation shielding and dosimetry data from experiments relevant to reactor shielding, fusion blanket neutronics and accelerator shielding. In addition to the characterization of the radiation source, it describes the shielding materials, instrumentation and relevant detectors. The experimental results, be they doses, reaction rates or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The set of primary documents used for the benchmark compilation and evaluation is provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.

  6. Updating the definition and role of public health nursing to advance and guide the specialty.

    Science.gov (United States)

    Bekemeier, Betty; Walker Linderman, Tessa; Kneipp, Shawn; Zahner, Susan J

    2015-01-01

    National changes in the context for public health services are influencing the nature of public health nursing practice. Despite this, the document that defines public health nursing as a specialty--The Definition and Role of Public Health Nursing--has remained in wide use since its publication in 1996 without a review or update. With support from the American Public Health Association (APHA) Public Health Nursing Section, a national Task Force was formed in November 2012 to update the definition of public health nursing, using processes that reflected deliberative democratic principles. A yearlong process was employed that included a modified Delphi technique and various modes of engagement, such as online discussion boards, questionnaires, and public comment. The resulting 2013 document consisted of a reaffirmation of the one-sentence 1996 definition, while updating supporting documentation to align with the current social, economic, political, and health care context. The 2013 document was strongly endorsed by vote of the APHA Public Health Nursing Section elected leadership. The 2013 definition and document affirm the relevance of a population-focused definition of public health nursing to the complex systems addressed in current practice and articulate critical roles of public health nurses (PHNs) in these settings. PMID:25284433

  7. Benchmarking of quality metrics on ultra-high definition video sequences

    OpenAIRE

    Hanhart, Philippe; Korshunov, Pavel; Ebrahimi, Touradj

    2013-01-01

    The performance of objective quality metrics for high-definition (HD) video sequences is well studied, but little is known about their performance for ultra-high definition (UHD) video sequences. This paper analyzes the performance of several common objective quality metrics (PSNR, VSNR, SSIM, MS-SSIM, VIF, and VQM) on three different 4K UHD video sequences using subjective scores as ground truth. The findings confirm the content-dependent nature of most metrics (with VIF being the only exce...

  8. Planetary Protection and Mars Special Regions--A Suggestion for Updating the Definition.

    Science.gov (United States)

    Rettberg, Petra; Anesio, Alexandre M; Baker, Victor R; Baross, John A; Cady, Sherry L; Detsis, Emmanouil; Foreman, Christine M; Hauber, Ernst; Ori, Gian Gabriele; Pearce, David A; Renno, Nilton O; Ruvkun, Gary; Sattler, Birgit; Saunders, Mark P; Smith, David H; Wagner, Dirk; Westall, Frances

    2016-02-01

    We highlight the role of COSPAR and the scientific community in defining and updating the framework of planetary protection. Specifically, we focus on Mars "Special Regions," areas where strict planetary protection measures have to be applied before a spacecraft can explore them, given the existence of environmental conditions that may be conducive to terrestrial microbial growth. We outline the history of the concept of Special Regions and inform on recent developments regarding the COSPAR policy, namely, the MEPAG SR-SAG2 review and the Academies and ESF joint committee report on Mars Special Regions. We present some new issues that necessitate the update of the current policy and provide suggestions for new definitions of Special Regions. We conclude with the current major scientific questions that remain unanswered regarding Mars Special Regions. PMID:26848950

  9. Diabetic neuropathies: update on definitions, diagnostic criteria, estimation of severity, and treatments

    DEFF Research Database (Denmark)

    Tesfaye, Solomon; Boulton, Andrew J M; Dyck, Peter J;

    2010-01-01

    Preceding the joint meeting of the 19th annual Diabetic Neuropathy Study Group of the European Association for the Study of Diabetes (NEURODIAB) and the 8th International Symposium on Diabetic Neuropathy in Toronto, Canada, 13-18 October 2009, expert panels were convened to provide updates on classification, definitions, diagnostic criteria, and treatments of diabetic peripheral neuropathies (DPNs), autonomic neuropathy, painful DPNs, and structural alterations in DPNs.

  10. Definition and Analysis of an Experimental Benchmark on Shutdown Rod Worths in LEU-HTR Configurations

    International Nuclear Information System (INIS)

    Although high-temperature reactors (HTRs) are endowed with a number of inherent safety features, there are still aspects of the design that need particular attention. For concepts in which shutdown rods are situated outside the core region, as is the case in contemporary modular pebble bed designs, accurate calculations are needed for the worth of these shutdown rods not only in normal operation but also under accident conditions in which significant changes occur, for instance, due to inadvertent moderation increase in the core (ingress of water or other hydrogenous compound). Corresponding validation experiments, employing a variety of reactivity measurement techniques, were conducted in the framework of the HTR-PROTEUS program employing low-enriched uranium pebble-type fuel. Details of the experimental configurations, along with the measurement results obtained, are given for two different HTR-PROTEUS cores, in each of which four different shutdown rod combinations were investigated. Comparisons made with calculations, based on both approximative deterministic models and geometrically 'near-to-exact' Monte Carlo analyses, have clearly brought out the sensitivity of the experimental results to calculational correction factors when conventional (thermal) techniques are used for reactivity measurements in such systems. Considerably greater systematic accuracies are reflected in the experimental shutdown rod values obtained using specially developed epithermal techniques, and it is these results that are recommended for benchmarking purposes

  11. BN-600 hybrid core benchmark analyses. Results from a coordinated research project on updated codes and methods to reduce the calculational uncertainties of the LMFR reactivity effects

    International Nuclear Information System (INIS)

    To those Member States who have or have had significant fast reactor development programmes, it is of the utmost importance to have validated up-to-date codes and methods for fast reactor core physics analysis in support of R and D activities in the area of actinide utilization and incineration. They have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a Coordinated Research Project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. This CRP was started in November 1999, and at the first meeting the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety-related reactivity effects and their influence on uncertainties in transient analysis predictions. In an additional phase of the benchmark studies, experimental data were used for the validation and verification of nuclear data libraries and methods in support of the previous three phases. This report presents the results of the benchmark analyses of the hybrid UOX/MOX fuelled BN-600 reactor core. The aim of this report is to contribute to the reduction in uncertainties associated with reactivity coefficients and their influence on LMFR

  12. Comparison of the updated solutions of the 6th dynamic AER Benchmark - main steam line break in a NPP with WWER-440

    International Nuclear Information System (INIS)

    The 6th dynamic AER Benchmark is used for the systematic validation of coupled 3D neutron kinetic/thermal hydraulic system codes. It was defined at the 10th AER Symposium. In this benchmark, a hypothetical double-ended break of one main steam line at full power in a WWER-440 plant is investigated. The main thermal hydraulic features are the consideration of incomplete coolant mixing in the lower and upper plenum of the reactor pressure vessel and an asymmetric operation of the feed water system. For the tuning of the different nuclear cross section data used by the participants, an isothermal re-criticality temperature was defined. The paper gives an overview of the behaviour of the main thermal hydraulic and neutron kinetic parameters in the provided solutions. The differences between the updated solutions and the previous ones are described. Improvements in the modelling of the transient led to better agreement for part of the results, while for another part the deviations increased. The sensitivity of the core power behaviour to the secondary side modelling is discussed in detail (Authors)

  13. Towards a generic benchmarking platform for origin–destination flows estimation/updating algorithms: design, demonstration and validation

    OpenAIRE

    Antoniou, Constantinos; Barceló Bugeda, Jaime; BREEN Martijn; Bullejos, Manuel; Casas, Jordi; Cipriani, Ernesto; CIUFFO, Biagio; Djukic, Tamara; Hoogendoorn, Serge; Marzano, Vittorio; Montero Mercadé, Lídia; Nigro, Marialisa; PERARNAU Josep; Punzo, Vincenzo; Toledo, Tomer

    2016-01-01

    Estimation/updating of origin-destination (OD) flows and other traffic state parameters is a classical, widely adopted procedure in transport engineering, both in off-line and in on-line contexts. Notwithstanding numerous approaches proposed in the literature, there is still room for considerable improvements, also leveraging the unprecedented opportunity offered by information and communication technologies and big data. A key issue relates to the unobservability of OD flows in real networks...

  14. Towards a generic benchmarking platform for origin-destination flows estimation/updating algorithms: Design, demonstration and validation

    OpenAIRE

    Antoniou, Constantinos; BARCELÒ Jaume; BREEN Martijn; Bullejos, Manuel; Casas, Jordi; Cipriani, Ernesto; CIUFFO BIAGIO; Djukic, Tamara; Hoogendoorn, Serge P.; Marzano, Vittorio; MONTERO Lidia; Nigro, Marialisa; PERARNAU Josep; Punzo, Vincenzo; Toledo, Tomer

    2015-01-01

    Estimation/updating of origin-destination (OD) flows and other traffic state parameters is a classical, widely adopted procedure in transport engineering, both in off-line and in on-line contexts. Notwithstanding numerous approaches proposed in the literature, there is still room for considerable improvements, also leveraging the unprecedented opportunity offered by information and communication technologies and big data. A key issue relates to the unobservability of OD flows in real networks...

  15. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  16. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks. PMID:15032545

  17. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  18. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between the performance characteristics of separate, often competing organizations, intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. The most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...
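
    Quantitative benchmarking of the kind described above can be sketched as a simple gap computation. All metric names, benchmark targets and directions below are hypothetical illustrations, not data from the thesis:

```python
# Minimal sketch of quantitative benchmarking: compare an organization's
# metrics against benchmark values and report the numerical gaps where it
# lags behind. Metric names, targets and directions are hypothetical.

BENCHMARKS = {"on_time_delivery": 0.95, "defect_rate": 0.02, "unit_cost": 12.0}
HIGHER_IS_BETTER = {"on_time_delivery": True, "defect_rate": False, "unit_cost": False}

def performance_gaps(actual: dict) -> dict:
    """Return metric -> shortfall relative to the benchmark (0.0 if at or above it)."""
    gaps = {}
    for metric, target in BENCHMARKS.items():
        value = actual[metric]
        # The gap is how far the organization falls short, in the metric's own units.
        gap = target - value if HIGHER_IS_BETTER[metric] else value - target
        gaps[metric] = round(max(gap, 0.0), 4)
    return gaps

print(performance_gaps({"on_time_delivery": 0.90, "defect_rate": 0.03, "unit_cost": 11.0}))
# {'on_time_delivery': 0.05, 'defect_rate': 0.01, 'unit_cost': 0.0}
```

    As the abstract notes, such numerical gaps identify where an organization lags but say nothing on their own about why, which is the limitation the quantitative approach runs into.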

  19. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the performance of this company is in comparison with top companies within the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  20. BN-600 MOX Core Benchmark Analysis. Results from Phases 4 and 6 of a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects

    International Nuclear Information System (INIS)

    For those Member States that have or have had significant fast reactor development programmes, it is of utmost importance that they have validated up-to-date codes and methods for fast reactor physics analysis in support of R and D and core design activities in the area of actinide utilization and incineration. In particular, some Member States have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems, it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a coordinated research project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The CRP started in November 1999 and, at the first meeting, the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety-related reactivity effects and their influence on uncertainties in transient analysis predictions. In an additional phase of the benchmark studies, experimental data were used for the verification and validation of nuclear data libraries and methods in support of the previous three phases. The results of phases 1, 2, 3 and 5 of the CRP are reported in IAEA-TECDOC-1623, BN-600 Hybrid Core Benchmark Analyses, Results from a Coordinated Research Project on Updated Codes and Methods to Reduce the

  1. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  2. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  3. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  4. Precious benchmarking

    International Nuclear Information System (INIS)

    Recently, a new word has been added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from the European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department at Jaslovske Bohunice NPP to explain this term to us. (author)

  5. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
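
    One example of a whole-building data-center metric of the kind such guides benchmark is Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy. The sketch below is illustrative only; the specific readings are hypothetical and not drawn from the guide:

```python
# Illustrative sketch of the widely used PUE metric for data centers:
# total facility energy divided by IT equipment energy over the same
# period. Values close to 1.0 indicate an efficient facility.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return Power Usage Effectiveness for one measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

if __name__ == "__main__":
    # Hypothetical monthly readings: 1.8 GWh total facility, 1.0 GWh IT loads.
    print(round(pue(1_800_000, 1_000_000), 2))  # 1.8
```

    Tracking such a metric over time, and comparing it against published benchmark values, is exactly the kind of step-by-step benchmarking process the guide outlines.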

  6. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  7. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part II: Benchmark comparisons of PUMA core parameters with MCNP5 and improvements due to a simple cell heterogeneity correction

    International Nuclear Information System (INIS)

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons, against MCNP5, of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)

  8. Updating the Model Definition of the Gene in the Modern Genomic Era with Implications for Instruction

    Science.gov (United States)

    Smith, Mike U.; Adkison, Linda R.

    2010-01-01

    Gericke and Hagberg (G & H, Sci Educ 16:849-881, 2007) recently published in this journal a thoughtful analysis of the historical progression of our understanding of the nature of the gene for use in instruction. This analysis, however, did not include the findings of the Human Genome Project (HGP), which must be included in any introductory genetics in the modern genomic era today. Many of these findings, especially the limited number of genes and the similarity of this number to that of primitive animals such as roundworms, were surprising and led to questions about the definition of the gene, many of which are addressed in this manuscript. The G & H models are also amended to include crucial concepts, including the history of determining that DNA and not protein is the molecule of inheritance, the work of Barbara McClintock and the discovery of transposons, polygenic/multi-factorial inheritance, and reverse transcription. The following discussion further extends the G & H work to include the more recent work of the ENCyclopedia Of DNA Elements (ENCODE) Project. The results of this work have resulted in even more fundamental questions about the gene. For example, large sections of the genome that were previously identified as non-protein-coding ‘junk’ have been shown to be transcribed into RNA that is likely involved in regulation of genome function that might be more crucial than the coding DNA itself in distinguishing simpler from more complex species. Should these transcribed but not translated sequences be recognized as genes? This level of questioning of our basic definition has not occurred since the modern synthesis of genetics based on the work of Watson and Crick and makes this one of the most exciting times for genetics and medicine.

  9. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  10. Benchmarking for Excellence and the Nursing Process

    Science.gov (United States)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  11. Definition of the seventh dynamic AER benchmark-WWER-440 pressure vessel coolant mixing by re-connection of an isolated loop

    International Nuclear Information System (INIS)

    The seventh dynamic benchmark is a continuation of the efforts to systematically validate codes for the estimation of the transient behaviour of VVER type nuclear power plants, and continues the work of the sixth dynamic benchmark. The transient proposed for simulation is the re-connection of an isolated circulating loop with low temperature or low boron concentration in a VVER-440 plant. The benchmark is expected to be expanded to other cases in which a different number of loops are in operation, leading to different symmetric and asymmetric core boundary conditions. The purposes of the proposed benchmark are: 1) best-estimate simulation of a transient with coolant flow mixing in the Reactor Pressure Vessel of a WWER-440 plant upon re-connection of one coolant loop to the several in operation, and 2) performing code-to-code comparisons. The core is at the end of its first cycle with a power of 1196.25 MWt. The basic additional feature of the seventh benchmark is the detailed description of the downcomer and bottom part of the reactor vessel, which allows the effects of coolant mixing in the Reactor Pressure Vessel to be described without any additional conservative assumptions. The burn-up and the power distributions at this reactor state have to be calculated by the participants. The thermohydraulic conditions of the core at the beginning of the transient are specified. Participants' self-generated best-estimate nuclear data are to be used. The main geometrical parameters of the plant and the characteristics of the control and safety systems are also specified. User-generated input data decks developed for a WWER-440 plant and for the applied codes should be used. The behaviour of the plant should be studied applying coupled system codes, which combine a three-dimensional neutron kinetics description of the core with a pseudo or real 3D thermohydraulics system code. (Authors)

  12. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1996 revision

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment

    1996-06-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of those data. It compares the benchmarks and discusses their relative conservatism and utility. This revision also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  13. Methodological background and strategy for the 2012-2013 updated consensus definitions and clinical practice guidelines from the abdominal compartment society.

    Science.gov (United States)

    Kirkpatrick, Andrew W; Roberts, Derek J; Jaeschke, Roman; De Waele, Jan J; De Keulenaer, Bart L; Duchesne, Juan; Bjorck, Martin; Leppäniemi, Ari; Ejike, Janeth C; Sugrue, Michael; Cheatham, Michael L; Ivatury, Rao; Ball, Chad G; Reintam Blaser, Annika; Regli, Adrian; Balogh, Zsolt; D'Amours, Scott; De Laet, Inneke; Malbrain, Manu L N G

    2015-01-01

    The Abdominal Compartment Society (www.wsacs.org) previously created highly cited Consensus Definitions/Management Guidelines related to intra-abdominal hypertension (IAH) and abdominal compartment syndrome (ACS). Implicit in this previous work was a commitment to regularly reassess and update them in relation to evolving research. Two years preceding the Fifth World Congress on Abdominal Compartment Syndrome, an International Guidelines committee began preparation. An oversight/steering committee formulated key clinical questions regarding IAH/ACS based on polling of the Executive to redundancy, structured according to the Patient, Intervention, Comparator, and Outcome (PICO) format. Scientific consultations were obtained from methodological GRADE experts, and a series of educational teleconferences was conducted to educate scientific review teams from among the wsacs.org membership. Each team conducted systematic or structured reviews to identify relevant studies and prepared evidence summaries and draft Grades of Recommendation Assessment, Development and Evaluation (GRADE) recommendations. The evidence and draft recommendations were presented and debated in person over four days. Updated consensus definitions and management statements were derived using a modified Delphi method. A writing committee subsequently compiled the results utilizing frequent Internet discussion and Delphi voting methods to compile a robust online Master Report and a concise peer-reviewed summarizing publication. A dedicated Paediatric Guidelines Subcommittee reviewed all recommendations and either accepted or revised them for appropriateness in children. Of the original 12 IAH/ACS definitions proposed in 2006, three (25%) were accepted unanimously, four (33%) were accepted by > 80%, and four (33%) were accepted by > 50% but required discussion to produce revised definitions. One (8%) was rejected by > 50%. In addition to the previous 2006 definitions, the panel also defined the open abdomen

  14. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  15. Kvantitativ benchmark - Produktionsvirksomheder

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  16. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  17. Proper definitions for Risk and Uncertainty (July update with (a) better notation, (b) relative risk (c) an application to insurance)

    OpenAIRE

    Thomas Cool

    1993-01-01

    The commonly adopted definitions of risk and uncertainty generate conceptual problems and inconsistencies, and they are a source of confusion in general. However, alternative and proper definitions are: (1) First there is the distinction between certainty and uncertainty. (2) Uncertainty forks into known (assumed) and unknown probabilities. (3) Unknown probabilities fork into known categories and unknown categories. (4) Known categories fork into 'including the uncertainties in the probabil...

  18. International benchmarking of telecommunications prices and price changes

    OpenAIRE

    Productivity Commission

    2002-01-01

    The report, a series of international benchmarking studies conducted by the Productivity Commission, compares Australian telecommunications prices, price changes and regulatory arrangements with those in nine other OECD countries, updating a similar study, International Benchmarking of Australian Telecommunications Services, released in March 1999.

  19. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  20. Distributional benchmarking in tax policy evaluations

    OpenAIRE

    Thor O. Thoresen; Zhiyang Jia; Peter J. Lambert

    2013-01-01

    Given an objective to exploit cross-sectional micro data to evaluate the distributional effects of tax policies over a time period, the practitioner of public economics will find that the relevant literature offers a wide variety of empirical approaches. For example, studies vary with respect to the definition of individual well-being and to what extent explicit benchmarking techniques are utilized to describe policy effects. The present paper shows how the concept of distributional benchmark...

  1. BN-600 fully MOX fuelled core benchmark analyses (Phase 4). Draft synthesis report - Revision 1

    International Nuclear Information System (INIS)

    from the Republic of Korea, IPPE from the Russian Federation. The participants applied their own state-of-the-art basic data, computer codes and methods to the benchmark analysis. Within the scope of these core benchmark analyses, they have validated their efforts to update basic nuclear data and to improve methodologies and computer codes for calculating safety relevant reactor physics parameters. This report first addresses the benchmark definitions and specifications given for Phase 4 and briefly introduces the basic data, computer codes and methodologies applied to the benchmark analysis by the various participants. Then, the results for integral and local reactivity coefficient values obtained by the participants are inter-compared in terms of the calculational uncertainty resulting from different data and method approximations, along with their effects on the ULOF and UTOP transient behaviours. In addition, the results are evaluated in comparison with the results for the hybrid core. This benchmark concludes that the safety analysis of such a design would require the use of the most advanced computational tools (transport theory and three-dimensional Hex-Z modeling are required), which should be checked against representative experiments. A side conclusion of this benchmark is that the use of MOX fuel in the BN-600 core could be envisaged, given a design penalty associated with the larger reactivity coefficient uncertainties

  2. A Large-update Interior-point Algorithm for Convex Quadratic Semi-definite Optimization Based on a New Kernel Function

    Institute of Scientific and Technical Information of China (English)

    Ming Wang ZHANG

    2012-01-01

    In this paper, we present a large-update interior-point algorithm for convex quadratic semi-definite optimization based on a new kernel function. The proposed function is strongly convex. It is neither a self-regular function nor the usual logarithmic function. The goal of this paper is to investigate such a kernel function and show that the algorithm has a favorable complexity bound in terms of the elegant analytic properties of the kernel function. The complexity bound is shown to be O(√n (log n)² log(n/ε)). This bound is better than those of the classical primal-dual interior-point methods based on the logarithmic barrier function and of the recent kernel functions introduced by some authors in the optimization field. Some computational results have been provided.
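
    The abstract does not reproduce the new kernel function itself. For orientation only, the following standard background formulas (not the paper's kernel) show the classical logarithmic kernel that such kernel-function methods generalize, together with the barrier it induces on the scaled variable v:

```latex
% Classical logarithmic kernel function and its induced barrier (standard
% background; the paper's new kernel is not given in the abstract).
\[
  \psi_{\log}(t) = \frac{t^{2}-1}{2} - \log t \quad (t > 0), \qquad
  \Psi(v) = \sum_{i=1}^{n} \psi_{\log}(v_i).
\]
% Large-update methods based on this kernel carry an
% O\bigl(n \log \tfrac{n}{\varepsilon}\bigr) iteration bound; the kernel
% studied in the paper improves this to
% O\bigl(\sqrt{n}\,(\log n)^{2} \log \tfrac{n}{\varepsilon}\bigr).
```

    Replacing ψ_log with a kernel whose growth and barrier terms are tuned differently is precisely what yields the improved large-update complexity quoted in the abstract.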

  3. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scientists to the definition of this notion. With a view to improving the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine the success of an operator in the modern market economy, and the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  4. Nea Benchmarks

    International Nuclear Information System (INIS)

    simulations and consequently to improve the understanding of safety issues and the design/operating conditions of nuclear reactors, definitely putting the basis for advancing the nuclear technology.

  5. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages from implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method gives a company an idea of how to initiate benchmarking and guides it toward achieving the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  6. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communication is required in the spherical harmonic transforms used to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) as well as one with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment.
We perform these

  7. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  8. Reliable B cell epitope predictions: impacts of method development and improved benchmarking.

    Directory of Open Access Journals (Sweden)

    Jens Vindahl Kringelum

    Full Text Available The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as rational vaccine design and development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource-intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may, however, have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Dealing improperly with these aspects leads to many artificial false positive predictions and hence to incorrectly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve
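The AUC figures quoted in this record can be read through the standard definition of ROC AUC as a pairwise ranking probability; a generic sketch with made-up residue scores (not DiscoTope's implementation) is:

```python
def auc(scores_pos, scores_neg):
    # ROC AUC via the Mann-Whitney U statistic: the probability that a
    # randomly chosen positive (epitope) residue outscores a randomly
    # chosen negative (non-epitope) residue, counting ties as 1/2.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives 1.0; random-like overlap trends toward 0.5.
assert auc([0.9, 0.8], [0.1, 0.2]) == 1.0
assert auc([0.5], [0.5]) == 0.5
```

Mislabeling unmapped true epitopes as negatives depresses exactly this pairwise statistic, which is why the benchmark redefinition raises the measured AUC.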

  9. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  10. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  11. BN-600 hybrid core benchmark analyses (phases 1, 2 and 3) (draft synthesis report)

    International Nuclear Information System (INIS)

    This report presents the results of benchmark analyses for a hybrid UOX/MOX fuelled core of the BN-600 reactor. This benchmark was proposed during the first Research Co-ordination Meeting (RCM) of the Co-ordinated Research Project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects, which took place in Vienna on 24 - 26 November 1999. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides. There has been no change in the view that energy production with breeding of fissile materials is the main goal of fast reactor development to ensure long-term fuel supply. However, with low-cost uranium increasingly available from the 1980s onwards, and before the breeding role of fast reactors could be justified economically, the emphasis of fast reactor development shifted to incineration of stockpiled plutonium and partitioning and transmutation (P and T) of nuclear wastes to meet contemporary demands. Following a proposal of the Russian Federation at the 32nd Annual Meeting of the International Working Group on Fast Reactors (IWG-FR), held in May 1999, a hybrid UOX/MOX (mixed oxide) fuelled BN-600 reactor core, which has a combination of highly enriched uranium (HEU) and mixed oxide (MOX) assemblies in the core region, was chosen as the calculational model. Hence the benchmark clearly addresses the issue of using weapons-grade plutonium for energy production in a mixed UOX/MOX fuelled core of the BN-600 reactor. The input data for the benchmark neutronics calculations have been prepared by OKBM and IPPE (Russia). The input data were reviewed and modified in the first RCM of this CRP.
The organizations participating in the BN-600 hybrid core benchmark analyses are: ANL from the USA, CEA and SA (its previous name was AEAT) from EU (France and the

  12. Gd-2 fuel cycle Benchmark (version 1)

    International Nuclear Information System (INIS)

    A new benchmark based on the Dukovany NPP Unit-3 history of Gd-2 fuel utilisation is defined. The main goal of this benchmark is to compare results obtained by different codes used for neutron-physics calculations. Input data, including the definition of the initial state, are described in this paper. The requested output data format for automatic processing is defined. This paper includes: a) fuel description; b) definition of the starting point and five fuel cycles with profiled 3.82% fuel only; c) definition of four fuel cycles with Gd-2 fuel (enr. 4.25%); d) recommendations for calculation; e) a list of parameters for comparison; f) the methodology of comparison; g) an example of results comparison (Authors)

  13. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries...... in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes....

  14. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
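As a minimal illustration of the BMD concept described in this record: given a fitted dose-response curve, the BMD is the dose at which extra risk over background equals the BMR. The one-hit model and its parameter values below are hypothetical (not from the paper), and the sketch is a plain point estimate, not the Bayesian or frequentist-limit machinery the abstract discusses:

```python
import math

def extra_risk(dose, background=0.05, beta=0.7):
    # Hypothetical one-hit model: P(d) = g + (1 - g) * (1 - exp(-beta * d)).
    # Extra risk over background is (P(d) - P(0)) / (1 - P(0)).
    p0 = background
    pd = background + (1.0 - background) * (1.0 - math.exp(-beta * dose))
    return (pd - p0) / (1.0 - p0)

def benchmark_dose(bmr=0.10, lo=0.0, hi=100.0):
    # Bisect for the dose whose extra risk equals the benchmark response;
    # extra_risk is monotone increasing in dose, so bisection converges.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this model the extra risk reduces to 1 - exp(-beta*d), so the BMD has the closed form -ln(1 - BMR)/beta, against which the bisection can be checked. A BMDL would then be a lower confidence (or credible) limit on this quantity.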

  15. Risk Management with Benchmarking

    OpenAIRE

    Suleyman Basak; Alex Shapiro; Lucie Teplá

    2005-01-01

    Portfolio theory must address the fact that, in reality, portfolio managers are evaluated relative to a benchmark, and therefore adopt risk management practices to account for the benchmark performance. We capture this risk management consideration by allowing a prespecified shortfall from a target benchmark-linked return, consistent with growing interest in such practice. In a dynamic setting, we demonstrate how a risk-averse portfolio manager optimally under- or overperforms a target benchm...

  16. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  17. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  18. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as...... important; (2) will, that activists and issue entrepreneurs will carry the message forward; and (3) expertise, that benchmarks created can be defended as accurate representations of what is happening on the issue of concern. We contrast two types of benchmarking cycles where salience, will, and expertise...

  19. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes, which makes it difficult to...

  20. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  1. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
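The benchmarks and improvement potentials such a system returns can be illustrated in the simplest DEA setting, a single input and a single output, where input-oriented CCR efficiency reduces to each unit's productivity relative to the best observed productivity (a toy sketch, not the authors' interactive system, which uses full multi-dimensional DEA and statistical models):

```python
def ccr_efficiency(inputs, outputs, k):
    # Single-input, single-output CCR efficiency of unit k: its
    # output/input ratio relative to the best observed ratio
    # (the efficiency frontier). Efficient units score 1.0.
    best = max(y / x for x, y in zip(inputs, outputs))
    return (outputs[k] / inputs[k]) / best

def improvement_potential(inputs, outputs, k):
    # Input-oriented improvement: how much input unit k could save while
    # producing its current output at frontier productivity.
    return inputs[k] * (1.0 - ccr_efficiency(inputs, outputs, k))

# Unit 0 produces 2 units of output from 2 of input; unit 1 needs 4.
inputs, outputs = [2.0, 4.0], [2.0, 2.0]
assert ccr_efficiency(inputs, outputs, 0) == 1.0   # on the frontier
assert ccr_efficiency(inputs, outputs, 1) == 0.5   # half as productive
```

With multiple inputs and outputs, the same efficiency score is obtained by solving a small linear program per unit; the interactive element in the paper's design lies in letting users choose which frontier to be compared against.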

  2. ICSBEP criticality benchmarking for nuclear data validations, KAMINI, PURNIMA-II and PURNIMA-I

    International Nuclear Information System (INIS)

    India has contributed three experimental benchmarks to the International Handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) of the US-DOE/NEA-DB. This presentation describes the interesting experience gained in creating these three Indian experimental benchmarks for nuclear data and code validation studies. The concept and definition of a benchmark are also reviewed for convenience. A series of sensitivity studies is performed to assess the various uncertainties that arise in the knowledge of the description of the actual system

  3. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
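The manufactured-solution code verification benchmarks recommended above can be illustrated with a minimal sketch: choose an exact solution, derive the forcing term it implies, solve numerically at two resolutions, and confirm the observed order of accuracy matches the scheme's formal order. The 1D Poisson example below is a generic illustration, not taken from the paper:

```python
import math

def solve_poisson(n):
    # Second-order finite differences for -u'' = f on [0,1], u(0)=u(1)=0,
    # with manufactured solution u(x) = sin(pi x), hence f = pi^2 sin(pi x).
    # Returns the max-norm error against the manufactured solution.
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = h^2 f.
    a = [-1.0] * (n - 1)                     # sub-diagonal
    b = [2.0] * (n - 1)                      # main diagonal (modified in place)
    d = [f[i] * h * h for i in range(1, n)]  # right-hand side
    for i in range(1, n - 1):
        m = a[i] / b[i - 1]
        b[i] -= m * (-1.0)
        d[i] -= m * d[i - 1]
    u = [0.0] * (n + 1)
    u[n - 1] = d[-1] / b[-1]
    for i in range(n - 2, 0, -1):
        u[i] = (d[i - 1] + u[i + 1]) / b[i - 1]
    return max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n + 1))

# Doubling the resolution should cut the error by about 2^2 = 4,
# i.e. observed order log2(e_coarse / e_fine) close to 2.
e_coarse, e_fine = solve_poisson(16), solve_poisson(32)
assert 3.5 < e_coarse / e_fine < 4.5
```

A code that fails this kind of check has a coding or discretization defect regardless of how well it later matches experimental data, which is why the paper separates code verification from validation.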

  4. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  5. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than spherical harmonic expansion methods, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using spherical harmonic expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) as well as one with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. Here we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as in the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability (e

  6. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  7. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
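The first-tier screening logic described above reduces to comparing measured concentrations against benchmark values and retaining only the exceedances as COPCs. A schematic sketch with hypothetical chemicals and values (real screens use the report's species- and chemical-specific NOAEL/LOAEL tables):

```python
def screen_contaminants(measured, benchmarks):
    # Tier-1 screen: a contaminant whose environmental concentration
    # exceeds its NOAEL-based benchmark is retained as a contaminant of
    # potential concern (COPC); the rest are excluded from further study.
    # A chemical with no benchmark is treated here as not screenable.
    return [chem for chem, conc in measured.items()
            if conc > benchmarks.get(chem, float("inf"))]

# Hypothetical concentrations (mg/L) against hypothetical benchmarks:
measured = {"cadmium": 0.5, "lead": 0.1, "zinc": 3.0}
benchmarks = {"cadmium": 0.2, "lead": 0.3, "zinc": 5.0}
# Only cadmium exceeds its benchmark and is carried into tier 2.
```

In the second tier this binary screen is replaced by a weight-of-evidence judgment, where benchmark exceedance is only one line of evidence alongside toxicity tests, biotic surveys, body burdens, and biomarkers.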

  8. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  9. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  10. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  11. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  12. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  13. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  14. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...

  15. Accelerator shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, H.; Ban, S.; Nakamura, T. [and others

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  16. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  17. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  18. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars set the course for economic policy aimed at higher productivity growth.

  19. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

More than 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The benchmark calculations reported here are part of an ongoing multiyear, multiperson effort to benchmark version 4 of the MCNP code. MCNP is a general-purpose, three-dimensional, continuous-energy Monte Carlo neutron, photon, and electron transport code. It is used around the world for many applications including aerospace, oil-well logging, physics experiments, criticality safety, reactor analysis, medical imaging, defense applications, accelerator design, radiation hardening, radiation shielding, health physics, fusion research, and education. The first phase of the benchmark project consisted of analytic and photon problems. The second phase consists of the ENDF/B-V neutron problems reported in this paper and in more detail in the comprehensive report. A cooperative program being carried out at General Electric, San Jose, consists of light water reactor benchmark problems. A subsequent phase focusing on electron problems is planned.

  20. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm...... survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  1. Remote Sensing Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.

    Piscataway, NJ : IEEE Press, 2012, s. 1-4. ISBN 978-1-4673-4960-4. [IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS). Tsukuba Science City (JP), 11.11.2012] R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant ostatní: CESNET(CZ) 409/2011 Keywords : remote sensing * segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/mikes-remote sensing segmentation benchmark.pdf

  2. WIMS Library updating

    International Nuclear Information System (INIS)

At the end of 1990 the WIMS Library Update Project (WLUP) was initiated at the International Atomic Energy Agency. The project was organized as an international research project, coordinated at the J. Stefan Institute. Up to now, 22 laboratories from 19 countries have joined the project. Phase 1 of the project, which included WIMS input optimization for five experimental benchmark lattices, has been completed. The work presented in this paper also describes the results of Phase 2 of the project, in which cross sections based on the ENDF/B-IV evaluated nuclear data library have been processed. (author)

  3. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  4. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  5. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, in turn determines which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe, and then proposes a method to forecast and benchmark, performance.
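The ratio-based benchmarking described above can be illustrated with a minimal sketch. All firm names and figures below are hypothetical, and comparing each firm to the peer median is one simple choice of benchmark, not the econometric method the paper proposes:

```python
# Minimal sketch of financial-ratio benchmarking against a peer group.
# All firms and figures are hypothetical illustration data.

def return_on_assets(net_income, total_assets):
    """Profitability ratio: net income per unit of assets."""
    return net_income / total_assets

# Hypothetical peer group: (net income, total assets)
firms = {
    "FirmA": (12.0, 150.0),
    "FirmB": (8.0, 160.0),
    "FirmC": (20.0, 210.0),
    "FirmD": (5.0, 90.0),
}

ratios = {name: return_on_assets(ni, ta) for name, (ni, ta) in firms.items()}

# Benchmark each firm against the peer median.
sorted_ratios = sorted(ratios.values())
n = len(sorted_ratios)
median = (sorted_ratios[n // 2] + sorted_ratios[(n - 1) // 2]) / 2

for name, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ROA={r:.3f}, gap to peer median={r - median:+.3f}")
```

A real study would replace the median with an estimated benchmark from an econometric model, as the abstract indicates.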

  6. CCF benchmark test

    International Nuclear Information System (INIS)

A benchmark test on common cause failures (CCF) was performed giving interested institutions in Germany the opportunity of demonstrating and justifying their interpretations of events and their methods and models for analyzing CCF. The participants of this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess in a qualitative way some 200 event reports on isolation valves. They then were to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference of well defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  7. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  8. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and outline NAS's future plans for the NPB.

  9. BN-600 fully MOX fuelled core benchmark analyses (phase 4) (draft synthesis report)

    International Nuclear Information System (INIS)

This report presents the results of benchmark analysis for a fully Mixed Oxide (MOX) fuelled core of the BN-600 reactor. This benchmark analysis is an extension of the study of a hybrid UOX/MOX fuelled core performed during 1999-2001. These benchmark core analyses have been performed within the frame of the IAEA sponsored Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects', commenced in 1999. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides. In the hybrid BN-600 core benchmark analyses, the substantial spread between the different participants noticed for several reactivity coefficients and power distributions did not have a significant impact on the transient behavior prediction, especially up to the onset of sodium boiling in the ULOF transient analyses. This result highlighted the compensating effects between several reactivity effects in the specific design of the hybrid core, mainly loaded with UOX fuel, and gave confidence that the outcome of this type of transient could be understood in the partially MOX fuelled hybrid core. Given the significant interest in analyzing a fully MOX fuelled core, a study of a BN-600 fully MOX fuelled core design with sodium plenum above the core, including transient analyses, was defined as an additional Phase 4 at the 3rd Research Co-ordination Meeting (RCM) of the CRP. The specifications and input data for the benchmark neutronics calculations have been prepared by IPPE (Russia) and posted to the collaborator web site. The specifications given here describe a preliminary core model variant. It represents only conceptual approaches to a BN-600 full MOX core design and does not represent the real technical

  10. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies is then discussed, after which...

  11. Texture Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    Los Alamitos : IEEE Press, 2008, s. 2933-2936. ISBN 978-1-4244-2174-9. [19th International Conference on Pattern Recognition. Tampa (US), 07.12.2008-11.12.2008] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA ČR GA102/07/1594; GA ČR GA102/08/0593 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * image segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/haindl-texture segmentation benchmark.pdf

  12. Radiography benchmark 2014

    International Nuclear Information System (INIS)

The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogenous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available, considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed

  13. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperfor...

  14. Texture Fidelity Benchmark

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Kudělka, Miloš

    Los Alamitos, USA: IEEE Computer Society CPS, 2014. ISBN 978-1-4799-7971-4. [International Workshop on Computational Intelligence for Multimedia Understanding 2014 (IWCIM). Paris (FR), 01.11.2014-02.11.2014] R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Benchmark testing * fidelity criteria * texture Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0439654.pdf

  15. Cloud benchmarking for performance

    OpenAIRE

    Varghese, Blesson; Akgun, Ozgur; Miguel, Ian; Thai, Long; Barker, Adam

    2014-01-01

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups (memory, processor, computation and storage) is to the applic...
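The weight-driven selection can be sketched as a toy scoring function. The four group names come from the abstract; the normalization scheme, VM names, and benchmark numbers are all illustrative assumptions, not the methodology's actual scoring rule:

```python
# Sketch of weight-based ranking of cloud VM types.
# Group names (memory, processor, computation, storage) come from the
# abstract; the normalization and all benchmark numbers are invented.

GROUPS = ["memory", "processor", "computation", "storage"]

# Hypothetical raw benchmark results per VM (higher = better).
vms = {
    "vm.small":  {"memory": 40, "processor": 30, "computation": 25, "storage": 50},
    "vm.medium": {"memory": 70, "processor": 60, "computation": 55, "storage": 60},
    "vm.large":  {"memory": 90, "processor": 95, "computation": 90, "storage": 70},
}

def rank_vms(vms, weights):
    """Combine per-group scores, normalized to [0, 1] within each group."""
    scored = {name: 0.0 for name in vms}
    for group in GROUPS:
        best = max(vm[group] for vm in vms.values())
        for name, vm in vms.items():
            scored[name] += weights[group] * vm[group] / best
    return sorted(scored.items(), key=lambda kv: -kv[1])

# User weights: this workload cares mostly about memory and processor.
weights = {"memory": 0.4, "processor": 0.3, "computation": 0.2, "storage": 0.1}
ranking = rank_vms(vms, weights)
print(ranking[0][0])  # best-suited VM under these weights
```

Changing the weights reorders the ranking, which is the point of letting the user express what matters to the application.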

  16. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  17. WIPP Benchmark calculations with the large strain SPECTROM codes

    International Nuclear Information System (INIS)

This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large strain codes compare favorably with results from other codes used to solve the problems

  18. About Updating

    OpenAIRE

    Smets, Philippe

    2013-01-01

Survey of several forms of updating, with a practical illustrative example. We study several updating (conditioning) schemes that emerge naturally from a common scenario to provide some insights into their meaning. Updating is a subtle operation: there is no single method, no single 'good' rule, and the choice of the appropriate rule must always be given due consideration. Planchet (1989) presents a mathematical survey of many rules. We focus on the practical meaning of these rules. After sum...

  19. Entropy-based benchmarking methods

    OpenAIRE

    Temurshoev, Umed

    2012-01-01

We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  20. Benchmarking in the Semantic Web

    OpenAIRE

    García-Castro, Raúl; Gómez-Pérez, A.

    2009-01-01

The Semantic Web technology needs to be thoroughly evaluated to provide objective results and achieve substantial improvement in its quality; this will speed up the transfer of the technology from research to industry. This chapter presents software benchmarking, a process that aims to improve Semantic Web technology and to find best practices. The chapter also describes a specific software benchmarking methodology and shows how this methodology has been used to benchmark the inter...

  1. Selecting benchmarks for reactor calculations

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Helgesson, Petter; Pomp, Stephan; Österlund, Michael; Rochman, Dimitri; Koning, Arjan J.

    2014-01-01

Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in Gen-IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user, thereby resulting in a user...

  2. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  3. RESULTS FOR THE INTERMEDIATE-SPECTRUM ZEUS BENCHMARK OBTAINED WITH NEW 63,65Cu CROSS-SECTION EVALUATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Sobes, Vladimir [ORNL; Leal, Luiz C [ORNL

    2014-01-01

The four HEU, intermediate-spectrum, copper-reflected Zeus experiments have shown discrepant results between measurement and calculation for the last several major releases of the ENDF library. The four benchmarks show a trend in reported C/E values with increasing energy of average lethargy causing fission. Recently, ORNL has made improvements to the evaluations of three key isotopes involved in the benchmark cases in question: an updated evaluation for 235U and new evaluations for 63,65Cu. This paper presents the benchmarking results of the four intermediate-spectrum Zeus cases using the three updated evaluations.

  4. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins, Robert

    2010-01-01

Benchmarking studies are enjoying growing popularity in the field of regional policy. This paper analyses the concept of regional benchmarking and its links with regional policymaking processes. I develop a typology of regional benchmarking exercises and benchmarkers and subject the literature to a critical review. I argue that the critics of regional benchmarking fail to recognize the variety and developm...

  5. Shielding benchmark test

    International Nuclear Information System (INIS)

Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a 252Cf neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI, and the calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)

  6. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    Science.gov (United States)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared-memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
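The coarse-grain pattern the abstract describes (independent per-zone updates followed by a boundary exchange each time step) can be sketched in a toy 1-D form; the smoothing update and zone contents below are invented for illustration and are not the actual LU/BT/SP kernels:

```python
# Toy sketch of the NPB Multi-Zone pattern: each zone advances
# independently (exploitable coarse-grain parallelism), then zones
# exchange boundary values. The 3-point smoothing and the data are
# illustrative only.

def step_zone(zone):
    """Independent per-zone update: simple 3-point smoothing."""
    inner = [(zone[i - 1] + zone[i] + zone[i + 1]) / 3.0
             for i in range(1, len(zone) - 1)]
    return [zone[0]] + inner + [zone[-1]]

def exchange_boundaries(zones):
    """Neighboring zones swap their shared edge values in place."""
    for left, right in zip(zones, zones[1:]):
        left[-1], right[0] = right[0], left[-1]

zones = [[0.0, 1.0, 2.0, 3.0], [3.0, 4.0, 5.0, 6.0]]
for _ in range(5):
    zones = [step_zone(z) for z in zones]   # embarrassingly parallel step
    exchange_boundaries(zones)

# A linear profile with a consistent shared boundary is a fixed point
# of this scheme, so the coupled zones stay mutually consistent.
print(zones)
```

In the real suite the per-zone step is a full flow solver and the exchange carries boundary-value arrays between MPI ranks or OpenMP threads, but the alternation of independent updates and exchanges is the same.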

  7. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in Homogenization Methods of Climate Series: An Integrated Approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
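The break-insertion procedure described above (break times as a Poisson process, break sizes normally distributed) can be sketched as follows; the break rate, size standard deviation, series length and toy annual cycle are illustrative assumptions, not the values used in the actual HOME benchmark:

```python
# Sketch of inserting break-type inhomogeneities into a simulated
# monthly series: a constant per-month break probability approximates
# Poisson arrivals, and each break shifts all later values by a
# normally distributed amount. All parameter values are illustrative.
import random

def insert_breaks(series, rate_per_month=0.01, size_sd=0.8, seed=0):
    rng = random.Random(seed)
    broken = list(series)
    breakpoints = [t for t in range(1, len(series))
                   if rng.random() < rate_per_month]
    for t in breakpoints:
        shift = rng.gauss(0.0, size_sd)  # normally distributed break size
        for i in range(t, len(broken)):
            broken[i] += shift
    return broken, breakpoints

# Toy homogeneous monthly series with a crude annual peak.
homogeneous = [15.0 + (5.0 if m % 12 == 6 else 0.0) for m in range(240)]
inhomogeneous, bps = insert_breaks(homogeneous)
print(f"inserted {len(bps)} breaks into a {len(homogeneous)}-month series")
```

A homogenization algorithm would then be scored by how well it recovers `homogeneous` from `inhomogeneous`, e.g. with the centered RMSE mentioned in the abstract.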

  8. SSI and structural benchmarks

    International Nuclear Information System (INIS)

This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks: (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases with variable water table depths. In this paper, results related to cut-off depths, beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described

  9. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants, and the research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  10. Benchmark experiments for nuclear data

    International Nuclear Information System (INIS)

    Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)

  11. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  12. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  13. Wind turbine reliability database update.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  14. A proposal to Asian countries with operating research reactors for making nuclear criticality safety benchmark evaluations

    International Nuclear Information System (INIS)

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). The 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated yearly by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear criticality facilities around the world. However, the handbook lacks criticality data for 20 wt%-enriched uranium fuel. The author proposes to make benchmark specifications derived from modern research reactors in Asia. Future evaluations of these reactors will help fill the 'enrichment gap'. (author)

  15. Benchmarking Combined Biological Phosphorus and Nitrogen Removal Wastewater Treatment Processes

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jørgensen, Sten Bay

    2004-01-01

    This paper describes the implementation of a simulation benchmark for studying the influence of control strategy implementations on combined nitrogen and phosphorus removal processes in a biological wastewater treatment plant. The presented simulation benchmark plant and its performance criteria are to a large extent based on the already existing nitrogen removal simulation benchmark. The paper illustrates and motivates the selection of the treatment plant lay-out, the selection of the biological process model, the development of realistic influent disturbance scenarios for dry, rain and storm weather conditions respectively, the definition of performance indexes that include the phosphorus removal processes, and the selection of a suitable operating point for the plant. Two control loops were implemented: one for dissolved oxygen control using the oxygen transfer coefficient K(L)a as manipulated variable...

  16. Circular Updates

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Circular Updates are periodic sequentially numbered instructions to debriefing staff and observers informing them of changes or additions to scientific and specimen...

  17. Cybersecurity Update

    CERN Document Server

    Heagerty, Denise

    2008-01-01

    An update on recent security issues and vulnerabilities affecting Windows, Linux and Mac platforms. This talk is based on contributions and input from a range of colleagues both within and outside CERN. It covers clients, servers and control systems.

  18. Email Updates

    Science.gov (United States)

    MedlinePlus Email Updates (https://www.nlm.nih.gov/medlineplus/listserv.html): subscribers receive MedlinePlus updates by email and can view their email history or unsubscribe; the page also explains how to prevent MedlinePlus emails from being marked as "spam" or "junk".

  19. Website updates

    Data.gov (United States)

    National Aeronautics and Space Administration — Updates to Website: (Please add new items at the top of this description with the date of the website change) May 9, 2012: Uploaded experimental data in matlab...

  20. Selecting benchmarks for reactor calculations

    International Nuclear Information System (INIS)

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user and thereby introduces a user bias into the process. In this paper we present a method for the selection of benchmarks for reactor applications and for uncertainty reduction, based on the Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
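
The correlation-based similarity measure described in the abstract can be sketched in a few lines. The k-eff samples below are synthetic stand-ins for real TMC output (where each sample would come from a transport calculation with one random nuclear-data file); all numbers are invented for illustration.

```python
import math
import random

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Synthetic stand-in for TMC output: k-eff of the application and of one
# benchmark, each "computed" with the same 1000 random nuclear-data files.
shared = [random.gauss(0.0, 1.0) for _ in range(1000)]
app_keff = [1.000 + 0.005 * s + random.gauss(0.0, 0.001) for s in shared]
bench_keff = [0.998 + 0.004 * s + random.gauss(0.0, 0.001) for s in shared]

# |r| close to 1 means the benchmark shares the dominant nuclear-data
# uncertainty with the application, making it a good candidate for the
# accept/reject or weighting schemes mentioned in the abstract.
r = pearson(app_keff, bench_keff)
print(round(r, 2))
```

In a real study the accept/reject step would then keep only those random files whose benchmark k-eff falls within the experimental uncertainty, shrinking the spread of the application result.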

  1. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    Energy Technology Data Exchange (ETDEWEB)

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. This report is an update of three prior reports.
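
The screening rule described in the abstract reduces to a simple comparison against the lower benchmark. The chemicals, concentrations and benchmark values below are invented for illustration only, not actual regulatory numbers.

```python
def screen_contaminant(concentration, lower_benchmark):
    """Screening rule from the abstract: a concentration (or detection
    limit) above the lower benchmark warrants further analysis; one
    below it lets the chemical be dropped from further study."""
    return "further analysis" if concentration > lower_benchmark else "eliminate"

# Hypothetical sediment concentrations and lower benchmarks in mg/kg;
# these numbers are illustrative only.
samples = {"cadmium": (1.2, 0.6), "toluene": (0.05, 0.67)}
decisions = {name: screen_contaminant(conc, bench)
             for name, (conc, bench) in samples.items()}
print(decisions)
```

With multiple benchmarks, as the report recommends, the same comparison would be repeated per benchmark and a chemical retained if any lower benchmark is exceeded.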

  2. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric vehicles. The fuels were assessed against three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  3. Cleanroom energy benchmarking results

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft{sup 2}, is not a good indicator with the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  4. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
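
The distortion debated in the abstract is easy to demonstrate numerically; the per-query times below are invented for illustration.

```python
import math

# Per-query run times (seconds) with one pathological query; values invented.
times = [1.0, 1.0, 1.0, 100.0]

arithmetic = sum(times) / len(times)
geometric = math.exp(sum(math.log(t) for t in times) / len(times))

# The geometric mean rewards speeding up the three fast queries as much
# as the slow one; the arithmetic mean tracks total elapsed time, which
# is what a decision-support user actually waits for.
print(round(arithmetic, 2), round(geometric, 2))
```

Here halving any single query improves the geometric mean by the same factor, even though halving the 100-second query saves vastly more wall-clock time, which is the core of the argument for the arithmetic mean.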

  5. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    ... in order to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  6. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to ... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing...

  7. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  8. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Catalina SITNIKOV; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  9. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type...

  10. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    ... in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  11. Benchmarking Developing Asia's Manufacturing Sector

    OpenAIRE

    Felipe, Jesus; Gemma ESTRADA

    2007-01-01

    This paper documents the transformation of developing Asia's manufacturing sector during the last three decades and benchmarks its share in GDP with respect to the international regression line by estimating a logistic regression.

  12. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  13. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  14. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    This article is a short version of the ENET number 210369. This report for the Swiss Federal Office of Energy (SFOE) describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blue print' and a basic concept. The aims of the pilot project - to check out the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested' are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report is concluded with a listing of questions concerning data collection and analysis as well as operational and capital costs that are still to be answered

  15. Benchmarking hypercube hardware and software

    Science.gov (United States)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  16. Strategic Behaviour under Regulation Benchmarking

    OpenAIRE

    Jamasb, Tooraj; Nillesen, Paul; Michael G. Pollitt

    2003-01-01

    Liberalisation of generation and supply activities in the electricity sectors is often followed by regulatory reform of distribution networks. In order to improve the efficiency of distribution utilities, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the 'regulation game', the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behav...

  17. Updated and revised neutron reaction data for 233U

    Institute of Scientific and Technical Information of China (English)

    YU Bao-Sheng; CHEN Guo-Chang; ZHANG Hua; CAO Wen-Tian; TANG Guo-You; TAO Xi

    2013-01-01

    A complete set of n+233U neutron reaction data from 10{sup -5} eV to 20 MeV is updated and revised based on the evaluated experimental data and the feedback from various benchmark tests. The main revised quantities are nubars, cross sections, and angular distributions, etc. The benchmark tests indicate that the present evaluated data achieve very promising results.

  18. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
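
The merge of machine and program characterizations described above amounts to a dot product of per-operation costs and per-operation counts. The abstract-machine operations and costs below are invented for illustration, not the report's actual parameters.

```python
# A machine is characterized by the cost of each abstract-machine
# operation, and a program by its operation counts; estimated execution
# time for the machine/program combination is their dot product.
machine_cost_ns = {"fadd": 2.0, "fmul": 3.0, "load": 4.0, "branch": 1.0}
program_counts = {"fadd": 5_000_000, "fmul": 2_000_000,
                  "load": 8_000_000, "branch": 1_000_000}

est_ns = sum(program_counts[op] * machine_cost_ns[op] for op in program_counts)
print(est_ns / 1e9, "seconds (estimated)")
```

Because the two characterizations are independent, measuring M machines and P programs yields M×P runtime estimates from only M+P measurement campaigns, which is the practical appeal of the approach.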

  19. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  20. Update '98.

    Science.gov (United States)

    Mock, Karen R.

    1998-01-01

    Updates cases and issues previously discussed in this regular column on human rights in Canada, including racism and anti-Semitism, laws on hate crimes, hate sites on the World Wide Web, the use of the "free speech" defense by hate groups, and legal challenges to antiracist groups by individuals criticized by them. (DSK)

  1. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
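
An error-driven learning rule of the general kind the benchmarks evaluate can be sketched as a delta rule; the plant, signals and learning rate here are invented for illustration and are not the authors' actual adaptive controller.

```python
# Minimal delta-rule sketch: output weights are nudged against the
# current output error until the output matches the target.
def delta_rule(w, activity, error, lr=0.1):
    # w[i][j]: weight from neuron j to output dimension i
    return [[wij - lr * error[i] * activity[j] for j, wij in enumerate(row)]
            for i, row in enumerate(w)]

w = [[0.0, 0.0]]          # one output, two "neurons"
activity = [1.0, 0.5]     # fixed neural activity for this toy example
target = [1.0]            # desired motor command

for _ in range(200):
    out = [sum(wij * a for wij, a in zip(row, activity)) for row in w]
    error = [o - t for o, t in zip(out, target)]
    w = delta_rule(w, activity, error)

print(round(out[0], 3))
```

In the closed-loop benchmarks the error would instead come from the simulated arm's deviation from its trajectory, with the same weight-update structure.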

  2. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  3. ZZ IHEAS-BENCHMARKS, High-Energy Accelerator Shielding Benchmarks

    International Nuclear Information System (INIS)

    Description of program or function: Six kinds of Benchmark problems were selected for evaluating the model codes and the nuclear data for the intermediate and high energy accelerator shielding by the Shielding Subcommittee in the Research Committee on Reactor Physics. The benchmark problems contain three kinds of neutron production data from thick targets due to proton, alpha and electron, and three kinds of shielding data for secondary neutron and photon generated by proton. Neutron and photo-neutron reaction cross section data are also provided for neutrons up to 500 MeV and photons up to 300 MeV, respectively

  4. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  5. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    OpenAIRE

    Dreher, Patrick; Byun, Chansup; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many...

  6. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published along with a number of comparisons in the Final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  7. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to the NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of Keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes.

  8. Benchmarking of radiological departments. Starting point for successful process optimization

    International Nuclear Information System (INIS)

    Continuous optimization of the process of organization and medical treatment is part of the successful management of radiological departments. The focus of this optimization can be cost units such as CT and MRI or the radiological parts of total patient treatment. Key performance indicators for process optimization are cost-effectiveness, service quality and quality of medical treatment. The potential for improvements can be seen by comparison (benchmark) with other hospitals and radiological departments. Clear definitions of key data and criteria are absolutely necessary for comparability. There is currently little information in the literature regarding the methodology and application of benchmarks especially from the perspective of radiological departments and case-based lump sums, even though benchmarking has frequently been applied to radiological departments by hospital management. The aim of this article is to describe and discuss systematic benchmarking as an effective starting point for successful process optimization. This includes the description of the methodology, recommendation of key parameters and discussion of the potential for cost-effectiveness analysis. The main focus of this article is cost-effectiveness (efficiency and effectiveness) with respect to cost units and treatment processes. (orig.)

  9. ABM11 PDFs and the cross section benchmarks in NNLO

    OpenAIRE

    Alekhin, S.; Blümlein, J.; Moch, S. -O.

    2013-01-01

    We report an updated version of the ABKM09 NNLO PDF fit, which includes the most recent HERA collider data on the inclusive cross sections and an improved treatment of the heavy-quark contribution to deep-inelastic scattering using advantages of the running-mass definition for the heavy quarks. The ABM11 PDFs obtained from the updated fit are in a good agreement with the recent LHC data on the W- and Z-production within the experimental and PDF uncertainties. We also perform a determination o...

  10. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  11. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran programming. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition of 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 Compton-scattered spectra measurements for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam to hit the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  12. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  13. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  14. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  15. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  16. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  17. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  18. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Henning, Theuns; Essakali, Mohammed Dalil; Oh, Jung Eun

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...

  19. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  20. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  1. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and at the state of gateway overload. It explains possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
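    The hybrid step/binary search the abstract mentions could be sketched as below. This is not the authors' implementation: the bracketing bounds, tolerance, and the `measure_forwarding_rate` probe (which would wrap a real traffic-generator trial) are hypothetical placeholders for illustration.

```python
def equilibrium_throughput(measure_forwarding_rate, low, high, tol=1.0):
    """Find the largest offered load (e.g. packets/s) at which the
    gateway's forwarding rate still tracks the offered load.

    measure_forwarding_rate(load) -> measured forwarding rate at that load.
    """
    # Step phase: walk up in coarse steps until the forwarding rate
    # falls behind the offered load (the gateway starts dropping packets).
    step = (high - low) / 10.0
    load = low
    while load + step <= high and measure_forwarding_rate(load + step) >= load + step - tol:
        load += step
    low, high = load, min(load + step, high)
    # Binary phase: narrow the bracket around the performance peak.
    while high - low > tol:
        mid = (low + high) / 2.0
        if measure_forwarding_rate(mid) >= mid - tol:
            low = mid
        else:
            high = mid
    return low

# Usage with a simulated gateway whose rate degrades past its 1000 pkt/s peak:
def simulated_gateway(load):
    return load if load <= 1000.0 else max(0.0, 2000.0 - load)

peak = equilibrium_throughput(simulated_gateway, 0.0, 2000.0, tol=1.0)
```

    In a real benchmark each probe is a full test trial, so the coarse step phase exists to keep the number of trials small before the binary phase refines the result.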

  2. Gaming in a benchmarking environment. A non-parametric analysis of benchmarking in the water sector

    OpenAIRE

    De Witte, Kristof; Marques, Rui

    2009-01-01

    This paper discusses the use of benchmarking in general and its application to the drinking water sector. It systematizes the various classifications on performance measurement, discusses some of the pitfalls of benchmark studies and provides some examples of benchmarking in the water sector. After presenting in detail the institutional framework of the water sector of the Belgian region of Flanders (without benchmarking experiences), Wallonia (recently started a public benchmark) and the Net...

  3. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique of benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....

  4. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  5. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  6. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
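    A portfolio metric of the kind this report describes could be sketched as below. The normalization by floor area and the one-standard-deviation threshold are illustrative assumptions, not the report's actual method; a real metric would also weather-normalize and account for sales volume.

```python
from statistics import mean, stdev

def flag_underperformers(stores, threshold=1.0):
    """stores: list of (name, annual_kwh, floor_area_sqft) tuples drawn
    from each store's own utility bills.

    Computes an energy use intensity (EUI, kWh per square foot) for each
    store and flags those more than `threshold` standard deviations above
    the portfolio mean, i.e. the stores needing extra attention.
    """
    euis = {name: kwh / area for name, kwh, area in stores}
    mu, sigma = mean(euis.values()), stdev(euis.values())
    return [name for name, eui in euis.items() if (eui - mu) / sigma > threshold]

# Hypothetical four-store portfolio; store "D" uses far more energy
# than its peers for the same floor area.
portfolio = [
    ("A", 500_000, 5_000),
    ("B", 520_000, 5_000),
    ("C", 480_000, 5_000),
    ("D", 900_000, 5_000),
]
outliers = flag_underperformers(portfolio)
```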

  7. Local Innovation Systems and Benchmarking

    OpenAIRE

    Cantner, Uwe

    2008-01-01

    This paper reviews approaches used for evaluating the performance of local or regional innovation systems. This evaluation is performed by a benchmarking approach in which a frontier production function can be determined, based on a knowledge production function relating innovation inputs and innovation outputs. In analyses on the regional level and especially when acknowledging regional innovation systems those approaches have to take into account cooperative invention and innovation - the c...

  8. Solution of the fifth AER benchmark with code package ATHLET/BIPR8KN

    International Nuclear Information System (INIS)

    The fifth three-dimensional hexagonal benchmark problem continues a series of international benchmark problems defined during 1992-1996 in the international VVER cooperation forum AER. The initial event of the fifth AER benchmark is a symmetrical break in the middle part of the main steam header at the end of the first fuel cycle and under hot shutdown conditions with one stuck control rod group. The main difference from the previous benchmarks is that the systems of the primary and secondary sides are considered in this benchmark. The main aim of the benchmark is the calculation of the transient after recriticality has been achieved. The solution of the fifth three-dimensional hexagonal dynamic AER benchmark problem obtained by the code package ATHLET/BIPR8KN is presented. The reactor scheme used is described, including the description of the core and the primary and secondary sides. The amount of tuning necessary, and the tuning tools used, to achieve the reference values requested in the problem definition are considered. A comparative analysis of the results obtained by using different levels of model detail is carried out. (author)

  9. Solution of the fifth AER benchmark with code package ATHLET/BIPR8KN

    International Nuclear Information System (INIS)

    The fifth three-dimensional hexagonal benchmark problem continues a series of international benchmark problems defined during 1992-1996 in the international WWER cooperation forum Atomic Energy Research. The initial event of the fifth AER benchmark is a symmetrical break in the middle part of the main steam header at the end of the first fuel cycle and under hot shutdown conditions with one stuck control rod group. The main difference from the previous benchmarks is that the systems of the primary and secondary sides are considered in this benchmark. The main aim of the benchmark is the calculation of the transient after recriticality has been achieved. The solution of the fifth three-dimensional hexagonal dynamic AER benchmark problem obtained by the code package ATHLET/BIPR8KN is presented. The report contains a description of the reactor scheme used, including the description of the core and the primary and secondary sides. The amount of tuning necessary, and the tuning tools used, to achieve the reference values requested in the problem definition are considered. A comparative analysis of the results obtained by using different levels of model detail is carried out. (Authors)

  10. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  11. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  12. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A& M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point.

  13. Prismatic VHTR neutronic benchmark problems

    International Nuclear Information System (INIS)

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point

  14. An introduction to benchmarking in healthcare.

    Science.gov (United States)

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  15. Updates of pathologic myopia.

    Science.gov (United States)

    Ohno-Matsui, Kyoko; Lai, Timothy Y Y; Lai, Chi-Chun; Cheung, Chiu Ming Gemmy

    2016-05-01

    Complications from pathologic myopia are a major cause of visual impairment and blindness, especially in east Asia. The eyes with pathologic myopia may develop loss of the best-corrected vision due to various pathologies in the macula, peripheral retina and the optic nerve. Despite its importance, the definition of pathologic myopia has been inconsistent. The refractive error or axial length alone often does not adequately reflect the 'pathologic myopia'. Posterior staphyloma, which is a hallmark lesion of pathologic myopia, can occur also in non-highly myopic eyes. Recently a revised classification system for myopic maculopathy has been proposed to standardize the definition among epidemiological studies. In this META-PM (meta analyses of pathologic myopia) study classification, pathologic myopia was defined as the eyes having chorioretinal atrophy equal to or more severe than diffuse atrophy. In addition, the advent of new imaging technologies such as optical coherence tomography (OCT) and three dimensional magnetic resonance imaging (3D MRI) has enabled the detailed observation of various pathologies specific to pathologic myopia. New therapeutic approaches including intravitreal injections of anti-vascular endothelial growth factor agents and the advance of vitreoretinal surgeries have greatly improved the prognosis of patients with pathologic myopia. The purpose of this review article is to provide an update on topics related to the field of pathologic myopia, and to outline the remaining issues which need to be solved in the future. PMID:26769165

  16. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the reasons this basic phenomenon remains so relevant, despite being addressed by classical treatments dating from the first decades of the last century, is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected by the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  17. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  18. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
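    The abstract notes that the PageRank kernel is well defined mathematically and linear-algebraic in nature. A minimal power-iteration sketch of generic PageRank (a textbook formulation, not the benchmark's reference implementation; all names and the damping value are illustrative):

    ```python
    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-10):
        """Power iteration on a column-stochastic link matrix.
        adj[i, j] = 1 means page j links to page i."""
        n = adj.shape[0]
        out_deg = adj.sum(axis=0)
        out_deg[out_deg == 0] = 1            # avoid division by zero for sink pages
        m = adj / out_deg                    # normalize each column to sum to 1
        rank = np.full(n, 1.0 / n)
        while True:
            new = (1 - damping) / n + damping * m @ rank
            if np.abs(new - rank).sum() < tol:
                return new
            rank = new

    # Tiny 3-page cycle: 0 -> 1, 1 -> 2, 2 -> 0
    adj = np.array([[0, 0, 1],
                    [1, 0, 0],
                    [0, 1, 0]], dtype=float)
    ranks = pagerank(adj)   # symmetric cycle, so ranks converge to ~1/3 each
    ```

    In a GraphBLAS setting the same iteration would be expressed as a sparse matrix-vector product, which is exactly the structure the benchmark exploits.
    
    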

  19. Benchmarking a fission-product release computer program containing a Gibbs energy minimizer

    International Nuclear Information System (INIS)

    The computer program SOURCE IST 2.0 contains a 1997 model of fission-product vaporization developed by B.J. Corse et al. That model was tractable on computers of that day. However, the understanding of fuel thermochemistry has advanced since that time. A new prototype computer program was developed with: a) a newer Royal Military College of Canada thermodynamic model of uranium dioxide fuel, b) a new model for fission-product vaporization from the fuel surface, c) a user-callable thermodynamics subroutine library, d) an updated nuclear data library, and e) an updated nuclide generation and depletion algorithm. The prototype has been benchmarked against experimental results. (author)

  20. The implementation of benchmarking process in marketing education services by Ukrainian universities

    Directory of Open Access Journals (Sweden)

    G.V. Okhrimenko

    2016-03-01

    Full Text Available The aim of the article. The main task of this research is to consider theoretical and practical aspects of benchmarking at universities. First, the researcher identified the essence of benchmarking: it involves comparing the characteristics of a college or university with those of its leading competitors in the industry and copying proven solutions. Benchmarking tries to eliminate the fundamental problem of comparison – the impossibility of being better than the one from whom the solution is borrowed. Benchmarking therefore involves self-evaluation, including the systematic collection of data and information, with the view to making relevant comparisons of the strengths and weaknesses of performance aspects. Benchmarking identifies gaps in performance, seeks new approaches for improvements, monitors progress, reviews benefits and assures the adoption of good practices. The results of the analysis. There are five types of benchmarking: internal, competitive, functional, procedural and general. Benchmarking is treated as a systematically applied process with specific stages: 1) identification of the study object; 2) identification of businesses for comparison; 3) selection of data collection methods; 4) determination of variations in terms of efficiency and of the levels of future results; 5) communication of the results of benchmarking; 6) development of an implementation plan, initiation of the implementation, monitoring of the implementation; 7) definition of new benchmarks. The researcher gave the results of practical use of the benchmarking algorithm at universities. In particular, monitoring and SWOT-analysis identified competitive practices used at Ukrainian universities. The main criteria for determining the potential for benchmarking of universities were: 1) the presence of new teaching methods at universities; 2) the involvement of foreign lecturers and partners of other universities for cooperation; 3) promoting education services for target groups; 4) violation of

  1. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  2. NFS Tricks and Benchmarking Traps

    OpenAIRE

    Seltzer, Margo; Ellard, Daniel

    2003-01-01

    We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns. We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two. We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new ...

  3. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient

  4. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA

  5. A Privacy-Preserving Benchmarking Platform

    OpenAIRE

    Kerschbaum, Florian

    2010-01-01

    A privacy-preserving benchmarking platform is practically feasible, i.e. its performance is tolerable to the user on current hardware while fulfilling functional and security requirements. This dissertation designs, architects, and evaluates an implementation of such a platform. It contributes a novel (secure computation) benchmarking protocol, a novel method for computing peer groups, and a realistic evaluation of the first ever privacy-preserving benchmarking platform.

  6. Rethinking benchmark dates in international relations

    OpenAIRE

    Buzan, Barry; Lawson, George

    2014-01-01

    International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...

  7. WIPP benchmark II results using SANCHO

    International Nuclear Information System (INIS)

    Results of the second Benchmark problem in the WIPP code evaluation series using the finite element dynamic relaxation code SANCHO are presented. A description of SANCHO and its model for sliding interfaces is given, along with a discussion of the various small routines used for generating stress plot data. Conclusions and a discussion of this benchmark problem, as well as recommendations for a possible third benchmark problem are presented

  8. The design and analysis of benchmark experiments

    OpenAIRE

    Hothorn, Torsten; Leisch, Friedrich; Zeileis, Achim; Hornik, Kurt

    2003-01-01

    The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms for a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point ...

  9. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  10. Pynamic: the Python Dynamic Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large-scale system software and tools using Pynamic.

  11. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
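    The fixed-interval, progress-based rating the patent abstract describes can be sketched generically. This is a hypothetical illustration of the idea only, not the patented system; the task, function names, and interval are all invented:

    ```python
    import time

    def fixed_interval_benchmark(task, interval_s=0.1):
        """Run ever-finer units of a scalable task for a fixed wall-clock
        interval and return the degree of progress reached. The returned
        count plays the role of the benchmark rating: faster machines
        complete more units, i.e. reach a higher resolution."""
        deadline = time.perf_counter() + interval_s
        progress = 0
        while time.perf_counter() < deadline:
            task(progress)        # perform the next unit of work
            progress += 1
        return progress

    # Example scalable task: each call adds one more term of the Leibniz
    # series for pi, so more progress means a finer approximation.
    acc = [0.0]
    def leibniz_term(k):
        acc[0] += 4 * (-1) ** k / (2 * k + 1)

    terms = fixed_interval_benchmark(leibniz_term)
    ```

    The key design point mirrored here is that the workload scales with the machine rather than the interval: every computer gets the same time budget, and the rating is how far through the task set it progressed.
    
    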

  12. Characterizing universal gate sets via dihedral benchmarking

    Science.gov (United States)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π /8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.

  13. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    The paper presents the revision and the analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculation tools like 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations waiting for review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  14. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  15. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  16. Risk Assessment Update: Russian Segment

    Science.gov (United States)

    Christiansen, Eric; Lear, Dana; Hyde, James; Bjorkman, Michael; Hoffman, Kevin

    2012-01-01

    BUMPER-II version 1.95j source code was provided to RSC-E and Khrunichev at the January 2012 MMOD TIM in Moscow. MEMCxP and ORDEM 3.0 environments are implemented as external data files. NASA provided a sample ORDEM 3.0 ".key" & ".daf" environment file set for demonstrating and benchmarking the BUMPER-II v1.95j installation at the Jan-12 TIM. ORDEM 3.0 has been completed and is currently in beta testing. NASA will provide a preliminary set of ORDEM 3.0 ".key" & ".daf" environment files for the years 2012 through 2028. Bumper output files produced using the new ORDEM 3.0 data files are intended for internal use only, not for requirements verification. Output files will contain the words "ORDEM FILE DESCRIPTION = PRELIMINARY VERSION: not for production". The projectile density term in many BUMPER-II ballistic limit equations will need to be updated. Cube demo scripts and output files delivered at the Jan-12 TIM have been updated for the new ORDEM 3.0 data files. Risk assessment results based on ORDEM 3.0 and MEM will be presented for the Russian Segment (RS) of ISS.

  17. BN-600 hybrid core benchmark Phase III results

    International Nuclear Information System (INIS)

    The main objective of the CRP on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects is to validate, verify and improve the methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at using weapons-grade plutonium for energy production in fast reactors. The BN-600 hybrid reactor was taken as the benchmark. Earlier, two-dimensional and three-dimensional diffusion theory BN-600 benchmark calculations were done. This report describes the results of the burnup and heterogeneous calculations done for the proposed BN-600 hybrid core model as part of the Phase III benchmark. The BN-600 benchmark has been analyzed at beginning of cycle (BOC) with the XSET98 data set and 2-D and 3-D diffusion codes. The 2-D results are compared with the earlier results obtained using the older CV2M data set. The core has been burnt for one cycle using the 3-D burnup code FARCOBAB. The burnt core parameters have also been analyzed in 3-D. Heterogeneity effects on reactivity have been computed at BOC. Relative to the use of CV2M data, use of XSET98 data results in increased magnitudes of fuel Doppler worth and sodium density worth. Compared to the 2-D results, in the 3-D results the Keff is lower by about 220 pcm, the sodium density worth is higher by about 30%, and the steel density worth becomes nearly zero or slightly positive from a negative value in 2-D. The conversion ratio at BOC is 0.669 as computed in 3-D. The burnup reactivity loss due to 140 days at full power (1470 MWt) is 0.0252. The conversion ratio at end of cycle (EOC) is 0.701. The other parameters have been estimated with the SHR up condition as desired in the Phase III benchmark specifications. The fuel Doppler worth is 7% more negative, the sodium density worth is 16% less positive, and the steel density worth is more negative at EOC compared to BOC. The absorber rod (SHR) worth is higher by 4.9% at EOC. The heterogeneity effect (core and SHR combined) on the multiplication factor is small. For mid SHR

  18. The fourth research co-ordination meeting (RCM) on 'Updated codes and methods to reduce the calculational uncertainties of liquid metal fast reactors reactivity effects'. Working material

    International Nuclear Information System (INIS)

    The fourth Research Co-ordination Meeting (RCM) of the Co-ordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effect' was held during 19-23 May 2003 in Obninsk, Russian Federation. The general objective of the CRP is to validate, verify and improve the methodologies and computer codes used for the calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides. The first RCM took place in Vienna on 24-26 November 1999. The meeting was attended by 19 participants from 7 Member States and one from an international organization (France, Germany, India, Japan, Rep. of Korea, Russian Federation, the United Kingdom, and IAEA). The participants from two Member States (China and the U.S.A.) provided their results and presentation materials despite being absent from the meeting. The results for several relevant reactivity parameters, obtained by the participants with their own state-of-the-art basic data and codes, were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The contributions of the participants to the benchmark analyses are shown. This report first addresses the benchmark definitions and specifications given for each phase and briefly introduces the basic data, computer codes, and methodologies applied to the benchmark analyses by the various participants. Then, the results obtained by the participants, in terms of calculational uncertainty and their effect on the core transient behavior, are intercompared. Finally, it addresses some conclusions drawn from the benchmarks

  19. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitative and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  20. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  1. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  2. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  3. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel

  4. Benchmark scenarios for the NMSSM

    CERN Document Server

    Djouadi, A; Ellwanger, U; Godbole, R; Hugonie, C; King, S F; Lehti, S; Moretti, S; Nikitenko, A; Rottlander, I; Schumacher, M; Teixeira, A

    2008-01-01

    We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM), in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states a...

  5. VHTRC temperature coefficient benchmark problem

    International Nuclear Information System (INIS)

    As an activity of the IAEA Coordinated Research Programme, a benchmark problem is proposed for the verification of neutronic calculation codes for a low enriched uranium fuel high temperature gas-cooled reactor. Two problems are given on the basis of heating experiments at the VHTRC, which is a pin-in-block type core critical assembly loaded mainly with 4% enriched uranium coated particle fuel. One problem, VH1-HP, asks to calculate the temperature coefficient of reactivity from the subcritical reactivity values at five temperature steps between room temperature, where the assembly is nearly at a critical state, and 200°C. The other problem, VH1-HC, asks to calculate the effective multiplication factor of nearly critical loading cores at room temperature and 200°C. Both problems further ask to calculate cell parameters such as migration area and spectral indices. Experimental results corresponding to the main calculation items are also listed for comparison. (author)

  6. Using benchmarking for the primary allocation of EU allowances. An application to the German power sector

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, J.; Cremer, C.

    2007-07-01

    Basing allocation of allowances for existing installations under the EU Emissions Trading Scheme on specific emission values (benchmarks) rather than on historic emissions may have several advantages. Benchmarking may recognize early action, provide higher incentives for replacing old installations and result in fewer distortions in case of updating, facilitate EU-wide harmonization of allocation rules, or allow for simplified and more efficient closure rules. Applying an optimization model for the German power sector, we analyze the distributional effects of various allocation regimes across and within different generation technologies. Results illustrate that regimes with a single uniform benchmark for all fuels or with a single benchmark for coal- and lignite-fired plants imply substantial distributional effects. In particular, lignite- and old coal-fired plants would be made worse off. Under a regime with fuel-specific benchmarks for gas, coal, and lignite, 50% of the gas-fired plants and 4% of the lignite- and coal-fired plants would face an allowance deficit of at least 10%, while primarily modern lignite-fired plants would benefit. Capping the surplus and shortage of allowances would further moderate the distributional effects, but may tarnish incentives for efficiency improvements and recognition of early action. (orig.)
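    The distributional mechanics behind benchmark-based allocation can be illustrated with a toy calculation: a plant's allocation is its fuel-specific benchmark (t CO2 per MWh) times its generation, and its shortfall is actual emissions minus that allocation. All benchmark values and plant figures below are invented for illustration and are not taken from the study:

    ```python
    # Illustrative fuel-specific benchmarks in t CO2 per MWh (invented values,
    # not the ones used in the German power sector analysis).
    BENCHMARKS = {"gas": 0.365, "coal": 0.750, "lignite": 0.950}

    def allocate(fuel, generation_mwh):
        """Free allocation = fuel-specific benchmark x generation."""
        return BENCHMARKS[fuel] * generation_mwh

    def shortage(fuel, generation_mwh, actual_emissions_t):
        """Positive result: the plant must buy allowances; negative: surplus."""
        return actual_emissions_t - allocate(fuel, generation_mwh)

    # A hypothetical old coal plant emitting 0.90 t/MWh over 1 TWh, allocated
    # against a 0.750 t/MWh coal benchmark:
    deficit = shortage("coal", 1_000_000, 900_000)
    print(deficit)  # 150000.0 t CO2 shortfall
    ```

    The same arithmetic explains the paper's finding: plants emitting above their fuel's benchmark (old coal, lignite under a uniform benchmark) end up short, while plants emitting below it end up with a surplus.
    
    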

  7. WIMS nuclear data library and its updating

    International Nuclear Information System (INIS)

    This report gives a brief overview of the status of the reactor physics computer code WIMS-D/4 and its library. It presents the details of the WIMS-D/4 Library Update Project (WLUP), initiated by the International Atomic Energy Agency (IAEA) with the goal of providing an updated nuclear data library to the users of WIMS-D/4. The WLUP was planned to be executed in several stages. In this report the calculations performed for the first stage are presented. A number of benchmarks for light water and heavy water lattices proposed by the IAEA have been analysed, and the results have been compared with the average of the experimental values, the IAEA reference values and the average of calculated results from different international laboratories. (author) 8 figs
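    Comparisons of calculated benchmark results against experimental averages of this kind are conventionally summarized as calculated-to-experimental (C/E) ratios, where a ratio near 1.0 indicates good agreement. A minimal sketch (the parameter names and values below are invented for illustration, not WLUP results):

    ```python
    def ce_ratios(calculated, experimental):
        """Return the C/E ratio for each lattice parameter present in both sets."""
        return {name: calculated[name] / experimental[name]
                for name in experimental if name in calculated}

    # Invented illustrative values for two lattice parameters:
    calc = {"k_eff": 1.0023, "rho_28": 1.42}
    expt = {"k_eff": 1.0000, "rho_28": 1.40}

    ratios = ce_ratios(calc, expt)
    # A C/E of 1.0023 for k_eff means the calculation overpredicts
    # criticality by 0.23%.
    ```
    
    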

  8. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  9. Beyond Benchmarking: Value-Adding Metrics

    Science.gov (United States)

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  10. Evaluating software verification systems: benchmarks and competitions

    NARCIS (Netherlands)

    Beyer, Dirk; Huisman, Marieke; Klebanov, Vladimir; Monahan, Rosemary

    2014-01-01

    This report documents the program and the outcomes of Dagstuhl Seminar 14171 “Evaluating Software Verification Systems: Benchmarks and Competitions”. The seminar brought together a large group of current and future competition organizers and participants, benchmark maintainers, as well as practition

  11. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticit

  12. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    Der vil i artiklen blive stillet skarpt på begrebet benchmarking ved at præsentere og diskutere forskellige facetter af det. Der vil blive redegjort for fire forskellige anvendelser af benchmarking for at vise begrebets bredde og væsentligheden af at klarlægge formålet med et benchmarkingprojekt...

  13. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    Boncz, P.A.; Fundulaki, I.; Gubichev, A.; Larriba-Pey, J.; Neumann, T.

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow one to quantify and compare the performance of

  14. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  15. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358. ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  16. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  17. Benchmark analyses for BN-600 MOX core with minor actinides

    International Nuclear Information System (INIS)

    In 1999 the IAEA initiated a Co-ordinated Research Project on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. Three benchmark models representing different modifications of the BN-600 reactor UOX core have been sequentially established and analyzed, including a hybrid UOX/MOX core, a full MOX core with weapons-grade plutonium and a MOX core with plutonium and minor actinides coming from spent LWR fuel. The paper describes studies for the latter MOX core model. The benchmark results include core criticality at the beginning and end of the equilibrium fuel cycle, kinetics parameters, spatial distributions of power and reactivity coefficients obtained by employing different computation tools and nuclear data. Sensitivity studies were performed to better understand in particular the influence of variations in different nuclear data libraries on the computed results. Transient simulations were done to investigate consequences of employing a few different sets of power and reactivity distributions on the system behavior at the initial phase of ULOF. The obtained results are analyzed in the paper. (author)

  18. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  19. ABM11 PDFs and the cross section benchmarks in NNLO

    International Nuclear Information System (INIS)

    We report an updated version of the ABKM09 NNLO PDF fit, which includes the most recent HERA collider data on the inclusive cross sections and an improved treatment of the heavy-quark contribution to deep-inelastic scattering using advantages of the running-mass definition for the heavy quarks. The ABM11 PDFs obtained from the updated fit are in a good agreement with the recent LHC data on the W- and Z-production within the experimental and PDF uncertainties. We also perform a determination of the strong coupling constant αs in a variant of the ABM11 fit insensitive to the influence of the higher twist terms and find the value of αs=0.1133(11) which is in good agreement with the nominal ABM11 one and our earlier determination.

  20. Benchmarking--Measuring and Comparing for Continuous Improvement.

    Science.gov (United States)

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  1. NASA in-house Commercially Developed Space Facility (CDSF) study report. Volume 1: Concept configuration definition

    Science.gov (United States)

    Deryder, L. J.; Chiger, H. D.; Deryder, D. D.; Detweiler, K. N.; Dupree, R. L.; Gillespie, V. P.; Hall, J. B.; Heck, M. L.; Herrick, D. C.; Katzberg, S. J.

    1989-01-01

    The results of a NASA in-house team effort to develop a concept definition for a Commercially Developed Space Facility (CDSF) are presented. Science mission utilization definition scenarios are documented, the conceptual configuration is defined and its system performance parameters quantified, benchmark operational scenarios are developed, space shuttle interface descriptions are provided, and development schedule activity was assessed with respect to the establishment of a proposed launch date.

  2. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    Science.gov (United States)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, the use of modeling tools is a valuable support, provided that the employed models have been previously validated. To achieve this, in particular for eddy current modeling, a validation methodology based on the use of specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, the feedback from experience on actual cases has shown that it was critical to define simpler generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This action is now ready for mutualization with similar international approaches.

  3. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  4. Comparative results for benchmark test problems in CANDU lattices

    International Nuclear Information System (INIS)

    The paper presents comparative results for the main types of lattice cell calculation performed by using the available versions of the WIMS and DRAGON codes: WIMSD5B and DRAGON 3.05. The main goal of the lattice cell calculations is to find the combination of input parameters whose results lie closest to the IAEA measurements. The comparison was made for IAEA benchmark problems applied to CANDU lattices. IAEA nuclear data libraries updated in the WLUP (WIMS Libraries Update Project) were used. The input data have been set from the test problem descriptions. The comparisons have been performed for the following heavy water cell configuration: square lattice, a 37-rod natural uranium (NU) fuel bundle with 0.72% enrichment in U-235, 28.58 cm lattice pitch, 0.5965 cm for the central rod radius and 0.6050 cm for the other fuel rods, Zircaloy-4 as cladding material and 1050 Al alloy for the pressure and calandria tubes, respectively. Lattice calculations were performed to determine which combination gave results closest to the IAEA measurements. (author)

  5. Toxicological benchmarks for screening contaminants of potential concern for effects on sediment-associated biota: 1994 Revision. Environmental Restoration Program

    International Nuclear Information System (INIS)

    Because a hazardous waste site may contain hundreds of chemicals, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a Screening Assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, more analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. This report briefly describes three categories of approaches to the development of sediment quality benchmarks. These approaches are based on analytical chemistry, toxicity test and field survey data. A fourth integrative approach incorporates all three types of data. The equilibrium partitioning approach is recommended for screening nonpolar organic contaminants of concern in sediments. For inorganics, the National Oceanic and Atmospheric Administration has developed benchmarks that may be used for screening. There are supplemental benchmarks from the province of Ontario, the state of Wisconsin, and US Environmental Protection Agency Region V. Pore water analysis is recommended for polar organic compounds; comparisons are then made against water quality benchmarks. This report is an update of a prior report. It contains revised ER-L and ER-M values, the five EPA proposed sediment quality criteria, and benchmarks calculated for several nonionic organic chemicals using equilibrium partitioning.
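The equilibrium partitioning calculation recommended above for nonpolar organics can be sketched in a few lines. The water quality criterion, Koc, and organic-carbon fraction in the test below are placeholder values, not figures from the report:

```python
def eqp_sediment_benchmark(wqc_mg_per_l, koc_l_per_kg_oc, foc):
    """Equilibrium-partitioning sediment benchmark (mg/kg dry weight):
    benchmark = WQC * Koc * foc, assuming pore water and sediment organic
    carbon are at chemical equilibrium."""
    return wqc_mg_per_l * koc_l_per_kg_oc * foc

def needs_further_assessment(measured_mg_per_kg, benchmark_mg_per_kg):
    """Screening decision: retain the chemical only if it exceeds the benchmark."""
    return measured_mg_per_kg > benchmark_mg_per_kg
```

The same comparison logic applies to the other benchmark families the report describes; only the derivation of the benchmark value changes.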

  6. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  7. The PBMR steady-state and coupled kinetics core thermal-hydraulics benchmark test problems

    International Nuclear Information System (INIS)

    In support of the pebble bed modular reactor (PBMR) Verification and Validation (V and V) effort, a set of benchmark test problems has been defined that focus on coupled core neutronics and thermal-hydraulic code-to-code comparisons. The motivation is not only to test the existing methods or codes available for high-temperature gas-cooled reactors (HTGRs), but also to serve as a basis for the development of more accurate and efficient tools to analyse the neutronics and thermal-hydraulic behaviour for design and safety evaluations in future. The reference design for the PBMR268 benchmark problem is derived from the 268 MW PBMR design with a dynamic central column containing only graphite spheres. Several simplifications were made to the design in order to limit the need for any further approximations when defining code models. During this process, care was taken to ensure that all the important characteristics of the reactor design were preserved. The definition and initial phases of the benchmark were performed under a cooperative research project between NRG, Penn State University (PSU) and PBMR (Pty) Ltd. However, participation has been extended to include Purdue University and INL. All contributions to the benchmark effort were made in-kind by the participating members including the participation in four benchmark meetings over a period of 3 years. Based on the work performed in this benchmark the PBMR 400 MW design with fixed central reflector has been accepted as an OECD benchmark problem and work has already started. In this paper, the benchmark definition and the different test cases are described in some detail. Phase 1 focuses on steady-state conditions with the purpose of quantifying differences between code systems, models and basic data. It also serves as the basis to establish a common starting condition for the transient cases. In Phase 2, the focus is on performing coupled kinetics/core thermal-hydraulics test problems with a common cross

  8. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (TPM) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight in the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches and to get an understanding of the current state of the art in the field identifying the limitations that are still inherent to the different approaches

  9. Benchmarking in healthcare using aggregated indicators

    DEFF Research Database (Denmark)

    Traberg, Andreas; Jacobsen, Peter

    2010-01-01

    databases, the model is constructed as a comprehensive hierarchy of indicators. By aggregating the outcome of each indicator, the model is able to benchmark healthcare providing units. By assessing performance deeper in the hierarchy, a more detailed view of performance is obtained. The validity test of the...... model is performed at a Danish non-profit hospital, where four radiological sites are benchmarked against each other. Because of the multifaceted perspective on performance, the model proved valuable both as a benchmarking tool and as an internal decision support system....
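A hierarchy of aggregated indicators of the kind described can be sketched as a weighted tree; the indicator names and weights below are invented for illustration and are not taken from the study:

```python
def aggregate(node, scores):
    """Recursively aggregate an indicator hierarchy: leaves (strings) look up
    their normalized score, internal nodes (lists of (weight, child) pairs)
    take the weighted mean of their children."""
    if isinstance(node, str):  # leaf indicator
        return scores[node]
    total_weight = sum(w for w, _ in node)
    return sum(w * aggregate(child, scores) for w, child in node) / total_weight

# Hypothetical two-level hierarchy for a radiological site.
hierarchy = [
    (0.5, [(1.0, "waiting_time"), (1.0, "report_turnaround")]),
    (0.5, [(1.0, "image_quality")]),
]
```

Benchmarking units then reduces to comparing their top-level aggregates, while drilling into subtrees gives the more detailed view of performance the abstract mentions.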

  10. LAPUR-K BWR stability benchmark

    International Nuclear Information System (INIS)

    This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillation observed at Boiling Water Reactor plants. Stability is part of Northern States Power Reload Safety Evaluation of the Monticello Plant
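The decay ratio that LAPUR-K computes can also be estimated directly from a sampled oscillation. The peak-ratio estimator below is a generic signal-processing approximation for illustration, not LAPUR-K's method:

```python
import math

def decay_ratio(signal):
    """Estimate the decay ratio as the average ratio of successive
    local-maximum amplitudes of a sampled oscillation. A decay ratio
    below 1.0 indicates a damped (stable) oscillation."""
    peaks = [y for x, y, z in zip(signal, signal[1:], signal[2:]) if x < y > z]
    ratios = [b / a for a, b in zip(peaks, peaks[1:])]
    return sum(ratios) / len(ratios)

# Synthetic damped oscillation exp(-0.1 t) * cos(2 pi t): successive peaks
# shrink by exp(-0.1) per period, so the true decay ratio is about 0.905.
dt = 0.001
sig = [math.exp(-0.1 * i * dt) * math.cos(2 * math.pi * i * dt)
       for i in range(10_000)]
```

For a regional (out-of-phase) mode, the same estimator would be applied to the difference of the two half-core signals rather than the core-average signal.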

  11. Overview and Discussion of the OECD/NRC Benchmark Based on NUPEC PWR Subchannel and Bundle Tests

    Directory of Open Access Journals (Sweden)

    M. Avramova

    2013-01-01

    The Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized the Organisation for Economic Co-operation and Development/US Nuclear Regulatory Commission (OECD/NRC) benchmark based on the Nuclear Power Engineering Corporation (NUPEC) pressurized water reactor (PWR) subchannel and bundle tests (PSBTs). The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency (NEA) of the OECD and the Japan Nuclear Energy Safety Organization (JNES), Japan. The OECD/NRC PSBT benchmark was organized to provide a test bed for assessing the capabilities of various thermal-hydraulic subchannel, system, and computational fluid dynamics (CFD) codes. The benchmark was designed to systematically assess and compare the participants' numerical models for prediction of detailed subchannel void distribution and departure from nucleate boiling (DNB), under steady-state and transient conditions, to full-scale experimental data. This paper provides an overview of the objectives of the benchmark along with a definition of the benchmark phases and exercises. The NUPEC PWR PSBT facility and the specific methods used in the void distribution measurements are discussed, followed by a summary of comparative analyses of submitted final results for the exercises of the two benchmark phases.

  12. Comparison of reserve estimates using different reserve definitions

    International Nuclear Information System (INIS)

    Factors that can impact an oil reserve volume estimate were described. The importance of setting standards for the definition of reserve categories was discussed. The continuing process of updating and revising reserve definitions is a reflection of the changing nature of the petroleum industry and also points up the difficulty of writing a comprehensive definition of reserves and guidelines that would serve the needs of industry without unnecessary complexity. Current working definitions of some common terms dealing with reserves estimating were included

  13. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
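The predictor-corrector feature mentioned above can be illustrated with a single-nuclide sketch. This is a generic scheme for illustration, not TINDER's actual implementation:

```python
import math

def depletion_step(n0, rate, dt):
    """One predictor-corrector depletion step for a single nuclide:
    predict the end-of-step density using the beginning-of-step reaction
    rate, then redo the step with the average of the rates evaluated at
    the beginning and at the predicted end of step."""
    r0 = rate(n0)                                # beginning-of-step rate (1/s)
    n_pred = n0 * math.exp(-r0 * dt)             # predictor: constant-rate decay
    r1 = rate(n_pred)                            # rate at the predicted state
    return n0 * math.exp(-0.5 * (r0 + r1) * dt)  # corrector: averaged rate
```

When the reaction rate genuinely depends on the composition (through the flux), the corrector recovers much of the error the plain predictor makes over a long step; for a constant rate the step is exact.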

  14. DOE Commercial Building Benchmark Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  15. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
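A concrete example of an analytical benchmark in this spirit is the uncollided flux from an isotropic point source, one of the classic closed-form transport results a numerical code should reproduce (a minimal sketch, with arbitrary source and cross-section values in the usage below):

```python
import math

def uncollided_point_flux(source, sigma_t, r):
    """Uncollided scalar flux at distance r from an isotropic point source
    of strength S in an infinite homogeneous medium with total cross
    section sigma_t:  phi(r) = S * exp(-sigma_t * r) / (4 * pi * r**2)."""
    return source * math.exp(-sigma_t * r) / (4.0 * math.pi * r ** 2)
```

Comparing a discrete-ordinates or Monte Carlo result against this expression at a few radii is exactly the kind of algorithmic verification step (3) above refers to.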

  16. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and...

  17. Benchmarking Optimization Software with Performance Profiles

    OpenAIRE

    Dolan, Elizabeth D.; Moré, Jorge J.

    2001-01-01

    We propose performance profiles-distribution functions for a performance metric-as a tool for benchmarking and comparing optimization software. We show that performance profiles combine the best features of other tools for performance evaluation.
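    The construction described above can be sketched in a few lines. The code below is a minimal reading of the performance-profile definition (per-problem performance ratios, then the cumulative fraction of problems solved within a factor tau of the best solver); the timing data is invented purely for illustration, and this is not the authors' reference implementation.

    ```python
    # Minimal performance-profile sketch in the Dolan-More style;
    # the timing data below is invented purely for illustration.
    import numpy as np

    def performance_profile(times, taus):
        """times: (n_problems, n_solvers) run-times, np.inf marking failures.
        Returns rho[i, s]: the fraction of problems that solver s solves
        within a factor taus[i] of the best solver on each problem."""
        times = np.asarray(times, dtype=float)
        best = times.min(axis=1, keepdims=True)   # best time per problem
        ratios = times / best                     # performance ratios r_{p,s}
        return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

    times = [[1.0, 2.0],        # solver 0 twice as fast
             [3.0, 3.0],        # tie
             [2.0, np.inf]]     # solver 1 fails on this problem
    rho = performance_profile(times, taus=[1.0, 2.0])
    # rho[0] is the fraction of problems on which each solver is (tied-)fastest
    ```

    Reading off the first row of `rho` gives the probability that a solver wins (or ties) outright, while the profile's behaviour for large tau shows overall robustness.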

  18. Old Curses, New Approaches? Fiscal Benchmarks for Oil-Producing Countries in Sub-Saharan Africa

    OpenAIRE

    Jan-Peter Olters

    2007-01-01

    Buoyant oil prices have allowed oil-producing countries in sub-Saharan Africa (SSA OPCs) to increase oil exports and fiscal revenues, providing them with the resources necessary to address pressing social needs. To preclude another boom-bust cycle, this paper advocates the definition of a fiscal benchmark anchored in sustainability considerations, following Leigh-Olters (2006). The difference between current primary deficits and those that could be maintained after oil reserves are exhausted repres...

  19. A Modification of the Halpern-Pearl Definition of Causality

    OpenAIRE

    Halpern, Joseph Y.

    2015-01-01

    The original Halpern-Pearl definition of causality [Halpern and Pearl, 2001] was updated in the journal version of the paper [Halpern and Pearl, 2005] to deal with some problems pointed out by Hopkins and Pearl [2003]. Here the definition is modified yet again, in a way that (a) leads to a simpler definition, (b) handles the problems pointed out by Hopkins and Pearl, and many others, (c) gives reasonable answers (that agree with those of the original and updated definition) in the standard pr...

  20. A benchmark for comparison of dental radiography analysis algorithms.

    Science.gov (United States)

    Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia

    2016-07-01

    Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made to develop computerized dental X-ray image analysis systems for clinical use. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). PMID:26974042

  1. Benchmarking carbon emissions performance in supply chains

    OpenAIRE

    Acquaye, Adolf; Genovese, Andrea; Barrett, John W.; Koh, Lenny

    2014-01-01

    Purpose – The paper aims to develop a benchmarking framework to address issues such as supply chain complexity and visibility, geographical differences and non-standardized data, ensuring that the entire supply chain environmental impact (in terms of carbon) and resource use for all tiers, including domestic and import flows, are evaluated. Benchmarking has become an important issue in supply chain management practice. However, challenges such as supply chain complexity and visibility, geogra...

  2. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty

  3. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure for measuring the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties

  4. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  5. Benchmark Two-Good Utility Functions

    OpenAIRE

    de Jaegher, K.

    2007-01-01

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility function...

  6. Bundesländer-Benchmarking 2002

    OpenAIRE

    Blancke, Susanne; Hedrich, Horst; Schmid, Josef

    2002-01-01

    The Bundesländer Benchmarking 2002 is based on an analysis of selected labour-market and economic indicators in the German federal states. Three benchmarkings were carried out using the radar-chart method: one that considers only labour-market indicators; one that considers only economic indicators; and one that examines mixed labour-market and economic indicators. The states were compared with one another in cross-section at two points in time –...

  7. Benchmarking Deep Reinforcement Learning for Continuous Control

    OpenAIRE

    Duan, Yan; Chen, Xi; Houthooft, Rein; Schulman, John; Abbeel, Pieter

    2016-01-01

    Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suit...

  8. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in its report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  9. Features and technology of enterprise internal benchmarking

    OpenAIRE

    A. V. Dubodelova; Yurynets, O. V.

    2013-01-01

    The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. A sequence of stages for the internal benchmarking technology is formed, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. Business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units by using standard rese...

  10. Overview of CSEWG shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Maerker, R.E.

    1979-01-01

    The fundamental philosophy behind the choice of CSEWG shielding benchmarks is that the accuracy of a certain range of cross-section data be adequately tested. The benchmarks, therefore, consist of measurements and calculations of these measurements. Calculations for which there are no measurements provide little information on the adequacy of the data, although they can perhaps indicate the sensitivity of results to variations in data.

  11. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes......-related achievement. We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  12. Under Pressure Benchmark for DDBMS Availability

    OpenAIRE

    Fior, Alessandro Gustavo; Meira, Jorge Augusto; Cunha De Almeida, Eduardo; Coelho, Ricardo Gonçalves; Didonet Del Fabro, Marcos; Le Traon, Yves

    2013-01-01

    The availability of Distributed Database Management Systems (DDBMS) is related to the probability of being up and running at a given point in time, and managing failures. One well-known and widely used mechanism to ensure availability is replication, which includes performance impact on maintaining data replicas across the DDBMS's machine nodes. Benchmarking can be used to measure such impact. In this article, we present a benchmark that evaluates the performance of DDBMS, considering availab...

  13. DWEB: A Data Warehouse Engineering Benchmark

    OpenAIRE

    Darmont, Jérôme; Bentayeb, Fadila; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users comparing the performances of different systems, or help system engineers testing the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they ...

  14. MPI Benchmarking Revisited: Experimental Design and Reproducibility

    OpenAIRE

    Hunold, Sascha; Carpen-Amarie, Alexandra

    2015-01-01

    The Message Passing Interface (MPI) is the prevalent programming model used on today's supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome th...

  15. KARMA 1.1 benchmark calculations for the numerical benchmark problems and the critical experiments

    International Nuclear Information System (INIS)

    The transport lattice code KARMA 1.1 has been developed at KAERI for the reactor physics analysis of pressurized water reactors. This program includes a multi-group library processed from ENDF/B-VI R8 and also utilizes macroscopic cross sections for the benchmark problems. Benchmark calculations were performed for the C5G7 and KAERI benchmark problems given with seven-group cross sections, for various fuels loaded in the operating pressurized water reactors in South Korea, and for critical experiments including CE, B&W and KRITZ. The benchmark results show that KARMA 1.1 works reasonably well. (author)

  16. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  17. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by the T25FW benchmark ranges of performance. PMID:24174581
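    Read operationally, the benchmarks above partition walk times into three ranges. The helper below is our own shorthand for that trichotomy: the cut points (6 and 8 seconds) come from the abstract, while the function name and category labels are hypothetical.

    ```python
    # Trichotomy of Timed 25-Foot Walk (T25FW) times using the cut points
    # reported in the abstract; labels are our shorthand, not the authors'.
    def t25fw_category(seconds: float) -> str:
        if seconds < 6.0:
            return "<6 s"
        elif seconds < 8.0:
            return "6-7.99 s"
        return ">=8 s"

    print(t25fw_category(7.2))  # → 6-7.99 s
    ```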

  18. "FULL-CORE" VVER-440 calculation benchmark

    International Nuclear Information System (INIS)

    Because of the difficulties with experimental validation of the power distribution predicted by macro-codes at the pin-by-pin level, we decided to prepare a calculation benchmark named "FULL-CORE" VVER-440. This is a two-dimensional (2D) calculation benchmark based on the VVER-440 reactor core cold-state geometry, taking into account the geometry of the explicit radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in fuel assemblies predicted by the macro-codes that are used for neutron-physics calculations, especially for VVER-440 reactors. The proposal of this benchmark was presented at the 21st Symposium of AER in 2011. The reference solution has been calculated by the MCNP code using the Monte Carlo method and the results have been published in the AER community. The results of the reference calculation were presented at the 22nd Symposium of AER in 2012. In this paper we compare the available macro-code results for this calculation benchmark.

  19. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  20. Methods report on the development of the 2013 revision and update of the EAACI/GA²LEN/EDF/WAO guideline for the definition, classification, diagnosis, and management of urticaria

    DEFF Research Database (Denmark)

    Zuberbier, T; Aberer, W; Asero, R;

    2014-01-01

    This methods report describes the process of guideline development in detail. It is the result of a systematic literature review using the 'Grading of Recommendations Assessment, Development and Evaluation' (GRADE) methodology and a structured consensus conference held on 28 and 29 November 2012...... (WAO) with the participation of delegates of 21 national and international societies. This guideline covers the definition and classification of urticaria, taking into account the recent progress in identifying its causes, eliciting factors and pathomechanisms. In addition, it outlines evidence...

  1. Bringing Definitions into High Definition

    Science.gov (United States)

    Mason, John

    2010-01-01

    Why do definitions play such a central role in mathematics? It may seem obvious that precision about the terms one uses is necessary in order to use those terms reasonably (while reasoning). Definitions are chosen so as to be definite about the terms one uses, but also to make both the statement of, and the reasoning to justify, theorems as…

  2. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  3. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
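    One simple way to fold several normalized metrics into a single benchmarking score is a weighted average. The sketch below illustrates that idea only; the metric names and weights are placeholders of our own, not the combination the paper actually defines.

    ```python
    # Hypothetical single-score combination of normalized quality and speed
    # metrics; metric names and weights are illustrative placeholders only.
    def combined_score(scores: dict, weights: dict) -> float:
        """Weighted average of per-metric scores, each normalized to 0..1."""
        total_weight = sum(weights.values())
        return sum(weights[m] * scores[m] for m in weights) / total_weight

    scores  = {"sharpness": 0.8, "visual_noise": 0.6, "shot_to_shot_time": 0.9}
    weights = {"sharpness": 2.0, "visual_noise": 1.0, "shot_to_shot_time": 1.0}
    print(round(combined_score(scores, weights), 3))  # → 0.775
    ```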

  4. Definition and kinematics of the northern of the Puerto Rico-Virgin Islands block and the Lesser Antilles forearc based on an updated and improved GPS velocity field and revised block models

    Science.gov (United States)

    Mattioli, G. S.; Jansma, P. E.; Stafford-Glenn, M.; Calais, E.

    2011-12-01

    The presence of small tectonic blocks in the Greater Antilles, for example the Puerto Rico-Virgin Islands block (PRVI), which may be translating, rotating, and possibly internally deforming, has been proposed and in some cases well documented by several workers. In addition, the existence of a Lesser Antilles forearc has been proposed based on interplate earthquake slip vectors (Lopez et al. 2006). Manaker et al. (2008) used sparse GPS and earthquake slip data from the northeastern Caribbean to construct a DEFNODE block and fault model to constrain interseismic fault coupling among the microplates in the northeastern Caribbean. They concluded that the Enriquillo fault in Haiti could produce a Mw7.2 earthquake if the entire accumulated elastic strain were released in one event. On January 12, 2010, the strain was released in a Mw7.0 earthquake that left Port-au-Prince in rubble. The interseismic GPS velocity field has been updated for Hispaniola (Calais et al., 2010); in addition, new data have been collected in the northern Lesser Antilles (NLA) in 2009 as well as throughout the PRVI block in 2007 and 2011, and the existing GPS time series have been updated and transformed into ITRF05 (IGS05). GPS data from the NLA are consistent with an NLA forearc sliver that moves differently from the Caribbean and North American plates, as originally proposed by Lopez et al. (2006). The forearc does not, however, continue as a single tectonic entity across the Anegada Passage as previously suggested. Here we report revised DEFNODE models using both the original geometry and constraints of Manaker et al. (2008) with an updated GPS data set as well as new models that explicitly include a forearc block. The models may be used to explicitly define the rotation parameters of the block as well as the coupling along block-bounding faults. The original model geometry (without a forearc sliver) yields a higher reduced chi-squared (2.57 vs. 2.01) when the additional GPS velocities from the NLA are used to condition the

  5. Moving Objects Updating

    Science.gov (United States)

    Chen, Jidong; Meng, Xiaofeng

    In moving objects applications, large numbers of locations can be sampled by sensors or GPS periodically, then sent from moving clients to the server and stored in a database. Therefore, continuously maintaining in a database the current locations of moving objects by using a tracking technique becomes very important. The key issue is minimizing the number of updates, while providing precise locations for query results. In this chapter, we will introduce some underlying location update methods. Then, we describe two location update strategies in detail, which can improve the performance. One is the proactive location update strategy, which predicts the movement of moving objects to lower the update frequency; the other is the group location update strategy, which groups the objects to minimize the total number of objects reporting their locations.
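    A common instance of such a tracking technique is dead reckoning: the client reports a new location only when its true position drifts more than a threshold from the position the server would predict by linear extrapolation. The sketch below is our own illustration of that idea under invented names and parameters, not the specific proactive or group strategies described in the chapter.

    ```python
    # Minimal dead-reckoning location update policy (our illustration only):
    # the client sends an update only when the true position deviates more
    # than `threshold` from the server's linearly extrapolated prediction.
    import math

    class Tracker:
        def __init__(self, threshold: float):
            self.threshold = threshold
            self.last_report = None   # (t, x, y, vx, vy) last sent to server
            self.updates = 0

        def predicted(self, t):
            t0, x, y, vx, vy = self.last_report
            return (x + vx * (t - t0), y + vy * (t - t0))

        def observe(self, t, x, y, vx, vy):
            """Called for every GPS sample; sends an update only when needed."""
            if self.last_report is None:
                send = True
            else:
                px, py = self.predicted(t)
                send = math.hypot(x - px, y - py) > self.threshold
            if send:
                self.last_report = (t, x, y, vx, vy)
                self.updates += 1

    tr = Tracker(threshold=5.0)
    for t in range(10):                       # straight-line motion: no re-reports
        tr.observe(t, 1.0 * t, 0.0, 1.0, 0.0)
    tr.observe(10, 10.0, 50.0, 1.0, 0.0)      # sharp deviation forces an update
    ```

    With the straight-line samples the server's prediction stays exact, so only the initial report and the post-deviation report are sent; this is the sense in which such policies minimize the number of updates while bounding the server's location error.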

  6. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  7. AN OPTIMAL SELF-SCALING STRATEGY TO THE MODIFIED SYMMETRIC RANK ONE UPDATING

    Institute of Scientific and Technical Information of China (English)

    Yang Yueting; Xu Chengxian; Gao Yuelin

    2005-01-01

    In this paper, the optimal self-scaling strategy for the modified symmetric rank one (HSR1) update, which satisfies the modified quasi-Newton equation, is derived to improve the condition number of the updates. The scaling factors are derived by minimizing an estimate of the upper bound on the condition number of the updating matrix. Theoretical analysis, numerical experiments and comparisons show that introducing the optimal scaling factor into the modified symmetric rank one update preserves the positive definiteness of the updates, and greatly improves the stability and numerical performance of the modified symmetric rank one algorithm.
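For orientation, the unscaled symmetric rank-one update has a simple closed form. The sketch below applies an optional scaling factor gamma before the update; gamma = 1 recovers plain SR1, and the paper's optimal condition-number-minimizing factor is not reproduced here:

```python
import numpy as np

def scaled_sr1_update(B, s, y, gamma=1.0):
    """Symmetric rank-one (SR1) quasi-Newton update of an approximate
    Hessian B, optionally scaled by gamma before the update.

    gamma = 1 gives the plain SR1 update; the paper derives an optimal
    gamma by minimising a bound on the condition number (not shown).
    The updated matrix satisfies the secant equation B_new @ s = y.
    """
    B = gamma * B
    r = y - B @ s
    denom = r @ s
    # Standard safeguard: skip the update when the denominator is tiny,
    # which would make the rank-one correction numerically unstable.
    if abs(denom) <= 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

B = np.eye(3)
s = np.array([1.0, 2.0, 0.5])
y = np.array([2.0, 1.0, 1.0])
B_new = scaled_sr1_update(B, s, y)
print(np.allclose(B_new @ s, y))  # True: secant condition holds
```

The secant check at the end is the property any quasi-Newton update must satisfy, independent of the scaling strategy.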

  8. State of the art of dynamic software updating in Java

    DEFF Research Database (Denmark)

    Gregersen, Allan Raundahl; Rasmussen, Michael; Jørgensen, Bo Nørregaard

    2014-01-01

    The dynamic software updating system JRebel from Zeroturnaround has proven to be an efficient means of improving developer productivity, as it allows developers to change the code of their applications while developing and testing them. Hence, developers no longer have to go through the tedious cycle ... most comprehensive dynamic updating systems developed for Java to date. Together, these systems provide comprehensive support for changing class definitions of live objects, including adding, removing and moving fields, methods, classes and interfaces anywhere in the inheritance hierarchy. We then ... Gosh! with JRebel. The successful integration of these two systems will set a new standard for dynamic software updating in Java.

  9. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  10. Higher education information technology management benchmarking in Europe

    OpenAIRE

    Juult, Janne

    2013-01-01

    Objectives of the Study: This study aims to facilitate the rapprochement of the European higher education benchmarking projects towards a unified European benchmarking project. A total of four higher education IT benchmarking projects are analysed by comparing their categorisation of benchmarking indicators and their data manipulation processes. Four selected benchmarking projects are compared in this fashion for the first time. The focus is especially on the Finnish Bencheit project's point o...

  11. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually abstaining researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the...

  12. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes for benchmark selection are investigated. The result of this analysis is the formulation of the criteria of benchmark selection for flexible multibody formalisms. Based on them the initial set of suitable benchmarks is described. Besides that the evaluation measures are revised and extended.

  13. Behavior therapy: a clinical update.

    Science.gov (United States)

    Black, J L; Bruce, B K

    1989-11-01

    Through refinements from research and judicious combination with other therapies, behavior therapy has become increasingly relevant in the treatment of psychiatric disorders. After outlining the four models that serve as a framework for behavior therapy (classical conditioning, operant conditioning, social learning theory, and cognitive behavior modification), the authors provide an update for clinicians on developments in the behavioral treatment of anxiety disorders, sexual disorders, depression, and schizophrenia. Most advances have been made in the treatment of anxiety disorders, including definition of variables for successful use of exposure to phobic stimuli in the treatment of phobic disorders and the use of flooding for post-traumatic stress disorder. By becoming better acquainted with cognitive and behavioral therapies, clinicians may be able to offer their patients more effective treatment options. PMID:2680882

  14. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    Science.gov (United States)

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

    This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's Solid Waste Management in the World's Cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap. PMID:25458855

  15. Gas-cooled fast breeder reactor shielding benchmark calculation

    Energy Technology Data Exchange (ETDEWEB)

    Rouse, C.A.; Mathews, D.R.; Koch, P.K.

    1977-01-01

    This report summarizes the results of a shielding benchmark calculation performed by General Atomic (GA) and Oak Ridge National Laboratory (ORNL). The problem analyzed was a neutron-coupled gamma ray transport calculation of the core blanket shield of the 300-MW(e) gas-cooled fast breeder reactor (GCFR). Comparison of the initial GA and ORNL results indicated good agreement for fast fluxes (E greater than 0.9 MeV and E greater than 0.086 MeV) but poor agreement for epithermal and thermal neutron fluxes. Examination of the results revealed that a deficiency in the GA fine-group cross section preparation code was responsible for the differences in the GA and ORNL iron cross sections. Modification of the GA cross sections to include self-shielding was accomplished, and the updated GA benchmark calculation performed with the self-shielded iron cross sections was in excellent agreement with the ORNL results for fast neutron fluxes with E greater than 0.9 MeV and E greater than 0.086 MeV and in good agreement for epithermal and thermal fluxes. The agreement of the gamma heating rates also improved significantly. Thus, it was concluded that the good agreement of the GA and ORNL neutron-coupled gamma ray transport calculation indicates that (1) the methods and cross sections used by both laboratories were compatible and consistent and (2) the use of 24 neutron energy groups and 15 gamma energy groups by GA was adequate compared with the use of 51 neutron energy groups and 25 gamma energy groups by ORNL.

  16. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking. The sequence of stages of internal benchmarking technology is formulated; it is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. The business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking. According to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation toward the best achievement by comparing performance and working methods with a standard. It covers the processes of research, organization of production and distribution, and management and marketing methods, applied to reference objects to identify innovative practices and implement them in a particular business. Benchmarking development at domestic enterprises requires analysis of its theoretical bases and practical experience. Choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use and develop a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishment of minimum acceptable levels of efficiency of processes and activities available at the enterprise; and identification of current problems and areas that need improvement without involvement of foreign experience

  17. Benchmarking of the construct of dimensionless correlations regarding batch bubble columns with suspended solids: Performance of the Pressure Transform Approach

    CERN Document Server

    Hristov, Jordan

    2010-01-01

    Benchmark of dimensionless data correlations pertinent to batch bubble columns (BC) with suspended solids has been performed by the pressure transform approach (PTA). The main efforts have addressed the correct definition of dimensionless groups, referring to the fact that solids dynamics and bubble dynamics have different velocity and length scales. The correct definition of the initial set of variables in classical dimensional analysis depends mainly on the experience of the investigator, while the pressure transform approach avoids errors at this initial stage. PTA addresses the physics of the phenomena occurring in complex systems involving many phases and allows straightforward definitions of dimensionless numbers.
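The classical dimensional analysis that PTA is compared against can be mechanised: a set of exponents forms a dimensionless group exactly when the dimension matrix maps it to zero (the Buckingham pi theorem). A small sketch with bubble-column-like variables (the variable set is illustrative, not taken from the paper):

```python
import numpy as np

# Rows: base dimensions M, L, T; columns: U, d, rho, mu, g
# (rise velocity, bubble diameter, liquid density, viscosity, gravity).
dim_matrix = np.array([
    [0, 0, 1, 1, 0],     # mass exponents
    [1, 1, -3, -1, 1],   # length exponents
    [-1, 0, 0, -1, -2],  # time exponents
])

# Buckingham pi: there are n_variables - rank independent groups.
n_groups = dim_matrix.shape[1] - np.linalg.matrix_rank(dim_matrix)
print(n_groups)  # 2

# An exponent vector is dimensionless iff the matrix maps it to zero.
reynolds = np.array([1, 1, 1, -1, 0])  # rho*U*d/mu
froude = np.array([2, -1, 0, 0, -1])   # U**2/(g*d)
print(dim_matrix @ reynolds, dim_matrix @ froude)  # [0 0 0] [0 0 0]
```

The point the abstract makes is that the outcome of this procedure hinges on which variables enter the matrix in the first place, which is where PTA claims an advantage.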

  18. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  19. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  20. Updated research nosology for HIV-associated neurocognitive disorders

    OpenAIRE

    Antinori, A; Arendt, G.; Becker, J.T.; Brew, B J; Byrd, D.A.; Cherner, M; Clifford, D B; Cinque, P.; Epstein, L.G.; Goodkin, K.; Gisslen, M; Grant, I.; Heaton, R K.; Joseph, J.; Marder, K.

    2007-01-01

    In 1991, the AIDS Task Force of the American Academy of Neurology published nomenclature and research case definitions to guide the diagnosis of neurologic manifestations of HIV-1 infection. Now, 16 years later, the National Institute of Mental Health and the National Institute of Neurological Diseases and Stroke have charged a working group to critically review the adequacy and utility of these definitional criteria and to identify aspects that require updating. This report represents a majo...

  1. Benchmark analyses for BN-600 MOX core with minor actinides

    International Nuclear Information System (INIS)

    Full text: In 1999 the IAEA initiated a Coordinated Research Project (CRP) on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects'. The general objective of the CRP is to validate, verify and improve methodologies and computer codes used for calculation of reactivity coefficients in fast reactors, aiming at enhancing the utilization of plutonium and minor actinides (MAs). For this purpose, three benchmark models representing different modifications of the BN-600 reactor UOX core have been sequentially established and analyzed, the benchmark specifications being provided by IPPE. The first benchmark model is a hybrid UOX/MOX core, with UOX fuel in the inner core part and MOX fuel in the outer one, the fresh MOX fuel containing depleted uranium and weapons grade plutonium. The second model is a full MOX core, a similar MOX fuel composition being assumed; a sodium plenum is introduced above the core to improve the core safety. The third model is analyzed in the paper. The model represents a similar full MOX core, but with plutonium and MAs from 60 GWd/t LWR spent fuel after 50 years cooling (thus assuming a so-called homogeneous recycling of MAs in a fast system). This option is the most challenging one (compared to those analyzed earlier in the CRP) as concerns the reactor safety, since an increased content of MAs, in particular americium, and higher (than Pu239) isotopes of Pu leads to less favourable safety parameters. On the other hand, existing uncertainties in nuclear data for MAs and higher Pu isotopes may lead to relatively high uncertainties in the computation results for the considered model. The benchmark results include core criticality at the beginning and end of the equilibrium fuel cycle, kinetics parameters, spatial distributions of power and reactivity coefficients provided by CRP participants and obtained by employing different computation models and nuclear data. Sensitivity studies were performed at

  2. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W., II

    1993-01-01

    concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  3. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
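The precision-recall trade-off mentioned above reduces, for a single benchmark, to comparing predicted ortholog pairs against a reference set. A toy sketch with made-up gene pairs:

```python
def precision_recall(predicted, reference):
    """Precision and recall of predicted ortholog pairs against a
    reference set; pairs are compared order-independently."""
    predicted = {frozenset(p) for p in predicted}
    reference = {frozenset(p) for p in reference}
    tp = len(predicted & reference)  # true-positive pairs
    return tp / len(predicted), tp / len(reference)

# Made-up gene pairs for illustration only.
pred = [("geneA1", "geneB1"), ("geneA2", "geneB2"), ("geneA3", "geneB9")]
ref = [("geneA1", "geneB1"), ("geneA2", "geneB2"), ("geneA4", "geneB4")]
print(precision_recall(pred, ref))  # precision 2/3, recall 2/3
```

An aggressive inference method raises recall at the cost of precision, and vice versa, which is why the benchmark service reports both rather than a single score.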

  4. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  5. Benchmarks for multicomponent diffusion and electrochemical migration

    DEFF Research Database (Denmark)

    Rasouli, Pejman; Steefel, Carl I.; Mayer, K. Ulrich; Rolle, Massimo

    2015-01-01

    ... considered in solute transport problems, electromigration can strongly affect mass transport processes. The number of reactive transport models that consider electromigration has been growing in recent years, but a direct model intercomparison that specifically focuses on the role of electromigration has not been published to date. This contribution provides a set of three benchmark problems that demonstrate the effect of electric coupling during multicomponent diffusion and electrochemical migration and at the same time facilitate the intercomparison of solutions from existing reactive transport codes. The first benchmark focuses on the 1D transient diffusion of HNO3 (pH = 4) in a NaCl solution into a fixed concentration reservoir, also containing NaCl, but with lower HNO3 concentrations (pH = 6). The second benchmark describes the 1D steady-state migration of the sodium isotope 22Na triggered by...
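The electric coupling these benchmarks exercise enters through the Nernst-Planck flux. Under a zero-net-current constraint the electric-potential gradient can be eliminated, making every species flux depend on all concentration gradients. A 1D sketch under ideal-solution assumptions (the species and values are illustrative, not the benchmarks' specifications):

```python
import numpy as np

def nernst_planck_fluxes(D, z, c, dcdx):
    """Species fluxes from diffusion plus electromigration in 1D.

    The dimensionless potential gradient chi = (F/RT) * dphi/dx is
    chosen so that the net charge flux vanishes (null-current
    condition): sum_i z_i * J_i = 0.
    """
    chi = -np.sum(z * D * dcdx) / np.sum(z**2 * D * c)
    return -D * (dcdx + z * c * chi)

# Illustrative case: Na+, Cl-, H+ with unequal diffusivities.
D = np.array([1.33, 2.03, 9.31])    # units of 1e-9 m^2/s
z = np.array([1.0, -1.0, 1.0])      # charge numbers
c = np.array([1.0, 1.0, 1e-4])      # mol/m^3
dcdx = np.array([0.1, 0.1, -0.05])  # mol/m^4
J = nernst_planck_fluxes(D, z, c, dcdx)
print(abs(np.dot(z, J)) < 1e-10)  # True: no net charge flux
```

Because chi couples the species, a fast-diffusing ion such as H+ drags or retards the others, which is the effect the benchmarks are designed to expose.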

  6. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
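Energy benchmarks of this kind normalise consumption to population equivalents (kWh per PE and year) before comparison. A toy sketch; the plant figures and target value are placeholders, not the German benchmark figures:

```python
def specific_energy(annual_kwh, population_equivalents):
    """Specific energy consumption in kWh per population equivalent
    and year, the usual normalisation for WWTP energy benchmarking."""
    return annual_kwh / population_equivalents

# Hypothetical plants compared with a placeholder target value.
TARGET_KWH_PER_PE = 35.0  # placeholder, not an official benchmark
plants = {"Plant A": (1.2e6, 40_000), "Plant B": (3.0e6, 55_000)}
for name, (kwh, pe) in plants.items():
    sec = specific_energy(kwh, pe)
    status = "within target" if sec <= TARGET_KWH_PER_PE else "review"
    print(f"{name}: {sec:.1f} kWh/(PE*a) -> {status}")
```

Normalising by load rather than flow is what makes plants of very different sizes and process layouts comparable at all.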

  7. AGENT code - neutron transport benchmark examples

    International Nuclear Information System (INIS)

    The paper focuses on description of representative benchmark problems to demonstrate the versatility and accuracy of the AGENT (Arbitrary Geometry Neutron Transport) code. AGENT couples the method of characteristics and R-functions allowing true modeling of complex geometries. AGENT is optimized for robustness, accuracy, and computational efficiency for 2-D assembly configurations. The robustness of R-function based geometry generator is achieved through the hierarchical union of the simple primitives into more complex shapes. The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through true geometries. The computational efficiency is maintained through a set of acceleration techniques introduced in all important calculation levels. The selected assembly benchmark problems discussed in this paper are: the complex hexagonal modular high-temperature gas-cooled reactor, the Purdue University reactor and the well known C5G7 benchmark model. (author)

  8. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 'core' configurations including different assembly types (17 x 17 pins, 'uranium', 'absorber' or 'MOX' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin by pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (Pij, Sn, Monte Carlo). This report presents an analysis and intercomparisons of all the results received

  9. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  10. Shielding Integral Benchmark Archive and Database (SINBAD)

    International Nuclear Information System (INIS)

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  11. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different ... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point ... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  12. Benchmark field study of deep neutron penetration

    Science.gov (United States)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  13. Computational benchmark for deep penetration in iron

    International Nuclear Information System (INIS)

    A benchmark for calculation of neutron transport through iron is now available, based upon a rigorous Monte Carlo treatment of ENDF/B-IV and ENDF/B-V cross sections. The currents, flux, and dose (from monoenergetic 2, 14, and 40 MeV sources) have been tabulated at various distances through the slab using a standard energy group structure. This tabulation is available in a Los Alamos Scientific Laboratory report. The benchmark is simple to model and should be useful for verifying the adequacy of one-dimensional transport codes and multigroup libraries for iron. This benchmark also provides useful insights regarding neutron penetration through iron and displays differences in fluxes calculated with the ENDF/B-IV and ENDF/B-V databases.

  14. SP2Bench: A SPARQL Performance Benchmark

    CERN Document Server

    Schmidt, Michael; Lausen, Georg; Pinkel, Christoph

    2008-01-01

    Recently, the SPARQL query language for RDF has reached the W3C recommendation status. In response to this emerging standard, the database community is currently exploring efficient storage techniques for RDF data and evaluation strategies for SPARQL queries. A meaningful analysis and comparison of these approaches necessitates a comprehensive and universal benchmark platform. To this end, we have developed SP$^2$Bench, a publicly available, language-specific SPARQL performance benchmark. SP$^2$Bench is settled in the DBLP scenario and comprises both a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. As a proof of concept, we apply SP$^2$Bench to existing engines and discuss ...

  15. Automatisches Software-Update

    OpenAIRE

    Clauß, Matthias; Fischer, Günther

    2003-01-01

    A new service is presented for user-managed software updates of PC systems running Linux Red Hat 7.3. The service is based on the YARU procedure (Yum based Automatic RPM Update), part of the Admin technology for Linux computers deployed at the URZ.

  16. Livestock Update. January 2015

    OpenAIRE

    Greiner, Scott P.; McCann, Mark A.; Smith, Jason

    2015-01-01

    This issue of Livestock Update includes articles on dates to remember, herd management, selection for marbling in a cowherd, 2014 Culpeper Senior BCIA Bull Sale results, 2015 Stocker Cattle Summit to focus on forage management for optimal animal gain, sheep update, and the 48th Virginia Pork Industry Conference.

  17. Livestock Update. September 2014

    OpenAIRE

    Greiner, Scott Patrick; McCann, Mark A.; Saville, Joi; Neil, Scott J.; Harmon, Deidre D.; Callan, Peter; Estienne, Mark Joseph, 1960-; Wiegert, Jeffrey; Clark, Sherrie

    2014-01-01

    This LIVESTOCK UPDATE contains timely subject matter on beef cattle, horses, poultry, sheep, swine, and related junior work. This issue includes: Dates to Remember; September Herd Management Advisor; Time for Fall Nutrition Tune-Up; BVD's Role in Shipping Fever Pneumonia; Sheep Field Day & Ram Lamb Sale; 2014 Virginia Tech Sheep Management Basics Workshop; Sheep Update; and Swine Production in Virginia.

  18. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. Set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the roles of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water use benchmarks is investigated by making changes to user behaviour and technology. The impacts of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands are investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
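    The record above describes a band rating driven by per-capita water use. A minimal sketch of how such a rating could work is shown below; the band letters and the litres-per-person-per-day cut-offs are hypothetical assumptions, not values from the paper.

```python
# Hypothetical band-rating sketch: the thresholds below are illustrative
# assumptions, not the benchmarks defined in the cited paper.

def water_benchmark_band(daily_use_litres: float, occupants: int) -> str:
    """Assign a band rating from per-capita daily water use (L/person/day)."""
    per_capita = daily_use_litres / occupants
    # Assumed cut-offs in litres/person/day, best band first
    bands = [(80, "A"), (110, "B"), (140, "C"), (170, "D")]
    for limit, band in bands:
        if per_capita <= limit:
            return band
    return "E"  # worst band: above all cut-offs

print(water_benchmark_band(300, 3))  # 100 L/person/day falls in band "B"
```

    Under this scheme, any measure that lowers consumption, whether a behavioural change or a technology such as RWH or GW reuse, moves the household toward a better band, which matches the paper's stated design goal.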

  19. Reactor group constants and benchmark test

    International Nuclear Information System (INIS)

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called benchmark testing. In nuclear calculations, the diffusion and transport codes use the group constant library, which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  20. Benchmarking of European power network companies

    International Nuclear Information System (INIS)

    A European benchmark has been conducted among 63 grid companies to obtain insight into the degree of efficiency of these companies and to identify the main cost drivers. The benchmark shows that, based on the full distribution cost, performance differs greatly from company to company. The cost of the worst performer is five times higher than that of the best performer. Dutch grid operators turn out to work relatively efficiently compared to other European companies. Consumers benefit from the consequently lower energy bills.

  1. Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD (Shielding integral benchmark archive and database) is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity. It has been designed to include data from nuclear reactor shielding, fusion blanket and accelerator shielding experiments. (authors)

  2. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  3. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  4. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks......' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...

  5. Benchmark testing of 233U evaluations

    International Nuclear Information System (INIS)

    In this paper we investigate the adequacy of available 233U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised 233U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of keff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed

  6. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  7. Effects of Existing Evaluated Nuclear Data Files on Nuclear Parameters of the BFS-62-3A Assembly Benchmark Model

    OpenAIRE

    Mikhail

    2002-01-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters, as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specifically for these purposes, and the experimental values were reduced to it. Benchmark characteristics for this assembly are (1) criticality; (2) central fiss...

  8. A new neuro-FDS definition for indirect adaptive control of unknown nonlinear systems using a method of parameter hopping.

    Science.gov (United States)

    Boutalis, Yiannis; Theodoridis, Dimitris C; Christodoulou, Manolis A

    2009-04-01

    The indirect adaptive regulation of unknown nonlinear dynamical systems is considered in this paper. The method is based on a new neuro-fuzzy dynamical system (neuro-FDS) definition, which uses the concept of adaptive fuzzy systems (AFSs) operating in conjunction with high-order neural network functions (HONNFs). Since the plant is considered unknown, we first propose its approximation by a special form of an FDS, and then the fuzzy rules are approximated by appropriate HONNFs. Thus, the identification scheme leads up to a recurrent high-order neural network (RHONN), which however takes into account the fuzzy output partitions of the initial FDS. The proposed scheme does not require a priori experts' information on the number and type of input variable membership functions, making it less vulnerable to initial design assumptions. Once the system is identified around an operation point, it is regulated to zero adaptively. Weight updating laws for the involved HONNFs are provided, which guarantee that both the identification error and the system states reach zero exponentially fast, while keeping all signals in the closed loop bounded. The existence of the control signal is always assured by introducing a novel method of parameter hopping, which is incorporated in the weight updating law. Simulations illustrate the potency of the method, and comparisons with conventional approaches on benchmark systems are given. Also, the applicability of the method is tested on a direct current (dc) motor system, where it is shown that by following the proposed procedure one can obtain asymptotic regulation. PMID:19273046

  9. Status of the international criticality safety benchmark evaluation project (ICSBEP)

    International Nuclear Information System (INIS)

    Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)

  10. Bumper 3 Update for IADC Protection Manual

    Science.gov (United States)

    Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim

    2016-01-01

    The Bumper code has been the standard in use by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied BUMPER to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Unit (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing the failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it against other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored for specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations, as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect only those editions of the code that contain the particular response subroutines which are modified.

  11. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68. ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  12. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
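    Two of the system-level metrics named above, air change rate and fan power per unit airflow (W/cfm), follow from standard HVAC definitions. The sketch below illustrates them; the airflow, volume, and power figures are invented for illustration and are not from the LBNL or Fabs21 datasets.

```python
# Illustrative computation of two cleanroom energy-efficiency metrics.
# The formulas are standard HVAC definitions; the example inputs are made up.

def air_change_rate(supply_airflow_cfm: float, room_volume_ft3: float) -> float:
    """Air changes per hour (ACH): hourly supply volume divided by room volume."""
    return supply_airflow_cfm * 60.0 / room_volume_ft3

def fan_power_intensity(fan_power_w: float, supply_airflow_cfm: float) -> float:
    """W/cfm: fan electrical power per unit of delivered airflow."""
    return fan_power_w / supply_airflow_cfm

# Hypothetical cleanroom: 50,000 cfm supplied to a 100,000 ft^3 room by 40 kW of fans
ach = air_change_rate(supply_airflow_cfm=50_000, room_volume_ft3=100_000)
wpc = fan_power_intensity(fan_power_w=40_000, supply_airflow_cfm=50_000)
print(ach, wpc)  # 30.0 ACH, 0.8 W/cfm
```

    Tracking such normalized metrics over time, or comparing them across facilities with the same cleanliness class, is exactly the benchmarking use the article describes.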

  13. Benchmarking 2010: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  14. Benchmarking 2011: Trends in Education Philanthropy

    Science.gov (United States)

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  15. What Is the Impact of Subject Benchmarking?

    Science.gov (United States)

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  16. A protein–DNA docking benchmark

    NARCIS (Netherlands)

    van Dijk, M.; Bonvin, A.M.J.J.

    2008-01-01

    We present a protein–DNA docking benchmark containing 47 unbound–unbound test cases of which 13 are classified as easy, 22 as intermediate and 12 as difficult cases. The latter shows considerable structural rearrangement upon complex formation. DNA-specific modifications such as flipped out bases an

  17. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8, and reactor blocks 1 and 2 of the Loviisa NPP, is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations.

  18. Comparative benchmarks of full QCD algorithms

    International Nuclear Information System (INIS)

    We report performance benchmarks for several algorithms that we have used to simulate the Schroedinger functional with two flavors of dynamical quarks. They include hybrid and polynomial hybrid Monte Carlo with preconditioning. An appendix describes a method to deal with autocorrelations for nonlinear functions of primary observables as they are met here due to reweighting. (orig.)

  19. First CSNI numerical benchmark problem: comparison report

    International Nuclear Information System (INIS)

    In order to be able to make valid statements about a model's ability to describe a certain physical situation, it is indispensable that the numerical errors are much smaller than the modelling errors; otherwise, numerical errors could compensate for or exaggerate model errors in an uncontrollable way. Therefore, knowledge of how the numerical errors depend on discretization parameters (e.g. the size of the spatial and temporal mesh) is required. In recognition of this need, numerical benchmark problems have been introduced. In the area of transient two-phase flow, numerical benchmarks are rather new. In June 1978, the CSNI Working Group on Emergency Core Cooling of Water Reactors proposed to OECD/CSNI to sponsor a First CSNI Numerical Benchmark exercise. By the end of October 1979, results of the computation had been received from 10 organisations in 10 different countries. Based on these contributions, a preliminary comparison report was prepared and distributed to the members of the CSNI Working Group on Emergency Core Cooling of Water Reactors and to the contributors to the benchmark exercise. Comments on the preliminary comparison report by some contributors have subsequently been received. They have been considered in writing this final comparison report.

  20. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    Angles Rojas, R.; Pham, M.D.; Boncz, P.A.

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics in industrial-st

  1. Benchmarking in radiation protection in pharmaceutical industries

    International Nuclear Information System (INIS)

    A benchmarking exercise on radiation protection was carried out in seven pharmaceutical companies in Germany and Switzerland. As a result, relevant parameters describing the performance and costs of radiation protection were acquired and compiled, and subsequently depicted in figures in order to make these data comparable. (orig.)

  2. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, which include the Pool Critical Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)
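    The calculated-to-measured (C/M) check described in this record is simple arithmetic: each predicted quantity is divided by the corresponding measurement, and the spread of the ratios around 1.0 indicates accuracy. A minimal sketch follows; the flux values are invented for illustration, and the 0.9-1.1 acceptance band is an assumed example, not a criterion from Regulatory Guide 1.190.

```python
# Minimal sketch of a calculated-to-measured (C/M) ratio check.
# Flux values and the acceptance band are illustrative assumptions.

def cm_ratios(calculated, measured):
    """Element-wise C/M ratios for paired calculated and measured values."""
    return [c / m for c, m in zip(calculated, measured)]

calc = [1.02e10, 9.5e9, 3.1e9]   # hypothetical calculated fluxes (n/cm^2/s)
meas = [1.00e10, 1.0e10, 3.0e9]  # hypothetical measured fluxes (n/cm^2/s)

ratios = cm_ratios(calc, meas)
in_band = all(0.9 <= r <= 1.1 for r in ratios)  # assumed +/-10% band
print([round(r, 3) for r in ratios], in_band)
```

    In the benchmarking reported above, all C/M ratios fell between 0.93 and 1.04, i.e. within a few percent of unity across the PCA, VENUS-3, H. B. Robinson, and plant flux-wire comparisons.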

  3. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited for the healthcare sector. The model incorporates posterior operational indicators, and evaluates upon aggregation of performance. The model is tested upon seven cases from Japan and Denmark. Japanese...

  4. RADSAT Benchmarks for Prompt Gamma Neutron Activation Analysis Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Burns, Kimberly A.; Gesh, Christopher J.

    2011-07-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. High-resolution gamma-ray spectrometers are used in these applications to measure the spectrum of the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used simulation tool for this type of problem, but computational times can be prohibitively long. This work explores the use of multi-group deterministic methods for the simulation of coupled neutron-photon problems. The main purpose of this work is to benchmark several problems modeled with RADSAT and MCNP to experimental data. Additionally, the cross section libraries for RADSAT are updated to include ENDF/B-VII cross sections. Preliminary findings show promising results when compared to MCNP and experimental data, but also areas where additional inquiry and testing are needed. The potential benefits and shortcomings of the multi-group-based approach are discussed in terms of accuracy and computational efficiency.

  5. Description of WIMS Library Update Project (WLUP)

    International Nuclear Information System (INIS)

    WIMS-D is one of the few reactor lattice codes that are in the public domain and therefore available on non-commercial terms for research and power reactor calculations. The main weakness of the WIMS-D package is its multi-group constants library, which is based on very old data. The relatively good performance of WIMS-D is attributed to a series of empirical adjustments to the multi-group data. However, the adjustments are not always justified by more accurate and recent experimental measurements. In view of the recently available new, or revised, evaluated nuclear data files, it was felt that the performance of WIMS-D could be improved by updating its library. The WIMS-D Library Update Project (WLUP) was initiated in the early 1990s and finished in 2001. The International Atomic Energy Agency (IAEA) supported its coordination, but the project itself consisted of voluntary contributions from a large number of participants. In due course, several benchmarks for testing the library were identified and analyzed, the WIMSR module of the NJOY code system was upgraded, a detailed parametric study was performed to investigate the effects of various data-processing input options on integral results, and the data-processing methods for the main reactor materials were optimized. The final product, available on CD-ROM from NDS-IAEA, includes 69- and 172-group WIMSD libraries prepared from the selected evaluated data files, an IAEA-TECDOC with detailed documentation, processing inputs, benchmark inputs, and the system of auxiliary codes developed under the project. (author)

  6. Challenging More Updates: Towards Anonymous Re-publication of Fully Dynamic Datasets

    CERN Document Server

    Li, Feng

    2008-01-01

    Most existing anonymization work has been done on static datasets, which have no updates and need only one-time publication. Recent studies consider anonymizing dynamic datasets with external updates: the datasets are updated with record insertions and/or deletions. This paper addresses a new problem: anonymous re-publication of datasets with internal updates, where the attribute values of each record are dynamically updated. This is an important and challenging problem, because in practice the attribute values of records are updated frequently and existing methods are unable to deal with such a situation. We initiate a formal study of anonymous re-publication of dynamic datasets with internal updates, and show the invalidation of existing methods. We introduce a theoretical definition and analysis of dynamic datasets, and present a general privacy disclosure framework that is applicable to all anonymous re-publication problems. We propose a new counterfeited generalization principle called m-Distinct to effectively anon...

  7. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental effort to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as...... it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This...... paper addresses these issues, and describes how effects are closely connected to the perception of benchmarking, the intended users of the system and the application of the benchmarking results. The fundamental basis of this paper is taken from the development of benchmarking in the Danish construction...

  8. Neuroretinitis -- definition

    Science.gov (United States)

    MedlinePlus encyclopedia entry: //medlineplus.gov/ency/article/007624.htm

  9. System solution of evaluation of the brand management’s efficiency in the competitive environment: a methodology of benchmarking

    Directory of Open Access Journals (Sweden)

    Kendiukhov Oleksandr Volodymyrovych

    2014-12-01

    Full Text Available The aim of the article. The article is based on a generalization of the results of leading scientists in the sphere of brand management. It is argued that using benchmarking to assess brand management effectiveness is the evaluation system most appropriate to current market conditions. The purpose of this article is to develop a methodology for assessing the effectiveness of brand management through benchmarking. The results of the analysis. Brand management is among the most important functions in entrepreneurship: it should provide for the sustainable, competitive functioning and development of the brand. Brand management involves special studies of trademark efficiency and the development of strategies and programs for brand equity. The analysis and evaluation of brand management are associated with such scientific and practical tasks as improving the efficiency of the economic activities of national enterprises and forming an effective organizational and economic mechanism of brand management. Benchmarking is a systematic activity based on finding, evaluating and learning from the best examples, regardless of business sector or geographic location. The main concept of benchmarking is comparison with both competing enterprises and the leading firms in other industries. Benchmarking concepts and methods can reduce costs, increase revenue, and optimize the company's structural dynamics and choice of strategy. For analyzing brand management efficiency on the basis of benchmarking, we propose the following stages: (1) definition of the benchmarking object; (2) choice of the brand standard; (3) search for information; (4) analysis; (5) implementation. The benchmarking approach leads to a significant change in the branding decision-making procedure. Traditionally, decisions on the management of trademarks were made on the basis of market research and managers' intuition regarding the effectiveness of brand promotion measures

  10. Energy Economic Data Base (EEDB) Program: Phase VI update (1983) report

    International Nuclear Information System (INIS)

    This update of the Energy Economic Data Base is the latest in a series of technical and cost studies prepared by United Engineers and Constructors Inc. during the last 18 years. The data base was developed during 1978 and has been updated annually since then. The purpose of the updates has been to reflect the impact of changing regulations and technology on the costs of electric power generating stations. This Phase VI (Sixth) Update report documents the results of the 1983 EEDB Program update effort. The latest effort was a comprehensive update of the technical and capital cost information for the pressurized water reactor, boiling water reactor, and liquid metal fast breeder reactor nuclear power plant data models and for the 800 MWe and 500 MWe high sulfur coal-fired power plant data models. The update provided representative costs for these nuclear and coal-fired power plants for the 1980s. In addition, the updated nuclear power plant data models for the 1980s were modified to provide anticipated costs for nuclear power plants for the 1990s. Consequently, the Phase VI Update has continued to provide important benchmark information through which technical and capital cost trends may be identified that have occurred since January 1, 1978.

  11. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  12. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  13. Destination benchmarking: facilities, customer satisfaction and levels of tourist expenditure

    OpenAIRE

    Metin KOZAK

    2000-01-01

    An extensive review of past benchmarking literature showed that there have been a substantial number of both conceptual and empirical attempts to formulate a benchmarking approach, particularly in the manufacturing industry. However, there has been limited investigation and application of benchmarking in tourism and particularly in tourist destinations. The aim of this research is to further develop the concept of benchmarking for application within tourist destinations and to evaluate its...

  14. On the Extrapolation with the Denton Proportional Benchmarking Method

    OpenAIRE

    Marco Marini; Tommaso Di Fonzo

    2012-01-01

    Statistical offices often have recourse to benchmarking methods for compiling quarterly national accounts (QNA). Benchmarking methods employ quarterly indicator series (i) to distribute annual, more reliable series of national accounts and (ii) to extrapolate the most recent quarters not yet covered by annual benchmarks. The Proportional First Differences (PFD) benchmarking method proposed by Denton (1971) is a widely used solution for distribution, but in extrapolation it may suffer when the...
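    The Denton PFD method described in this abstract can be sketched as an equality-constrained least-squares problem: minimize the squared changes in the benchmark-to-indicator ratio subject to the quarterly values summing to the annual benchmarks. The function below is an illustrative solution via the KKT linear system, not Denton's published algorithm; the function name and solution strategy are choices made here for exposition.

```python
import numpy as np

def denton_pfd(indicator, annual, periods_per_year=4):
    """Distribute annual benchmarks over sub-annual periods with the
    Denton proportional first-differences (PFD) criterion, solved as an
    equality-constrained least-squares (KKT) system."""
    i = np.asarray(indicator, dtype=float)
    a = np.asarray(annual, dtype=float)
    T, Y = len(i), len(a)
    assert T == Y * periods_per_year
    # Objective: minimize sum_t ((x_t / i_t) - (x_{t-1} / i_{t-1}))^2
    # i.e. x' Q x with Q = C' D' D C, C = diag(1/i), D = difference operator
    C = np.diag(1.0 / i)
    D = np.diff(np.eye(T), axis=0)           # (T-1) x T first differences
    Q = C.T @ D.T @ D @ C
    # Constraint: periods within each year sum to the annual value, B x = a
    B = np.kron(np.eye(Y), np.ones((1, periods_per_year)))
    # KKT system for min x'Qx subject to Bx = a
    K = np.block([[2 * Q, B.T], [B, np.zeros((Y, Y))]])
    rhs = np.concatenate([np.zeros(T), a])
    return np.linalg.solve(K, rhs)[:T]
```

    If the indicator already sums exactly to the annual benchmarks, the method returns the indicator unchanged, since a constant ratio makes the objective zero.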

  15. Benchmarking for major producers of limestone in the Czech Republic

    OpenAIRE

    Vaněk, Michal; Mikoláš, Milan; Bora, Petr

    2013-01-01

    The validity of information available to managers influences the quality of the decision-making processes controlled by those managers. Benchmarking is a method which can yield quality information. The importance of benchmarking is strengthened by the fact that many authors consider benchmarking to be an integral part of strategic management. In commercial practice, benchmarking data and conclusions usually become commercial secrets for internal use only. The wider professional public lacks t...

  16. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, and inefficient portfolio versions. The covariance between the of...

  17. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...
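    The weighted-average construction of the overall benchmark can be illustrated with a toy computation. The aspect names, scores and weights below are hypothetical, chosen only for illustration; they are not taken from the paper.

```python
# Hypothetical CSR aspect scores (0-10 scale) and stakeholder weights.
aspects = {"economic": 7.5, "social": 6.0, "environmental": 8.0}
weights = {"economic": 0.4, "social": 0.35, "environmental": 0.25}

def overall_benchmark(scores, weights):
    """Overall benchmark as a weighted average of aspect scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(scores[k] * weights[k] for k in scores)

print(round(overall_benchmark(aspects, weights), 3))  # → 7.1
```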

  18. Towards a Benchmark Suite for Modelica Compilers: Large Models

    OpenAIRE

    Frenkel, Jens; Schubert, Christian; Kunze, Günter; Fritzson, Peter; Sjölund, Martin; Pop, Adrian

    2011-01-01

    The paper presents a contribution to a Modelica benchmark suite. Basic ideas for a tool independent benchmark suite based on Python scripting along with models for testing the performance of Modelica compilers regarding large systems of equation are given. The automation of running the benchmark suite is demonstrated followed by a selection of benchmark results to determine the current limits of Modelica tools and how they scale for an increasing number of equations.

  19. National Pediatric Program Update

    International Nuclear Information System (INIS)

    The book of the National Pediatric Program Update, issued by the Argentina Society of Pediatrics, describes important issues, including: effective treatment of addictions (drugs); defects of the neural tube; and the use of radiation imaging in diagnosis.

  20. Livestock Update. August 2013

    OpenAIRE

    Greiner, Scott Patrick; McCann, Mark A.; Neil, Scott J.; Harmon, Deidre D.; Whittier, W. Dee

    2013-01-01

    Includes articles on August herd management, phosphorus supplementation of beef cattle, 2013 across-breed EPD table, Applied Reproduction in Beef Cattle event, sheep breeding season tips, and a sheep update.

  1. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    Science.gov (United States)

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop. PMID:22727888

  2. Inference and update

    OpenAIRE

    Velázquez-Quesada, F.R.

    2008-01-01

    We look at two fundamental logical processes, often intertwined in planning and problem solving: inference and update. Inference is an internal process with which we draw new conclusions, uncovering what is implicit in the information we already have. Update, on the other hand, is produced by external communication, usually in the form of announcements and in general in the form of observations, giving us information that might not have been available (even implicitly) to us before. Both proc...

  3. Livestock Update. October 2014

    OpenAIRE

    Greiner, Scott Patrick; McCann, Mark A.; Groover, Gordon Eugene, 1956-; Callan, Peter; Wiegert, Jeffrey; Estienne, Mark Joseph, 1960-

    2014-01-01

    This LIVESTOCK UPDATE contains timely subject matter on beef cattle, horses, poultry, sheep, swine, and related junior work. This issue includes: Dates to Remember; November Herd Management Advisor; Evaluate Nutrition Needs and Plan for Winter; 2014 Culpeper Senior Bull Sale; Tax and Financial Management in Profitable Years; Sheep Management Tips- Late Fall; Sheep Update; and Methods for Improving Pre-Weaning Survival Rates of Piglets.

  4. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    Science.gov (United States)

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  5. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  6. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to...

  7. 42 CFR 422.258 - Calculation of benchmarks.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  8. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
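    The triad kernel at the heart of STREAM (a = b + scalar*c) can be approximated in a few lines of NumPy to obtain a rough main-memory bandwidth figure. This is an illustrative sketch under stated assumptions, not the official STREAM benchmark: the byte count assumes two reads and one write per element and understates real traffic because `a` is written and re-read between the two ufunc calls.

```python
import time
import numpy as np

def triad_bandwidth_gbps(n=20_000_000, scalar=3.0, repeats=5):
    """STREAM-style triad a = b + scalar*c; returns an estimated GB/s.
    Counts 3 * n * 8 bytes per iteration (read b, read c, write a)."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)  # a = scalar * c, no temporary array
        np.add(a, b, out=a)            # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)
    return (3 * n * 8) / best / 1e9
```

    To reproduce the local-versus-remote NUMA effect the abstract discusses, one could run such a kernel pinned to one processor package while placing the arrays on local or remote memory, for example with `numactl --cpunodebind` and `--membind` on Linux.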

  9. Non-judgemental Dynamic Fuel Cycle Benchmarking

    CERN Document Server

    Scopatz, Anthony Michael

    2015-01-01

    This paper presents a new fuel cycle benchmarking analysis methodology by coupling Gaussian process regression, a popular technique in Machine Learning, to dynamic time warping, a mechanism widely used in speech recognition. Together they generate figures-of-merit that are applicable to any time series metric that a benchmark may study. The figures-of-merit account for uncertainty in the metric itself, utilize information across the whole time domain, and do not require that the simulators use a common time grid. Here, a distance measure is defined that can be used to compare the performance of each simulator for a given metric. Additionally, a contribution measure is derived from the distance measure that can be used to rank order the importance of fuel cycle metrics. Lastly, this paper warns against using standard signal processing techniques for error reduction. This is because it is found that error reduction is better handled by the Gaussian process regression itself.
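    The dynamic time warping half of this methodology is easy to sketch. The function below is a textbook DTW distance, not the authors' implementation; in the setting of the abstract it would be applied to per-metric time series (for example, Gaussian-process-smoothed means) from two simulators, which need not share a common time grid.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D time series,
    computed by dynamic programming over all monotone alignments."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three admissible alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    Because warping can absorb repeated samples, two series that trace the same curve on different time grids can still have distance zero, which is exactly the property that makes a common time grid unnecessary.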

  10. MHTGR-350 Benchmark Analysis by MCS Code

    International Nuclear Information System (INIS)

    This benchmark contains various problems in three phases, which require results for neutronics, thermal-fluids solutions, transient calculations, and depletion calculations. The Phase-I exercise-1 problem was solved with the MCS Monte Carlo (MC) code developed at UNIST. The global parameters and power distribution were compared with the results of the McCARD MC code developed by SNU and a finite element method (FEM) based diffusion code, CAPP, developed by KAERI. The MHTGR-350 benchmark Phase-I exercise 1 was solved with MCS. The results of MCS are compared with those of McCARD and CAPP. The results of the MCS code showed good agreement with those of the McCARD code, while they showed considerable disagreement with those of the CAPP code, which can be attributed to the fact that CAPP is a diffusion code while the others are MC transport codes

  11. Argonne Code Center: benchmark problem book

    International Nuclear Information System (INIS)

    This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical ''black'' rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification

  12. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  13. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    [Report CASL-X-2015-1014-000, Executive Summary] The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  14. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 due to the wide range of system variables. (author)

  15. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Mihaela Ungureanu

    2011-09-01

    Full Text Available The concept of benchmarking requires a continuous process of performance improvement in different organizations in order to obtain superiority over those perceived as market leaders. This superiority can always be questioned; its relativity originates in the rapid evolution of the economic environment. The approach supports innovation in relation to traditional methods, and it is based on the will of those managers who want to determine limits and seek excellence. The end of the twentieth century was the period of broad expression of benchmarking in various areas and of its transformation from a simple quantitative analysis tool into a source of information on the performance and quality of goods and services.

  16. Shielding benchmark test for JENDL-3T

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Akira (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1988-03-01

    The results of the shielding benchmark tests for JENDL-3T (the testing-stage version of JENDL-3), performed by the JNDC Shielding Sub-working Group, are summarized. In particular, problems with the total cross sections in the MeV range for O, Na and Fe, revealed by the analysis of the Broomstick experiment, are discussed in detail. For the deep penetration profiles of Fe, a very important feature in shielding calculations, the ASPIS benchmark experiment is analysed and discussed. The study confirms the overall applicability of the JENDL-3T data to shielding calculations. At the same time, some remaining problems are also pointed out. By reflecting this feedback information, the applicability of JENDL-3, the forthcoming official version, will be greatly improved.

  17. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  18. FENDL-2 and associated benchmark calculations

    International Nuclear Information System (INIS)

    The present Report contains the Summary of the IAEA Advisory Group Meeting on ''The FENDL-2 and Associated Benchmark Calculations'' convened on 18-22 November 1991, at the IAEA Headquarters in Vienna, Austria, by the IAEA Nuclear Data Section. The Advisory Group Meeting Conclusions and Recommendations and the Report on the Strategy for the Future Development of the FENDL and on Future Work towards establishing FENDL-2 are also included in this Summary Report. (author). 1 ref., 4 tabs

  19. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  20. MESURE Tool to benchmark Java Card platforms

    OpenAIRE

    Pierre Paradinas; Julien Cordry; Samia Bouzefrane

    2009-01-01

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behavior of these platforms is becoming crucial. To meet this need, we present in this paper a novel benchmarking framework to test and evaluate the performance of Java Card platforms. The MESURE tool is the first framework whose accuracy and effectiveness are independent of the particular Java Card platform tested and the CAD used.

  1. A Simplified HTTR Diffusion Theory Benchmark

    International Nuclear Information System (INIS)

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is two-fold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green's function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that in the testing against actual HTTR data of a full sequence of codes that would include HEXPEDITE, in the apportioning of inevitable discrepancies between experiment and models, the portion of error attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

  2. Decentralized Reliable Control for a Building Benchmark

    Czech Academy of Sciences Publication Activity Database

    Bakule, Lubomír; Papík, Martin; Rehák, Branislav

    Barcelona : CIMNE, 2014 - (Rodellar, J.; Güemes, A.; Pozo, F.), s. 2242-2253 ISBN 978-84-942844-5-8. [World Conference on Structural Control and Monitoring /6./ - 6WCSCM. Barcelona (ES), 15.07.2014-17.07.2014] R&D Projects: GA ČR GA13-02149S Keywords : decentralized reliable control * structural control * building benchmark Subject RIV: BC - Control Systems Theory

  3. WIDER FACE: A Face Detection Benchmark

    OpenAIRE

    Yang, Shuo; Luo, Ping; Loy, Chen Change; Tang, Xiaoou

    2015-01-01

    Face detection is one of the most studied topics in the computer vision community. Much of this progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categori...

  4. Benchmarking the Governance of Tertiary Education Systems

    OpenAIRE

    World Bank

    2012-01-01

    This paper presents a benchmarking approach for analyzing and comparing governance in tertiary education as a critical determinant of system and institutional performance. This methodology is tested through a pilot survey in East Asia and Central America. The paper is structured in the following way: (i) the first part highlights the link between good governance practices and the performance of tertiary institutions (ii) the second part introduces the analytical approach underpinning the gove...

  5. Benchmark calculations for MTR type cores

    International Nuclear Information System (INIS)

    The benchmark neutronics design study of MTR cores has been performed for various fuel enrichments. The reactivities and fluxes for the fresh core have been evaluated. The reference calculations have been performed for a 10 MW(th) reactor, but the method is applicable to other power levels. As the results are in good agreement with those obtained at other establishments, the method of analysis used in this report for a fresh core can be relied upon with a fair amount of confidence. (authors)

  6. POLCA-T Neutron Kinetics Model Benchmarking

    OpenAIRE

    Kotchoubey, Jurij

    2015-01-01

    The demand for computational tools that are capable of reliably predicting the behavior of a nuclear reactor core in a variety of static and dynamic conditions inevitably requires a proper qualification of these tools for the intended purposes. One of the qualification methods is verification of the code in question, whereby the correct formulation of the applied model as well as its flawless implementation in the code are scrutinized. The present work is concerned with benchmarking as a ...

  7. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  8. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  9. Direct Simulation of a Solidification Benchmark Experiment

    OpenAIRE

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-01-01

    International audience A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metall...

  10. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with NJOY97.45 code in the ACE format for the continuous-energy Monte Carlo Code MCNP4C. The Keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using MCNP4C code for benchmark assembly, and the comparisons with experimental results are given. (author)

  11. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge;

    2015-01-01

    the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the...... symmetric NACA airfoil family. (C) 2015 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license...

  12. Benchmarking research of steel companies in Europe

    OpenAIRE

    M. Antošová; A. Csikósová; K. Čulková; Seňová, A.

    2013-01-01

    In present time steelworks are at a stage of permanent changes that are marked with still stronger competition pressure. Therefore managers must solve questions of how to decrease production costs, how to overcome competition and how to survive in the world market. Still more attention should be paid to the modern managerial methods of market research and comparison with competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to com...

  13. Benchmarking regulatory network reconstruction with GRENDEL

    OpenAIRE

    Haynes, Brian C; Brent, Michael R.

    2009-01-01

    Motivation: Over the past decade, the prospect of inferring networks of gene regulation from high-throughput experimental data has received a great deal of attention. In contrast to the massive effort that has gone into automated deconvolution of biological networks, relatively little effort has been invested in benchmarking the proposed algorithms. The rate at which new network inference methods are being proposed far outpaces our ability to objectively evaluate and compare them. This is lar...

  14. Benchmarking dynamic Bayesian network structure learning algorithms

    OpenAIRE

    Trabelsi, Ghada; Leray, Philippe; Ben Ayed, Mounir; Alimi, Adel

    2012-01-01

    Dynamic Bayesian Networks (DBNs) are probabilistic graphical models dedicated to modeling multivariate time series. Two-time slice BNs (2-TBNs) are the most current type of these models. Static BN structure learning is a well-studied domain. Many approaches have been proposed and the quality of these algorithms has been studied over a range of di erent standard networks and methods of evaluation. To the best of our knowledge, all studies about DBN structure learning use their own benchmarks a...

  15. BENCHMARKING – BETWEEN TRADITIONAL & MODERN BUSINESS ENVIRONMENT

    OpenAIRE

    Mihaela Ungureanu

    2011-01-01

    The concept of benchmarking requires a continuous process of performance improvement of different organizations in order to obtain superiority towards those perceived as market leader’s competitors. This superiority can always be questioned, its relativity originating in the quick growing evolution of the economic environment. The approach supports innovation in relation with traditional methods and it is based on the will of those managers who want to determine limits and seek excellence. Th...

  16. Benchmarking spatial joins à la carte

    OpenAIRE

    Günther, Oliver; Oria, Vincent; Picouet, Philippe; Saglio, Jean-Marc; Scholl, Michel

    1997-01-01

    Spatial joins are join operations that involve spatial data types and operators. Spatial access methods are often used to speed up the computation of spatial joins. This paper addresses the issue of benchmarking spatial join operations. For this purpose, we first present a WWW-based tool to produce sets of rectangles. Experimentators can use a standard Web browser to specify the number of rectangles, as well as the statistical distributions of their sizes, shapes, and locations. Second, using...
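A minimal sketch of the kind of setup the abstract describes: generating random rectangles with controllable size parameters and running the naive nested-loop spatial join as a baseline. The generator parameters and the intersection predicate here are illustrative assumptions, not the tool's actual interface.

```python
import random

def make_rectangles(n, max_size=0.1, seed=0):
    """Generate n axis-aligned rectangles (xmin, ymin, xmax, ymax) in the
    unit square. A real benchmark generator would expose the statistical
    distributions of sizes, shapes, and locations as parameters."""
    rng = random.Random(seed)
    rects = []
    for _ in range(n):
        w, h = rng.uniform(0, max_size), rng.uniform(0, max_size)
        x, y = rng.uniform(0, 1 - w), rng.uniform(0, 1 - h)
        rects.append((x, y, x + w, y + h))
    return rects

def intersects(a, b):
    """True if two closed rectangles overlap (a common spatial-join predicate)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def nested_loop_join(r_set, s_set):
    """Naive O(|R|*|S|) spatial join; the baseline that spatial access
    methods (R-trees, grids, ...) are benchmarked against."""
    return [(i, j) for i, r in enumerate(r_set)
                   for j, s in enumerate(s_set) if intersects(r, s)]

R = make_rectangles(200, seed=1)
S = make_rectangles(200, seed=2)
pairs = nested_loop_join(R, S)
```

Varying `n`, `max_size`, and the distributions while timing the join is the essence of such a benchmark.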

  17. SINBAD: Shielding integral benchmark archive and database

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W. [and others

    1996-04-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity.

  18. Benchmarking and energy management schemes in SMEs

    Energy Technology Data Exchange (ETDEWEB)

    Huenges Wajer, Boudewijn [SenterNovem (Netherlands); Helgerud, Hans Even [New Energy Performance AS (Norway); Lackner, Petra [Austrian Energy Agency (Austria)

    2007-07-01

    Many companies are reluctant to focus on energy management or to invest in energy efficiency measures. Nevertheless, there are many good examples proving that the right approach to implementing energy efficiency can very well be combined with the business priorities of most companies. SMEs in particular can benefit from a facilitated European approach because they normally lack the resources and time to invest in energy efficiency. In the EU-supported pilot project BESS, 60 SMEs from 11 European countries in the food and drink industries successfully tested a package of interactive instruments offering such a facilitated approach. A number of pilot companies show a profit increase of 3% up to 10%. The package includes a user-friendly, web-based e-learning scheme for implementing energy management as well as a benchmarking module for company-specific comparison of energy performance indicators. Moreover, it has several practical and tested tools to support the cycle of continuous improvement of energy efficiency in the company, such as checklists, sector-specific measure lists, and templates for auditing and energy conservation plans. An important feature, and also a key trigger for companies, is the possibility for SMEs to benchmark their energy situation anonymously against others in the same sector. SMEs can participate in a unique web-based benchmarking system to benchmark interactively in a way that fully guarantees the confidentiality and safety of company data. Furthermore, the available data can contribute to a bottom-up approach supporting the objectives of (national) monitoring and targeting, thereby also contributing to the EU Energy Efficiency and Energy Services Directive. A follow-up project to expand the number of participating SMEs across various sectors is currently being developed.

  19. Spas Performances Benchmarking and Operation Efficiency*

    OpenAIRE

    Akarapong Untong; Mingsarn Kaosa-ard

    2014-01-01

    This paper aims to benchmark the performance and operational efficiency of spas by using key performance indicators. Data Envelopment Analysis (DEA) method with SBM super-efficiency model has applied to evaluate the operational efficiency of 21 spas which consist of 7 day spas and 14 hotel and resort spas. The result of the study found that spa with the best performance can use existing resources efficiently and encourage therapists to have the best productivity in service. However, the effic...
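Full SBM super-efficiency DEA solves a linear program per decision-making unit; as a much simpler stand-in, the sketch below scores units by a single output/input ratio normalized to the best performer. It illustrates the idea of efficiency relative to a frontier, not the paper's actual DEA model; the spa names and figures are invented.

```python
def ratio_efficiency(units):
    """units: {name: (input, output)}. Score = (output/input) scaled so the
    best performer gets 1.0. DEA generalizes this one-dimensional ratio to
    multiple inputs/outputs via linear programming."""
    raw = {name: out / inp for name, (inp, out) in units.items()}
    best = max(raw.values())
    return {name: score / best for name, score in raw.items()}

# Hypothetical spas: (therapist hours per day, treatments delivered per day)
spas = {"day_spa_A": (4.0, 10.0), "resort_spa_B": (5.0, 20.0), "day_spa_C": (2.0, 6.0)}
eff = ratio_efficiency(spas)  # resort_spa_B defines the frontier with score 1.0
```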

  20. Benchmark calculations on simple reactor systems

    International Nuclear Information System (INIS)

    The development of some calculation methods is described. Tests of these and other methods on benchmark problems are reported. The following items are treated: 1) Criticality of spheres and slabs for monoenergetic neutrons with Carlvik's method. 2) High-precision S_n calculations on critical slabs. 3) Comparison of angular quadrature methods in S_n calculations. 4) Tests of a standard ANISN program. 5) Presence of complex time eigenvalues in a fundamental problem. (Author)

  1. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destination for their holidays. To find ways to promote Repovesi national park is quite significant, in a bid to reinforce the competitiveness of Repovesi national park. The topic of this thesis is both to find good marketing strategies used by the Zhangjiajie national park, via benchmarking and to provide some suggestions to Repovesi national park. The Method used in t...

  2. The Benchmark Beta, CAPM, and Pricing Anomalies.

    OpenAIRE

    Cheol S. Eun

    1994-01-01

    Recognizing that a part of the unobservable market portfolio is certainly observable, the author first reformulates the capital asset pricing model so that asset returns can be related to the 'benchmark' beta computed against a set of observable assets as well as the 'latent' beta computed against the remaining unobservable assets, and then shows that when the pricing effect of the latent beta is ignored, assets would appear to be systematically mispriced even if the capital asset pricing mode...

  3. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248. ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 3.026, year: 2014 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  4. Benchmark calculations of sodium fast critical experiments

    International Nuclear Information System (INIS)

    The high expectations from fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at a critical facility. Benchmark calculations of critical experiments are characterized by the impossibility of complete experiment reconstruction and by large amounts of input data (dependent and independent) with very different reliability. They should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes in geometry parameters, temperature, and isotopic composition of individual materials. The calculations of critical facility experiments are produced for benchmark models generated by specific reconstructing codes, each with its own features when adjusting model parameters, and using a nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to the extra steel allocation in the core and to the ENDF/B nuclear data sources is examined using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. The comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between the calculated and experimental neutronic characteristics. This fact should be considered during the refinement of computational models and for code-verification purposes. (author)

  5. OECD/NEA Burnup Credit Criticality Benchmark

    International Nuclear Information System (INIS)

    The report describes the final result of phase 1A of the Burnup Credit Criticality Benchmark conducted by OECD/NEA. The phase 1A benchmark problem is an infinite array of a simple PWR spent fuel rod. The analysis has been performed for PWR spent fuels of 30 and 40 GWd/t after 1 and 5 years of cooling time. In total, 25 results from 19 institutes of 11 countries have been submitted. For the nuclides in spent fuel, 7 major actinides and 15 major fission products (FP) are selected for the benchmark calculation. In the case of 30 GWd/t burnup, it is found that the major actinides and the major FPs contribute more than 50% and 30% of the total reactivity loss due to burnup, respectively. Therefore, more than 80% of the reactivity loss can be covered by 22 nuclides. However, a larger deviation among the reactivity losses reported by participants has been found for cases including FPs than for cases with only actinides, indicating the existence of relatively large uncertainties in FP cross sections. The large deviation seen also in the case of the fresh fuel has been found to be sufficiently reduced by replacing the cross-section library from ENDF/B-IV with that from ENDF/B-V and taking the known bias of MONK6 into account. (author)
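The reactivity loss compared across participants follows directly from each code's multiplication factors. A worked sketch of that bookkeeping, using the standard definition rho = (k - 1)/k and invented k values (not the benchmark's actual results):

```python
def reactivity(k):
    """Reactivity in pcm from a multiplication factor k: rho = (k - 1)/k."""
    return (k - 1.0) / k * 1e5

def burnup_reactivity_loss(k_fresh, k_burned):
    """Reactivity loss (pcm) between fresh and burned fuel, the quantity
    compared among participants in burnup-credit benchmarks."""
    return reactivity(k_fresh) - reactivity(k_burned)

# Illustrative values only: k-infinity for fresh fuel vs. 30 GWd/t spent fuel
loss = burnup_reactivity_loss(1.25, 1.05)
```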

  6. Uncertainty analysis of benchmark experiments using MCBEND

    International Nuclear Information System (INIS)

    Differences between measurement and calculation for shielding benchmark experiments can arise from uncertainties in a number of areas including nuclear data, radiation transport modelling, source specification, geometry modelling, measurement, and calculation statistics. In order to understand the significance of these differences, detailed sensitivity analysis of these various uncertainties is required. This is of particular importance when considering the requirements for nuclear data improvements aimed at providing better agreement between calculation and measurement. As part of a programme of validation activity associated with the international JEFF data project, the Monte Carlo code MCBEND has been used to analyse a range of benchmark experiments using JEF-2.2 based nuclear data together with modern dosimetry data. This paper describes detailed uncertainty analyses that have been performed for the following Winfrith material benchmark experiments: graphite, water, iron, graphite/steel and steel/water. Conclusions are reported and compared with calculations using other nuclear data libraries. In addition, the effect that nuclear data uncertainties have on the calculated results is discussed by making use of the data adjustment code DATAK. Requirements for further nuclear data evaluation arising from this work are identified. (author)
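When the uncertainty sources listed above (nuclear data, modelling, source specification, measurement, statistics) can be treated as independent, their 1-sigma contributions combine in quadrature. A small sketch with illustrative fractional values, not the actual MCBEND/JEF-2.2 figures:

```python
import math

def combined_uncertainty(components):
    """Combine independent 1-sigma fractional uncertainties in quadrature:
    sigma_total = sqrt(sum of sigma_i^2). Values are fractions (0.05 = 5%)."""
    return math.sqrt(sum(c * c for c in components.values()))

# Hypothetical breakdown for one benchmark measurement point
sources = {"nuclear_data": 0.08, "geometry_model": 0.03,
           "source_spec": 0.04, "statistics": 0.02}
total = combined_uncertainty(sources)
```

Sensitivity analysis then asks how much `total` shrinks when a single component (typically nuclear data) is reduced.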

  7. Perspective: Selected benchmarks from commercial CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, C.J. [Southwest Research Inst., San Antonio, TX (United States). Computational Mechanics Section

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves, and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  8. Proposed Post-LEP benchmarks for supersymmetry

    International Nuclear Information System (INIS)

    We propose a new set of supersymmetric benchmark scenarios, taking into account the constraints from LEP, b → sγ, g_μ - 2 and cosmology. We work in the specific context of the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking masses and vanishing trilinear terms, assuming that R parity is conserved. We propose benchmark points that exemplify the different generic possibilities in this context, including focus-point models, points where coannihilation effects on the relic density are important, and points with rapid relic annihilation via direct-channel Higgs poles. We discuss the principal decays and signatures of the different classes of benchmark scenarios, and make initial estimates of the physics reaches of different accelerators, including the Tevatron collider, the LHC, and e+e- colliders in the sub- and multi-TeV ranges. We stress the complementarity of hadron and lepton colliders, with the latter favoured for non-strongly-interacting particles and precision measurements. We mention features that could usefully be included in future versions of supersymmetric event generators. (orig.)

  9. Introduction to the HPC Challenge Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
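Of the kernels named above, STREAM is the simplest: its "triad" operation a[i] = b[i] + s*c[i] measures sustainable memory bandwidth, counting three 8-byte array accesses per element. The sketch below shows that accounting in plain Python; interpreted lists are orders of magnitude slower than the tuned C/Fortran STREAM, so the MB/s figure only illustrates the bookkeeping, not real hardware bandwidth.

```python
import time

def stream_triad(n, scalar=3.0):
    """STREAM 'triad' kernel a[i] = b[i] + scalar * c[i], with the standard
    bandwidth accounting of three 8-byte accesses per element."""
    b = [1.0] * n
    c = [2.0] * n
    t0 = time.perf_counter()
    a = [b[i] + scalar * c[i] for i in range(n)]
    elapsed = time.perf_counter() - t0
    mbytes = 3 * 8 * n / 1e6  # bytes moved per STREAM convention
    return a, mbytes / elapsed

a, rate_mb_per_s = stream_triad(100_000)
```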

  10. BENCHMARKING ON-LINE SERVICES INDUSTRIES

    Institute of Scientific and Technical Information of China (English)

    John HAMILTON

    2006-01-01

    The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services-industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services-related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.
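The scoring pipeline described ('on-off' components scaling into feature presence, features aggregating into a driver score) can be sketched as below. The feature names and weights are hypothetical placeholders, not the WQA's actual 45 features.

```python
def feature_presence(components):
    """Fraction of binary ('on-off') components present for one feature."""
    return sum(components) / len(components)

def driver_score(features):
    """Average the feature-presence values within one driver sector.
    Names are invented for illustration, not the WQA's own feature set."""
    return sum(feature_presence(c) for c in features.values()) / len(features)

# Hypothetical IT-related driver sector with three features of binary components
it_driver = {"load_time": [1, 1, 0, 1],
             "navigation": [1, 1, 1, 1],
             "search": [0, 1, 0, 1]}
score = driver_score(it_driver)
```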

  11. Simulation benchmarks for low-pressure plasmas: capacitive discharges

    CERN Document Server

    Turner, M M; Donko, Z; Eremin, D; Kelly, S J; Lafleur, T; Mussenbrock, T

    2012-01-01

    Benchmarking is generally accepted as an important element in demonstrating the correctness of computer simulations. In the modern sense, a benchmark is a computer simulation result that has evidence of correctness, is accompanied by estimates of relevant errors, and which can thus be used as a basis for judging the accuracy and efficiency of other codes. In this paper, we present four benchmark cases related to capacitively coupled discharges. These benchmarks prescribe all relevant physical and numerical parameters. We have simulated the benchmark conditions using five independently developed particle-in-cell codes. We show that the results of these simulations are statistically indistinguishable, within bounds of uncertainty that we define. We therefore claim that the results of these simulations represent strong benchmarks, that can be used as a basis for evaluating the accuracy of other codes. These other codes could include other approaches than particle-in-cell simulations, where benchmarking could exa...
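A core step in declaring two independently developed codes "statistically indistinguishable" is a consistency test of their results against combined statistical uncertainty. One common form (the k=2 coverage factor here is an assumed choice, not necessarily the paper's):

```python
import math

def indistinguishable(x1, s1, x2, s2, k=2.0):
    """Two results agree within combined k-sigma statistical uncertainty:
    |x1 - x2| <= k * sqrt(s1^2 + s2^2)."""
    return abs(x1 - x2) <= k * math.sqrt(s1 * s1 + s2 * s2)

# Hypothetical plasma densities (m^-3) from two independent PIC codes
ok = indistinguishable(2.41e14, 0.03e14, 2.45e14, 0.04e14)
```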

  12. Benchmark analysis of MCNP ENDF/B-VI iron

    International Nuclear Information System (INIS)

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets
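Benchmark comparisons like these are usually summarized with calculated-to-experimental (C/E) ratios per benchmark, where 1.0 means perfect agreement. A minimal sketch with invented placeholder numbers, not the actual MCNP/ENDF results:

```python
def c_over_e(calculated, experimental):
    """Calculated-to-experimental (C/E) ratios per benchmark case."""
    return {name: calculated[name] / experimental[name] for name in experimental}

# Placeholder detector responses for two hypothetical benchmark cases
calc = {"iron_sphere": 0.98, "fusion_shield": 1.10}
expt = {"iron_sphere": 1.00, "fusion_shield": 1.05}
ratios = c_over_e(calc, expt)  # values near 1.0 indicate good data
```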

  13. Cross Section Evaluation Group shielding benchmark compilation. Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Rose, P.F.; Roussin, R.W.

    1983-12-01

    At the time of the release of ENDF/B-IV in 1974, the Shielding Subcommittee had identified a series of 12 shielding data testing benchmarks (the SDT series). Most were used in the ENDF/B-IV data testing effort. A new concept and series was begun in the interim, the so-called Shielding Benchmark (SB) series. An effort was made to upgrade the SDT series as far as possible and to add new SB benchmarks. In order to be designated in the SB class, both an experiment and analysis must have been performed. The current recommended benchmark for Shielding Data Testing are listed. Until recently, the philosophy has been to include only citations to published references for shielding benchmarks. It is now our intention to provide adequate information in this volume for proper analysis of any new benchmarks added to the collection. These compilations appear in Section II, with the SB5 Fusion Reactor Shielding Benchmark as the first entry.

  14. Updated requirements for control room annunciation: an operations perspective

    International Nuclear Information System (INIS)

    The purpose of this paper is to describe the results of updating and aligning requirements for annunciation functionality and performance with current expectations for operational excellence. This redefinition of annunciation requirements was undertaken as one component of a project to characterize improvement priorities, establish the operational and economic basis for improvement, and identify preferred implementation options for Ontario Power Generation plants. The updated requirements express the kinds of information support annunciation should provide to Operations staff to support the detection, recognition and response to changes in plant conditions. The updated requirements were developed using several types of information: management and industry expectations for operations excellence, previous definitions of user needs for annunciation, and operational and ergonomic principles. Operations and engineering staff at several stations have helped refine and complete the initial requirements definition. Application of these updated requirements is expected to lead to more effective and task relevant annunciation system improvements that better serve plant operation needs. The paper outlines the project rationale, reviews development objectives, discusses the approaches applied for requirements definition and organization, describes key requirements findings in relation to current operations experience, and discusses the proposed application of these requirements for guiding future annunciation system improvements. (author)

  15. Benchmarking wastewater treatment plants under an eco-efficiency perspective.

    Science.gov (United States)

    Lorenzo-Toja, Yago; Vázquez-Rowe, Ian; Amores, María José; Termes-Rifé, Montserrat; Marín-Navarro, Desirée; Moreira, María Teresa; Feijoo, Gumersindo

    2016-10-01

    The new ISO 14045 framework is expected to slowly start shifting the definition of eco-efficiency toward a life-cycle perspective, using Life Cycle Assessment (LCA) as the environmental impact assessment method together with a system value assessment method for the economic analysis. In the present study, a set of 22 wastewater treatment plants (WWTPs) in Spain were analyzed on the basis of eco-efficiency criteria, using LCA and Life Cycle Costing (LCC) as a system value assessment method. The study is intended to be useful to decision-makers in the wastewater treatment sector, since the combined method provides an alternative scheme for analyzing the relationship between environmental impacts and costs. Two midpoint impact categories, global warming and eutrophication potential, as well as an endpoint single score indicator were used for the environmental assessment, while LCC was used for value assessment. Results demonstrated that substantial differences can be observed between different WWTPs depending on a wide range of factors such as plant configuration, plant size or even legal discharge limits. Based on these results, the benchmarking of wastewater treatment facilities was performed by creating a specific classification and certification scheme. The proposed eco-label for the WWTPs rating is based on the integration of the three environmental indicators and an economic indicator calculated within the study under the new eco-efficiency framework. PMID:27235897
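ISO 14045 defines eco-efficiency as the ratio of product-system value to environmental impact. A toy sketch of ranking plants by such a ratio; the choice of "m3 treated per euro" as value, CO2-eq as impact, and all figures are invented for illustration, not the paper's indicators or data.

```python
def eco_efficiency(plants):
    """Eco-efficiency = system value / environmental impact (ISO 14045 ratio).
    plants: {name: (m3 treated per year, life-cycle cost EUR, kg CO2-eq)}."""
    return {name: (volume / cost) / co2
            for name, (volume, cost, co2) in plants.items()}

# Two hypothetical plants
plants = {"WWTP_A": (1.0e6, 2.0e5, 4.0e5),
          "WWTP_B": (8.0e5, 1.0e5, 6.0e5)}
scores = eco_efficiency(plants)
best = max(scores, key=scores.get)
```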

  16. Rules for scoring respiratory events in sleep: update of the 2007 AASM Manual for the Scoring of Sleep and Associated Events. Deliberations of the Sleep Apnea Definitions Task Force of the American Academy of Sleep Medicine.

    Science.gov (United States)

    Berry, Richard B; Budhiraja, Rohit; Gottlieb, Daniel J; Gozal, David; Iber, Conrad; Kapur, Vishesh K; Marcus, Carole L; Mehra, Reena; Parthasarathy, Sairam; Quan, Stuart F; Redline, Susan; Strohl, Kingman P; Davidson Ward, Sally L; Tangredi, Michelle M

    2012-10-15

    The American Academy of Sleep Medicine (AASM) Sleep Apnea Definitions Task Force reviewed the current rules for scoring respiratory events in the 2007 AASM Manual for the Scoring of Sleep and Associated Events to determine if revision was indicated. The goals of the task force were (1) to clarify and simplify the current scoring rules, (2) to review evidence for new monitoring technologies relevant to the scoring rules, and (3) to strive for greater concordance between adult and pediatric rules. The task force reviewed the evidence cited by the AASM systematic review of the reliability and validity of scoring respiratory events published in 2007 and relevant studies that have appeared in the literature since that publication. Given the limitations of the published evidence, a consensus process was used to formulate the majority of the task force recommendations concerning revisions. The task force made recommendations concerning recommended and alternative sensors for the detection of apnea and hypopnea to be used during diagnostic and positive airway pressure (PAP) titration polysomnography. An alternative sensor is used if the recommended sensor fails or the signal is inaccurate. The PAP device flow signal is the recommended sensor for the detection of apnea, hypopnea, and respiratory effort related arousals (RERAs) during PAP titration studies. Appropriate filter settings for recording (display) of the nasal pressure signal to facilitate visualization of inspiratory flattening are also specified. The respiratory inductance plethysmography (RIP) signals to be used as alternative sensors for apnea and hypopnea detection are specified. The task force reached consensus on use of the same sensors for adult and pediatric patients except for the following: (1) the end-tidal PCO(2) signal can be used as an alternative sensor for apnea detection in children only, and (2) polyvinylidene fluoride (PVDF) belts can be used to monitor respiratory effort (thoracoabdominal

  17. Interstitial brachytherapy dosimetry update

    International Nuclear Information System (INIS)

    In March 2004, the American Association of Physicists in Medicine (AAPM) published an update to the AAPM Task Group No. 43 Report (TG-43) which was initially published in 1995. This update was pursued primarily due to the marked increase in permanent implantation of low-energy photon-emitting brachytherapy sources in the United States over the past decade, and clinical rationale for the need of accurate dosimetry in the implementation of interstitial brachytherapy. Additionally, there were substantial improvements in the brachytherapy dosimetry formalism, accuracy of related parameters and methods for determining these parameters. With salient background, these improvements are discussed in the context of radiation dosimetry. As an example, the impact of this update on the administered dose is assessed for the model 200 103Pd brachytherapy source. (authors)
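The TG-43 formalism mentioned above computes dose rate as a product of air-kerma strength, dose-rate constant, geometry function, radial dose function, and an anisotropy term. A sketch of the simplified 1D (point-source) form, D(r) = S_k * Lambda * (r0/r)^2 * g(r) * phi_an; the linear g(r) and the Lambda value below are made-up stand-ins for a source's consensus tables, not the model 200 103Pd data.

```python
def tg43_1d_dose_rate(r, S_k, Lambda, g, phi_an=1.0, r0=1.0):
    """1D (point-source) TG-43 dose rate in cGy/h at distance r (cm).
    S_k: air-kerma strength (U), Lambda: dose-rate constant (cGy h^-1 U^-1),
    g: radial dose function (g(r0) = 1), phi_an: anisotropy factor."""
    return S_k * Lambda * (r0 / r) ** 2 * g(r) * phi_an

# Hypothetical radial dose function; real ones come from tabulated consensus data
g_toy = lambda r: max(0.0, 1.0 - 0.05 * (r - 1.0))

rate = tg43_1d_dose_rate(2.0, S_k=1.0, Lambda=0.7, g=g_toy)
```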

  18. Statistical Handbook: 1999 updates

    International Nuclear Information System (INIS)

    This publication consists of a series of tables containing the updated figures for: Crown land sales, drilling activity, established oil and natural gas reserves, crude oil, synthetic oil and bitumen and natural gas production, net cash expenditures of the petroleum industry, value of producers' sales, average crude oil and natural gas prices, Canadian demand for motor gasoline, diesel and heavy fuel oil, yields of refined petroleum products in Canada, and Canadian exports of energy materials. The data provided covers varying periods, in all cases updated to include data for 1999

  19. Update of European bioethics

    DEFF Research Database (Denmark)

    Rendtorff, Jacob Dahl

    2015-01-01

    This paper presents an update of the research on European bioethics undertaken by the author together with Professor Peter Kemp since the 1990s, on Basic ethical principles in European bioethics and biolaw. In this European approach to basic ethical principles in bioethics and biolaw, the principles of autonomy, dignity, integrity and vulnerability are proposed as the most important ethical principles for respect for the human person in biomedical and biotechnological development. This approach to bioethics and biolaw is presented here in a short updated version that integrates the earlier research in a presentation of the present understanding of the basic ethical principles in bioethics and biolaw.

  20. Updating of the bovine neosporosis

    Directory of Open Access Journals (Sweden)

    Alexander Martínez Contreras

    2012-06-01

    Full Text Available In the fields of medicine and bovine production, there is a wide variety of diseases affecting reproduction, in relation to the number of live births, the interval between births and open days, among others. Some of these diseases produce abortions and embryonic death, which explain the alteration of reproductive parameters. Many of these diseases have an infectious origin, such as parasites, bacteria, viruses and fungi, which are transmitted among animals. Besides, some of them have zoonotic features that generate problems for human health. Among these agents, the protozoan Neospora caninum stands out. Its life cycle is fulfilled in several species of animals, such as the dog and the coyote, which act as its definitive hosts, while cattle act as its intermediate host. Neospora caninum causes reproductive disorders, clinical manifestations and decreased production in infected animals, which affects the productivity of small, medium and large producers. Because of this, diagnostic techniques that allow understanding the epidemiological behavior of this disease have been developed. However, in spite of being a major agent in bovine reproductive health, few studies have been undertaken to determine the prevalence of this agent around the world. Therefore, the objective of this review was to collect updated information on the behavior of this parasite, targeting its epidemiology, its symptoms, its impact on production and the methods for its control and prevention.

  1. Adiabatic microcalorimetry in shielding benchmark experiments

    International Nuclear Information System (INIS)

    The application of a newly developed microcalorimeter for measuring energy-deposition rates in the mixed radiation fields of zero-energy reactors and shielding benchmark experiments is described. Methods of calculation for energy-deposition (n + γ) are usually validated by measuring the neutron component with the aid of activation detectors and the gamma-ray component using TLDs or ion-chambers. The major limitation on the use of calorimeters in low-power radiation fields has been lack of sensitivity. The aim of the present work has been to investigate the performance of a calorimeter which can measure heating rates in the range down to 10 μW/g by comparison with conventional dosimetry techniques in a graphite benchmark experiment conducted in the NESSUS reference field in the NESTOR reactor at Winfrith. Major problems have been encountered with the neutron sensitivity of both TLDs and gamma-ray ion-chambers. When appropriate corrections are made, good agreement can be achieved between all the dosimetry techniques, and the results provide a benchmark test of calculational methods for energy-deposition in graphite. In power reactors, steel-walled calorimeters are used for the dosimetry of materials such as graphite, and the net effect of electron migration between the sample and steel walls significantly increases the heating rate in the specimen. In the NESSUS experiments, an increase of 18% was observed in the graphite heating rate above that expected from the enhanced gamma source, when the calorimeter wall was changed from graphite to iron. (author)

  2. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
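The Monod-type rate laws used for the terminal electron accepting processes can be sketched as dual-Monod kinetics. The sketch below is a minimal illustration under invented parameter values and names; the benchmark's actual rate expressions and constants are defined in the problem set itself.

```python
# Minimal sketch of a dual-Monod rate law, as commonly used for
# microbially mediated terminal electron accepting processes (TEAPs).
# All names and parameter values are illustrative, not taken from
# the Rifle benchmark specification.

def monod_rate(v_max, donor, k_donor, acceptor, k_acceptor):
    """Rate limited by both the electron donor (e.g. acetate) and the
    terminal electron acceptor (e.g. U(VI), Fe(III), or sulfate)."""
    return v_max * (donor / (k_donor + donor)) * (acceptor / (k_acceptor + acceptor))

# U(VI) bioreduction rate at background vs. amended acetate levels (mol/L assumed)
low = monod_rate(v_max=1e-9, donor=1e-5, k_donor=1e-4, acceptor=1e-6, k_acceptor=1e-7)
high = monod_rate(v_max=1e-9, donor=3e-3, k_donor=1e-4, acceptor=1e-6, k_acceptor=1e-7)
assert 0 < low < high < 1e-9  # amendment speeds bioreduction, capped at v_max
```

The saturating form is what gives the simulated acetate pulses their dynamic effect: well below k_donor the rate tracks acetate availability almost linearly, while far above it the rate plateaus near v_max.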

  3. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
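Two of the three compared methods, ratio measures and regression residuals, can be sketched as follows. The data and driver variables below are synthetic and chosen only for illustration; the study's own variable set is richer.

```python
# Sketch: ranking areas by (1) a per-capita ratio measure and
# (2) residuals from a regression of energy use on its drivers.
# All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
population = rng.uniform(50_000, 500_000, size=8)   # people per area
income = rng.uniform(15_000, 40_000, size=8)        # per-capita income
energy = 20.0 * population + 0.3 * income + rng.normal(0, 1e5, size=8)  # kWh

# (1) Ratio measure: energy per capita; lower looks "more efficient",
# but it ignores structural drivers of demand.
per_capita = energy / population
ratio_rank = np.argsort(per_capita)

# (2) Regression residuals: fit energy on drivers; an area using less
# than the model predicts (negative residual) counts as efficient.
X = np.column_stack([np.ones_like(population), population, income])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
residuals = energy - X @ beta
residual_rank = np.argsort(residuals)

# The two rankings need not coincide, which motivates comparing methods.
print(ratio_rank.tolist(), residual_rank.tolist())
```

Data envelopment analysis, the third method, generalizes the residual idea by constructing an efficiency frontier from multiple inputs and outputs rather than a single fitted line.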

  4. Calculation of Sodium Cooled Fast Reactor Concepts. Preliminary results of an OECD NEA benchmark calculation

    International Nuclear Information System (INIS)

    In this paper we present the results of our calculations for the OECD NEA benchmark on generation-IV advanced sodium-cooled fast reactor (SFR) concepts. The aim of this benchmark is to study the core design features as well as the feedback and transient behaviour of four SFR concepts. At the present state, static global neutronic parameters (e.g. keff, effective delayed neutron fraction, Doppler constant, sodium void worth, control rod worth, power distribution) and burnup were calculated for both the beginning and the end of cycle. In the benchmark definition, the following core descriptions were specified: two large cores (3600 MW thermal power) with carbide and oxide fuel, and two medium cores (1000 MW thermal power) with metal and oxide fuel. The calculations were performed using the ECCO module of the ERANOS code system at the subassembly level, and with the KIKO3DMG code at the core level. The former code produced the assembly-homogenized cross sections applying 1968-group collision probability calculations; the latter determined the core multiplication factor and the radial power distribution using a 3D nodal diffusion method in 9 energy groups. We examined the effects of increasing the number of energy groups to 17 in the core calculation. The reflector and shield assembly homogenization methodology was also tested: a “homogeneous region model” was compared with a “concentric cylindrical core” calculation. The breeding ratio was also determined for the beginning of cycle. (author)

  5. Analyzing the Basic Definition of Foreign Direct Investment Statistics

    Institute of Scientific and Technical Information of China (English)

    高敏雪; 谷泓

    2005-01-01

    Foreign direct investment (FDI) is a complicated kind of economic activity. In this paper, the author systematically discusses the benchmark definition of FDI, introduces the FDI statistics of several countries, and points out problems in China's FDI statistics.

  6. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated on Wednesday 14 June between 8.00 p.m. and midnight. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  7. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated on Monday 3 July between 8.00 p.m. and 3.00 a.m. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  8. Update of telephone exchange

    CERN Multimedia

    2006-01-01

    As part of the upgrade of telephone services, the CERN switching centre will be updated between Monday 23 October 8.00 p.m. and Tuesday 24 October 2.00 a.m. Telephone services may be disrupted and possibly even interrupted during this operation. We apologise in advance for any inconvenience this may cause. CERN TELECOM Service

  10. Updating: Learning versus Supposing

    Science.gov (United States)

    Zhao, Jiaying; Crupi, Vincenzo; Tentori, Katya; Fitelson, Branden; Osherson, Daniel

    2012-01-01

    Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event "A" after learning "B" should equal the conditional probability of "A" given "B" prior to learning "B". We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial…

  11. Updated opal opacities

    International Nuclear Information System (INIS)

    The reexamination of astrophysical opacities has eliminated gross discrepancies between a variety of observations and theoretical calculations; thus allowing for more detailed tests of stellar models. A number of such studies indicate that model results are sensitive to modest changes in the opacity. Consequently, it is desirable to update available opacity databases with recent improvements in physics, refinements of element abundance, and other such factors affecting the results. Updated OPAL Rosseland mean opacities are presented. The new results have incorporated improvements in the physics and numerical procedures as well as corrections. The main opacity changes are increases of as much as 20% for Population I stars due to the explicit inclusion of 19 metals (compared to 12 metals in the earlier calculations) with the other modifications introducing opacity changes smaller than 10%. In addition, the temperature and density range covered by the updated opacity tables has been extended. As before, the tables allow accurate interpolation in density and temperature as well as hydrogen, helium, carbon, oxygen, and metal mass fractions. Although a specific metal composition is emphasized, opacity tables for different metal distributions can be made readily available. The updated opacities are compared to other work. copyright 1996 The American Astronomical Society
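The interpolation in density and temperature that such opacity tables support can be sketched as bilinear interpolation on a logarithmic grid. The tiny table below is invented, not OPAL data, and real opacity tables also interpolate in composition (hydrogen, helium, and metal mass fractions).

```python
# Illustrative bilinear interpolation of a Rosseland mean opacity table
# in (log T, log rho), of the kind needed by stellar-structure codes
# that consume tabulated opacities. The table values are made up.
import numpy as np

logT = np.array([5.0, 5.5, 6.0])          # log10 temperature grid
logrho = np.array([-6.0, -5.0, -4.0])     # log10 density grid
logkappa = np.array([[0.2, 0.5, 0.9],     # log10 opacity at each (T, rho)
                     [0.1, 0.4, 0.8],
                     [0.0, 0.3, 0.7]])

def interp_opacity(lt, lr):
    """Bilinear interpolation of log kappa at (log T, log rho)."""
    i = np.searchsorted(logT, lt) - 1
    j = np.searchsorted(logrho, lr) - 1
    t = (lt - logT[i]) / (logT[i+1] - logT[i])
    u = (lr - logrho[j]) / (logrho[j+1] - logrho[j])
    return ((1-t)*(1-u)*logkappa[i, j] + t*(1-u)*logkappa[i+1, j]
            + (1-t)*u*logkappa[i, j+1] + t*u*logkappa[i+1, j+1])

# At an interior grid point the interpolant reproduces the table exactly
assert abs(interp_opacity(5.5, -5.0) - 0.4) < 1e-12
```

Interpolating in log kappa rather than kappa keeps the relative interpolation error roughly uniform across the many decades the tables span.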

  12. North Sea update

    International Nuclear Information System (INIS)

    The article deals with offshore activity in the North Sea, bringing together a special update feature for the petroleum industries of the United Kingdom, Norway, Denmark and the Netherlands. The total capital expenditure required for the period 1995 to 1998 for the North Sea area, including exploration, development projects and well abandonment, is discussed and presented. 20 figs., 5 tabs

  13. Livestock Update. August 2014

    OpenAIRE

    Greiner, Scott Patrick; McCann, Mark A.; Saville, Joi; Neil, Scott J.; Harmon, Deidre D.; Callan, Peter; Estienne, Mark Joseph, 1960-; Wiegert, Jeffrey; Clark, Sherrie

    2014-01-01

    This LIVESTOCK UPDATE contains timely subject matter on beef cattle, horses, poultry, sheep, swine, and related junior work. This issue includes: Dates to Remember; August Herd Management Advisor; Weaning Nutrition and Management; Breeding Season Management - Ewes and Rams; 2014 Small Ruminant Field Day; Heat Stress and Small-Scale and Niche Market Pork Production in Virginia,

  14. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of...

  15. BEGAFIP. Programming service, development and benchmark calculations

    International Nuclear Information System (INIS)

    This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine making it possible to calculate neutron sources, and the exchange of the fission yields and branching ratios in the data library for those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared with those obtained from the more sophisticated code CASMO. (author)

  16. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    Full Text Available In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation as well, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced to allow simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  17. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Full Text Available At present, steelworks are at a stage of permanent change, marked by ever stronger competitive pressure. Managers must therefore solve the questions of how to decrease production costs, how to overcome the competition and how to survive in the world market. Still more attention should be paid to modern managerial methods of market research and comparison with the competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare chosen steelworks and to indicate new directions for their development, with the possibility of increasing the productivity of steel production.

  18. Consistent utilization of shielding benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    D' Angelo, A. (Univ. of Calabria, Cosenza, Italy); Oliva, A.; Palmiotti, G.; Salvatores, M.; Zero, S.

    1978-03-01

    Benchmark experiments of neutron propagation in iron and iron-sodium mixtures were used to generate an 'adjusted' ENDF/B data file for iron, Mat = 1192. In particular, the secondary neutron energy distribution in the continuous level energy range was adjusted by use of such high-energy responses as the 32S(n,p)32P reaction, which are significantly sensitive to changes in that probability distribution. The experimental analysis used carefully checked two-dimensional transport methods to avoid bias in the adjustment procedure due to inadequate calculational methods. 10 figures, 13 tables.

  19. Benchmarking Finnish and Irish Equestrian Tourism

    OpenAIRE

    Räbinä, Riikka-Liisa

    2010-01-01

    The purpose of this thesis was to benchmark Finnish and Irish equestrian tourism. One of the goals was also to examine the current status of equestrian tourism in Finland, as well as the use of the Finn-horse in equestrian tourism services. Improvement suggestions were created based on research about Irish equestrian tourism as well as the Irish Draught Horse and the Irish Sport Horse. There was no commissioner for the thesis. The topic arose from personal interest in equestrian tourism and t...

  20. COVE 2A Benchmarking calculations using NORIA

    International Nuclear Information System (INIS)

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs

  1. Two dimensional shielding benchmark analysis for sodium

    International Nuclear Information System (INIS)

    Results of the analysis of a shielding benchmark experiment on 'fast reactor source' neutron transport through 1.8 metres of sodium are presented in this paper. The two-dimensional discrete ordinates code DOT and the DLC 37 coupled neutron-gamma multigroup cross sections were used in the analyses. These calculations are compared with measurements of: (i) the neutron spectral distribution given by activation detector response, and (ii) gamma-ray doses. The agreement is found to be within ±30 per cent in the fast spectrum region, and within a factor of 3.5 in the thermal region. For gamma rays, the calculations overpredict the dose rate by a factor of four. (author)

  2. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    While the knowledge management literature has addressed the explicit and tacit skills needed for successful performance in the modern enterprise, little attention has been paid to date in this particular literature as to how these wide-ranging skills may be suitably acquired during the course of an... We find this experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill...

  3. Memory Updating and Mental Arithmetic

    OpenAIRE

    Han, Cheng-Ching; Yang, Tsung-Han; Lin, Chia-Yuan; Yen, Nai-Shing

    2016-01-01

    Is domain-general memory updating ability predictive of calculation skills or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure for calculating skill as this operation requires the accurate maintenance and updating of information in addition to skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences with regard to a task updating num...

  4. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  5. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
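The core operation behind a benchmarking database of this kind can be sketched as placing one building's energy use intensity (EUI) within a peer distribution drawn from survey data. The peer values below are invented; a real tool such as Cal-Arch would first filter the survey records to comparable buildings before computing any placement.

```python
# Sketch: percentile placement of a building's EUI among peers.
# Peer values are synthetic stand-ins for survey data (kBtu/ft2-yr).
from bisect import bisect_left

peer_euis = sorted([42.0, 55.3, 61.1, 68.4, 72.9, 80.2, 95.7, 110.4])

def eui_percentile(eui, peers):
    """Fraction of peer buildings whose EUI is below the given value."""
    return bisect_left(peers, eui) / len(peers)

assert eui_percentile(70.0, peer_euis) == 0.5   # uses less energy than half of peers
assert eui_percentile(30.0, peer_euis) == 0.0   # lower EUI than every peer
```

A low percentile flags a building as relatively efficient within its peer group, which is the "starting point for targeting savings opportunities" the abstract describes.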

  6. Numerical benchmarks for MTR fuel assemblies with burnable poison

    International Nuclear Information System (INIS)

    This work presents a preliminary version of a set of burnup-dependent numerical benchmarks of MTR fuel assemblies using burnable poisons. The numerical benchmark calculations were carried out using two different calculation methodologies: a Monte Carlo methodology using the coupled MCNP-ORIGEN codes and a deterministic methodology using the CONDOR collision probability code. The main purpose of this work is to provide a numerical benchmark for several geometries, for example, the number and diameter of the cadmium wires. The numerical benchmark provides fuel meat and cadmium number density information and the geometry and material data of the calculated systems. These benchmarks provide information for the validation of MTR FA cell codes. This paper is preliminary work toward a 3-dimensional numerical benchmark for research reactors using MTR fuel assemblies with burnable poisons. A short description of the MCNP-ORIGEN coupling method and the CONDOR code is given in the present paper. (author)

  7. Investigation of the PWR subchannel void distribution benchmark (OECD/NRC PSBT benchmark) using ANSYS CFX

    International Nuclear Information System (INIS)

    The presented CFD investigations using ANSYS CFX 13.0 are focused on the “Phase I - Void Distribution Benchmark, Exercise 1 - Steady-state Single Subchannel Benchmark” of the OECD/NRC PSBT benchmark. In this particular subsection of the entire benchmark, flow through a test section representing a central subchannel of a PWR fuel assembly under nucleate subcooled boiling conditions is investigated. The investigations using ANSYS CFX were carried out for 10 different test conditions (with respect to pressure, inlet fluid temperature, power and mass flow rate) from the PSBT test matrix. Emphasis was given to a CFD best practice guidelines oriented investigation of the subcooled nucleate boiling flow through the subchannel configuration of the test section. By comparing CFD results to the benchmark data, reasonably good agreement could be observed. Depending on the applied CFD submodels, the results differ from the measured data by ±8% with respect to the cross-sectional averaged void fraction at the measurement plane, where the averaged void fraction varied between 0.038 and 0.62 for the test conditions under investigation. (author)

  8. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burnup benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle

  9. Benchmarking – A tool for judgment or improvement?

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda of the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and us...

  10. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    OpenAIRE

    Jahn, Franziska; Baltschukat, Klaus; Buddrus, Uwe; Günther, Uwe; Kutscha, Ansgar; Liebe, Jan-David; Lowitsch, Volker; Schlegel, Helmut; Winter, Alfred

    2015-01-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benc...

  11. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    OpenAIRE

    Adriana-Mihaela IONESCU; Cristina Elena BIGIOI

    2016-01-01

    Launching a new business or project leads managers to make decisions and choose strategies that will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organiza...

  12. The Inverted Pendulum Benchmark in Nonlinear Control Theory: A Survey

    OpenAIRE

    Olfa Boubaker

    2013-01-01

    For at least fifty years, the inverted pendulum has been the most popular benchmark, among others, in nonlinear control theory. The fundamental focus of this work is to enhance the wealth of this robotic benchmark and provide an overall picture of historical and current trend developments in nonlinear control theory, based on its simple structure and its rich nonlinear model. In this review, we will try to explain the high popularity of such a robotic benchmark, which is frequently used to re...

  13. The importance of an accurate benchmark choice: the spanish case

    OpenAIRE

    Ruiz Campo, Sofía; Monjas Barroso, Manuel

    2012-01-01

    The performance of a fund cannot be judged unless it is first measured, and measurement is not possible without an objective frame of reference. A benchmark serves as a reliable and consistent gauge of the multiple dimensions of performance: return, risk and correlation. The benchmark must be a fair target for investment managers and be representative of the relevant opportunity set. The objective of this paper is to analyse whether the different benchmarks generally used to me...

  14. Indian Management Education and Benchmarking Practices: A Conceptual Framework

    OpenAIRE

    Dr. Dharmendra MEHTA; Er. Sunayana SONI; Dr. Naveen K MEHTA; Dr. Rajesh K MEHTA

    2015-01-01

    Benchmarking can be defined as a process through which practices are analyzed to provide a standard measurement (‘benchmark’) of effective performance within an organization (such as a university/institute). Benchmarking is also used to compare performance with other organizations and other sectors. As management education is passing through challenging times, a modern management tool like benchmarking is required to improve the quality of management education and to overcome the challen...

  15. Empirical policy functions as benchmarks for evaluation of dynamic models

    OpenAIRE

    Bazdresch, Santiago; Kahn, R. Jay; Whited, Toni

    2011-01-01

    We describe a set of model-dependent statistical benchmarks that can be used to estimate and evaluate dynamic models of firms' investment and financing. The benchmarks characterize the empirical counterparts of the models' policy functions. These empirical policy functions (EPFs) are intuitively related to the corresponding model, their features can be estimated very easily and robustly, and they describe economically important aspects of firms' dynamic behavior. We calculate the benchmarks f...

  16. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem were obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.
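
    Verifying optimality with the Kuhn-Tucker (KKT) conditions, as the abstract mentions, amounts to checking stationarity, feasibility, and complementary slackness at the candidate point. A generic sketch for min f(x) s.t. g_i(x) ≤ 0 (the toy problem in the test is our own, not the benchmark problem from the paper):

```python
def kkt_residual(grad_f, grads_g, g_vals, lams):
    """Max violation of the Kuhn-Tucker conditions for
    min f(x) s.t. g_i(x) <= 0 at a candidate point."""
    n = len(grad_f)
    # Stationarity: grad f + sum_i lam_i * grad g_i = 0
    stationarity = [grad_f[j] + sum(l * gg[j] for l, gg in zip(lams, grads_g))
                    for j in range(n)]
    viol = max(abs(s) for s in stationarity)
    for l, g in zip(lams, g_vals):
        viol = max(viol, -l)          # dual feasibility: lam >= 0
        viol = max(viol, g)           # primal feasibility: g <= 0
        viol = max(viol, abs(l * g))  # complementary slackness: lam * g = 0
    return viol
```

    For example, minimizing x² + y² subject to x + y ≥ 1 has its optimum at (0.5, 0.5) with multiplier λ = 1, and the residual there is exactly zero.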

  17. 2009 Purchasing Power Parity Update Selected Economies in Asia and the Pacific, A Research Study

    OpenAIRE

    Asian Development Bank

    2012-01-01

    This report presents the research initiative to explore an alternative methodology for extrapolating purchasing power parities (PPPs) for 21 participating economies in the Asia and Pacific region. The 2009 PPP Update provides an intermediate benchmark and more firmly based real expenditures and price level indexes for 2009 than would have been possible using the conventional extrapolation technique. The results include PPP-based gross domestic product and its major aggregates of actual final ...

  18. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  19. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under the circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  20. MARS code developments, benchmarking and applications

    International Nuclear Information System (INIS)

    Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new Graphical-User Interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station and a muon storage ring

  2. REVISED STREAM CODE AND WASP5 BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    Chen, K

    2005-05-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
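
    The spurious oscillations described above are characteristic of non-upwinded discretizations of the advection equation. For illustration only (this is a generic first-order upwind scheme, not the actual STREAM or WASP5 algorithm), the following sketch stays monotone, i.e. oscillation-free, whenever the CFL number u·Δt/Δx is at most 1:

```python
def advect_upwind(c, u, dx, dt, nsteps):
    """First-order upwind scheme for dc/dt + u * dc/dx = 0 (u > 0)."""
    cfl = u * dt / dx
    assert 0.0 < cfl <= 1.0, "CFL condition violated: scheme would be unstable"
    c = list(c)
    for _ in range(nsteps):
        new = c[:]
        for i in range(1, len(c)):
            # Take the gradient from the upstream side only
            new[i] = c[i] - cfl * (c[i] - c[i - 1])
        new[0] = c[0]  # fixed upstream boundary (release concentration)
        c = new
    return c
```

    The price of monotonicity is numerical diffusion: the profile smears rather than oscillates, except at CFL = 1, where the scheme translates the profile exactly.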

  3. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a ‘learning by doing ethos,’ which permeates the entire curriculum. This paper documents benchmarking of education for managing innovation. Using business simulation for Bachelors of Business, Year 3 learners, in a business strategy class, learners explored through a simulated environment the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  4. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of an underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte Carlo methods, the successive orders of scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters both for reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al. among others. However, the case of the elongated phase functions such as for a cloud and with a high angular resolution is presented here for the first time. Also, in contrast to other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  5. Benchmark analysis of KRITZ-2 critical experiments

    International Nuclear Information System (INIS)

    In the KRITZ-2 critical experiments, criticality and pin power distributions were measured at room temperature and high temperature (about 245 °C) for three different cores (KRITZ-2:1, KRITZ-2:13, KRITZ-2:19) loading slightly enriched UO2 or MOX fuels. Recently, international benchmark problems were provided by ORNL and OECD/NEA based on the KRITZ-2 experimental data. The published experimental data for the system with slightly enriched fuels at high temperature are rare in the world and they are valuable for nuclear data testing. Thus, the benchmark analysis was carried out with a continuous-energy Monte Carlo code MVP and its four nuclear data libraries based on JENDL-3.2, JENDL-3.3, JEF-2.2 and ENDF/B-VI.8. As a result, fairly good agreements with the experimental data were obtained with all libraries for the pin power distributions. However, the JENDL-3.3 and ENDF/B-VI.8 give under-prediction of criticality and too negative isothermal temperature coefficients for slightly enriched UO2 cores, although the older nuclear data JENDL-3.2 and JEF-2.2 give rather good agreements with the experimental data. From the detailed study with an infinite unit cell model, it was found that the differences among the results with different libraries are mainly due to the different fission cross section of U-235 in the energy range below 1.0 eV. (author)
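
    The isothermal temperature coefficient discussed above follows from criticality evaluated at two temperatures, via the reactivity ρ = (k − 1)/k. A minimal sketch, with purely illustrative k-eff values rather than KRITZ-2 data:

```python
def reactivity(k_eff):
    """Static reactivity (dk/k) from an effective multiplication factor."""
    return (k_eff - 1.0) / k_eff

def isothermal_temp_coeff(k_cold, t_cold, k_hot, t_hot):
    """Isothermal temperature coefficient in pcm/degC
    from two critical states at different temperatures."""
    d_rho = reactivity(k_hot) - reactivity(k_cold)
    return 1.0e5 * d_rho / (t_hot - t_cold)
```

    A library that under-predicts k at high temperature relative to the cold state yields a coefficient that is "too negative", which is exactly the discrepancy the abstract reports for JENDL-3.3 and ENDF/B-VI.8.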

  6. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  7. Hospital Energy Benchmarking Guidance - Version 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.
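
    Floor area is the most common normalizing factor for whole-building energy metrics, and weather is another. A crude sketch of the arithmetic (the degree-day scaling below is a simplification we introduce for illustration, not the framework's actual protocol, which separates base load from weather-dependent load):

```python
def energy_use_intensity(annual_kwh, floor_area_m2):
    """Site energy use intensity (EUI): kWh per square metre per year."""
    return annual_kwh / floor_area_m2

def weather_normalized_eui(annual_kwh, floor_area_m2, hdd_actual, hdd_typical):
    """Naive weather normalization: scale EUI by the ratio of typical to
    actual heating degree-days, as if all load were weather-dependent."""
    return energy_use_intensity(annual_kwh, floor_area_m2) * hdd_typical / hdd_actual
```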

  8. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem is comprised of a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.
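
    A 2-group macroscopic cross-section library of the kind described supports very compact deterministic estimates; for instance, the infinite-medium multiplication factor follows from the standard two-group neutron balance. A sketch (the cross-section values in the test are invented for illustration, not the benchmark's library):

```python
def k_infinity(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Two-group infinite-medium multiplication factor (no leakage).

    Group 1 = fast, group 2 = thermal; s12 is the downscatter
    cross section, sa the absorption, nu_sf the nu-fission."""
    removal1 = sa1 + s12  # fast neutrons are lost to absorption or downscatter
    # Fast fissions plus thermal fissions fed by slowing-down neutrons
    return nu_sf1 / removal1 + (s12 / removal1) * (nu_sf2 / sa2)
```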

  9. Data Testing CIELO Evaluations with ICSBEP Benchmarks

    International Nuclear Information System (INIS)

    We review criticality data testing performed at Los Alamos with a combination of ENDF/B-VII.1 + potential CIELO nuclear data evaluations. The goal of CIELO is to develop updated, best available evaluated nuclear data files for 1H, 16O, 56Fe, 235,238U and 239Pu, because the major international evaluated nuclear data libraries do not agree on the internal cross section details of these most important nuclides.

  10. The centriole adjunct of insects: Need to update the definition.

    Science.gov (United States)

    Dallai, Romano; Paoli, Francesco; Mercati, David; Lupetti, Pietro

    2016-04-01

    The ancestral eukaryotes presumably had an MTOC (microtubule organizing center) which later gave origin to the centriole and the flagellar axoneme. The centrosome of insect early spermatids is in general composed of two components: a single centriole and a cloud of electron-dense pericentriolar material (PCM). During spermiogenesis, the centriole changes its structure and gives rise to a flagellar axoneme, while the proteins of PCM, gamma tubulin in particular, are involved in the production of microtubules for the elongation and shaping of spermatid components. At the end of spermiogenesis, in many insects, additional material is deposited beneath the nucleus to form the centriole adjunct (ca). This material can also extend along the flagellum in two accessory bodies (ab) flanking the axoneme. Among Homoptera Sternorrhyncha, a progressive modification of their sperm flagella until complete disappearance has been verified. In the Archaeococcidae Matsucoccus feytaudi, however, a motile sperm flagellum-like structure is formed by an MTOC activity. This finding gives support to the hypothesis that an evolutionary reversal has occurred in the group and that the cell, when a non-functional centriole is present, activates an ancestral structure, an MTOC, to form a polarized motile bundle of microtubules restoring sperm motility. The presence and extension of the centriole adjunct in the different insect orders is also listed. PMID:26899558

  11. Sensors, Update 9

    Science.gov (United States)

    Baltes, Henry; Göpel, Wolfgang; Hesse, Joachim

    2001-10-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Coverage includes current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Each volume is divided into three sections. Sensor Technology, reviews highlights in applied and basic research, Sensor Applications, covers new or improved applications of sensors, Sensor Markets, provides a survey of suppliers and market trends for a particular area. With this unique combination of information in each volume, Sensors Update will be of value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  12. Sensors, Update 10

    Science.gov (United States)

    Baltes, Henry; Fedder, Gary K.; Korvink, Jan G.

    2002-04-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Coverage includes current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Each volume is divided into three sections. Sensor Technology, reviews highlights in applied and basic research, Sensor Applications, covers new or improved applications of sensors, Sensor Markets, provides a survey of suppliers and market trends for a particular area. With this unique combination of information in each volume, Sensors Update will be of value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  13. Sensors, Update 11

    Science.gov (United States)

    Baltes, Henry; Fedder, Gary K.; Korvink, Jan G.

    2003-03-01

    Sensors Update ensures that you stay at the cutting edge of the field, presenting the current highlights of sensor and related microelectromechanical systems technology. Coverage includes most recent developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles based on micro- and nanotechnology. Each volume is divided into three sections: Sensor Technology reviews highlights in applied and basic research, Sensor Applications covers new or improved applications of sensors and Sensor Markets provides a survey of suppliers and market trends for a particular area. With this unique combination of information in each volume, Sensors Update is of must-have value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  14. Sensors, Update 2

    Science.gov (United States)

    Baltes, Henry; Göpel, Wolfgang; Hesse, Joachim

    1996-10-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Coverage includes current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Furthermore, the sensor market as well as peripheral aspects such as standards are covered. Each volume is divided into four sections. Sensor Technology, reviews highlights in applied and basic research, Sensor Applications, covers new or improved applications of sensors, Sensor Markets, provides a survey of suppliers and market trends for a particular area, and Sensor Standards, reviews recent legislation and requirements for sensors. With this unique combination of information in each volume, Sensors Update will be of value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  15. Sensors, Update 8

    Science.gov (United States)

    Baltes, Henry; Göpel, Wolfgang; Hesse, Joachim

    2001-02-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Coverage includes current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Each volume is divided into three sections: Sensor Technology reviews highlights in applied and basic research, while Sensor Applications covers new or improved applications of sensors, and Sensor Markets provides a survey of suppliers and market trends for a particular area. With this unique combination of information in each volume, Sensors Update will be invaluable to scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  16. Sensors, Update 12

    Science.gov (United States)

    Baltes, Henry; Fedder, Gary K.; Korvink, Jan G.

    2003-04-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Coverage includes current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Each volume is divided into three sections. Sensor Technology, reviews highlights in applied and basic research, Sensor Applications, covers new or improved applications of sensors, Sensor Markets, provides a survey of suppliers and market trends for a particular area. With this unique combination of information in each volume, Sensors Update will be of value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  17. Sensors, Update 1

    Science.gov (United States)

    Baltes, Henry; Göpel, Wolfgang; Hesse, Joachim

    1996-12-01

    Sensors Update ensures that you stay at the cutting edge of the field. Built upon the series Sensors, it presents an overview of highlights in the field. Treatments include current developments in materials, design, production, and applications of sensors, signal detection and processing, as well as new sensing principles. Furthermore, the sensor market as well as peripheral aspects such as standards are covered. Each volume is divided into four sections. Sensor Technology, reviews highlights in applied and basic research, Sensor Applications, covers new or improved applications of sensors, Sensor Markets, provides an overview of suppliers and market trends for a particular section, and Sensor Standards, reviews recent legislation and requirements for sensors. With this unique combination of information in each volume, Sensors Update will be of value for scientists and engineers in industry and at universities, to sensors developers, distributors, and users.

  18. Updating Choquet Beliefs.

    OpenAIRE

    Jurgen Eichberger; Simon Grant; David Kelsey

    2006-01-01

    We apply Pires’s coherence property between unconditional and conditional preferences that admit a CEU representation. In conjunction with consequentialism (only those outcomes on states which are still possible can matter for conditional preference) this implies that the conditional preference may be obtained from the unconditional preference by taking the Full Bayesian Update of the capacity. Attitudes towards sequential versus simultaneous resolution of uncertainty for a simple bet are ana...
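
    The Full Bayesian Update of a capacity ν given an event E is usually written ν(A|E) = ν(A∩E) / (ν(A∩E) + 1 − ν(A∪E^c)). A sketch on a finite state space (the additive example in the test is our own; when ν is an additive probability the rule reduces to ordinary Bayesian conditioning):

```python
from itertools import combinations

def powerset(states):
    """All subsets of a finite state space, as frozensets."""
    s = list(states)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def full_bayesian_update(nu, states, event):
    """Full (Fagin-Halpern) Bayesian update of a capacity nu given event E:
       nu(A|E) = nu(A & E) / (nu(A & E) + 1 - nu(A | E^c))."""
    e = frozenset(event)
    e_comp = frozenset(states) - e
    updated = {}
    for a in powerset(states):
        num = nu[a & e]
        den = num + 1.0 - nu[a | e_comp]
        updated[a] = num / den if den > 0 else 0.0
    return updated
```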

  19. Annual Pension Fund Update

    CERN Multimedia

    Pension Fund

    2011-01-01

    All members and beneficiaries of the Pension Fund are invited to attend the Annual Pension Fund Update to be held in the CERN Council Chamber on Tuesday 20 September 2011 from 10:00 a.m. to 12:00 noon. Copies of the 2010 Financial Statements are available from departmental secretariats. Coffee and croissants will be served prior to the meeting as of 9:30 a.m.

  20. Journal Update : issue 1

    OpenAIRE

    Malta Medical Journal Club

    2014-01-01

    Contents: Letter from the editors - Thomas Borg Barthet and Dale Brincat; Message for Journal Update - Gilbert Gravino and Gianluca Gonzi; Anemia, an independent predictive factor for amputation and mortality in patients hospitalized for peripheral artery disease - Thomas Borg Barthet; Peripheral blood lymphocyte telomere length as a predictor of response to immunosuppressive therapy in childhood aplastic anaemia - Thomas Borg Barthet; Influenza vaccination of pregnant women and pro...

  1. How Documentalists Update SIMBAD

    Science.gov (United States)

    Buga, M.; Bot, C.; Brouty, M.; Bruneau, C.; Brunet, C.; Cambresy, L.; Eisele, A.; Genova, F.; Lesteven, S.; Loup, C.; Neuville, M.; Oberto, A.; Ochsenbein, F.; Perret, E.; Siebert, A.; Son, E.; Vannier, P.; Vollmer, B.; Vonflie, P.; Wenger, M.; Woelfel, F.

    2015-04-01

    The Strasbourg astronomical Data Center (CDS) was created in 1972 and has had a major role in astronomy for more than forty years. CDS develops a service called SIMBAD that provides basic data, cross-identifications, bibliography, and measurements for astronomical objects outside the solar system. It brings the scientific community added value through content which is updated daily by a team of documentalists working in close collaboration with astronomers and IT specialists. We explain how the CDS staff updates SIMBAD with object citations in the main astronomical journals, as well as with astronomical data and measurements. We also explain how the identification is made between the objects found in the literature and those already existing in SIMBAD. We show the steps followed by the documentalist team to update the database using different tools developed at CDS, like the sky visualizer Aladin, and the large catalogues and survey database VizieR. As a direct result of this teamwork, SIMBAD integrates almost 10,000 bibliographic references per year. The service receives more than 400,000 queries per day.
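
    The cross-identification step, matching a literature object against entries already in the database, ultimately rests on positional coincidence. A toy sketch of a purely positional match (real SIMBAD identification also weighs names, object types, and catalogue provenance; the coordinates and 2-arcsecond radius below are invented):

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation of two (RA, Dec) positions in degrees,
    returned in arcseconds. Haversine form: stable for small angles."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((d2 - d1) / 2.0) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(h))) * 3600.0

def cross_identify(position, catalogue, radius_arcsec=2.0):
    """Names of catalogue entries lying within radius_arcsec of position."""
    ra, dec = position
    return [name for name, (cra, cdec) in catalogue.items()
            if angular_sep_arcsec(ra, dec, cra, cdec) <= radius_arcsec]
```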

  2. Comparing, Optimising and Benchmarking Quantum Control Algorithms in a Unifying Programming Framework

    CERN Document Server

    Machnes, S; Glaser, S J; de Fouquieres, P; Gruslys, A; Schirmer, S; Schulte-Herbrueggen, T

    2010-01-01

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e. piecewise constant control amplitudes, iteratively into an optimised shape. Here, we present the first comparative study of optimal control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and KROTOV-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. --- Moreover we introduce a novel unifying algorithmic framework, DYNAMO (Dynamic Optimisation Platform) designed to provide the quantum-technology community with a convenient MATLAB-based toolset for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and compari...
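
    The "piecewise constant control amplitudes" being optimised can be illustrated by propagating a single qubit through a pulse sequence and scoring it against a target gate, the quantity that GRAPE-type updates differentiate. A toy sketch (x-axis controls only, global phase ignored; this is not the DYNAMO implementation):

```python
import math

def mat_mult(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rx(theta):
    """Single-qubit propagator exp(-i * theta/2 * sigma_x)."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    return [[c, -1j * s], [-1j * s, c]]

def sequence_fidelity(amps, dt, target):
    """Propagate piecewise-constant x-controls; slice k applies
    exp(-i * amps[k] * sigma_x * dt) (hbar = 1). Returns the
    phase-insensitive gate fidelity |Tr(target^dagger U)| / 2."""
    u_total = [[1.0, 0.0], [0.0, 1.0]]
    for amp in amps:
        u_total = mat_mult(rx(2.0 * amp * dt), u_total)
    trace = sum(target[j][i].conjugate() * u_total[j][i]
                for i in range(2) for j in range(2))
    return abs(trace) / 2.0
```

    Four slices of amplitude π/4 with dt = 0.5 accumulate a total rotation angle of π about x, realizing an X gate up to global phase; an optimiser adjusts the slice amplitudes to maximize this fidelity.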

  3. GNF Defense in Depth Update

    International Nuclear Information System (INIS)

    Global Nuclear Fuel (GNF) has designed, fabricated, and placed into operation more than 9 million fuel rods in approximately 135 thousand assemblies. Customer satisfaction has always compelled GNF to reduce fuel rod failures (defined here as fuel rods that breach or leak in service). However, increasing success with and subsequent expectations for economic performance of nuclear reactor plants have raised broader industry emphasis on fuel reliability. In 2005, GNF established its Defense-in-Depth (DID) Program for the purpose of focusing attention on the many aspects of fuel design, fabrication, performance, and utilization that affect fuel reliability as well as on the key methods that govern the utilization of GNF fuel. The Program is structured to address each of the identified in-service fuel failure mechanisms. This paper provides a summary of GNF fuel performance, following previous updates. This paper will discuss recent GNF fuel reliability and channel performance, GNF2 introduction status, and methods. GNF's more recent fuel experience includes approximately 3.8 million GE11/13 (9x9) and GE12/14 (10x10) fuel rods, well over half of which are the GE12/14 design. (Those figures also include roughly 25,000 recently-introduced GNF2 fuel rods.) Reliability, expressed as annual, observed fuel failure rates (i.e., number of rods failed each year divided by the number of opportunities, or fuel rods in service), has improved for each year since 2005. The GNF fuel failure rate for years leading up to 2007 and 2008 has been on the order of 5 to 7 ppm (excluding the corrosion events of 2001-2003), and as of this writing (January 2009) the current in-service failure rate has decreased to around 1.5 ppm. GE14 fuel rod failures have been primarily due to debris-fretting (> 60%), with other failures being duty-related or yet undetermined.
The only failure observed in GNF2 to date was a single, early-life debris failure in a bundle not equipped with GNF's current

  4. SARNET benchmark on QUENCH-11. Final report

    International Nuclear Information System (INIS)

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from the water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments in the frame of the EC-supported LACOMERA program. It was to simulate ceasing pumps in case of a small break LOCA or a station blackout with a late depressurization of the primary system, starting with boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle with a low water injection rate. The test is characterized by an interaction of thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is a comparison between experimental data and analytical results to assess the reliability of the code prediction for different phases of an accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. As a basis of the present work, histories of temperatures, hydrogen production and other important variables were used. Besides, axial profiles at quench initiation and the final time of 7000 s, above all of temperatures, are presented. For most variables a mainstream of

  5. SARNET benchmark on QUENCH-11. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Stefanova, A. [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. for Nuclear Research and Nuclear Energy; Drath, T. [Ruhr-Univ. Bochum (Germany). Energy Systems and Energy Economics; Duspiva, J. [Nuclear Research Inst., Rez (CZ). Dept. of Reactor Technology] (and others)

    2008-03-15

    The QUENCH out-of-pile experiments at Forschungszentrum Karlsruhe (Karlsruhe Research Center) are set up to investigate the hydrogen source term that results from water or steam injection into an uncovered core of a Light-Water Reactor, to examine the behavior of overheated fuel elements under different flooding conditions, and to create a database for model development and improvement of Severe Fuel Damage (SFD) code packages. The boil-off experiment QUENCH-11 was performed on December 8, 2005 as the second of two experiments in the frame of the EC-supported LACOMERA program. It was designed to simulate the loss of the coolant pumps in the case of a small-break LOCA, or a station blackout with late depressurization of the primary system, starting with the boil-down of a test bundle that was partially filled with water. It is the first test to investigate the whole sequence of an anticipated reactor accident from the boil-off phase to delayed reflood of the bundle at a low water injection rate. The test is characterized by a coupling between thermal-hydraulics and material interactions that is even stronger than in previous QUENCH tests. It was proposed by INRNE Sofia (Bulgarian Academy of Sciences) and defined together with Forschungszentrum Karlsruhe. After the test, QUENCH-11 was chosen as a SARNET code benchmark exercise. Its task is to compare experimental data with analytical results in order to assess the reliability of code predictions for the different phases of the accident and the experiment. The SFD codes used were ASTEC, ATHLET-CD, ICARE-CATHARE, MELCOR, RATEG/SVECHA, RELAP/SCDAPSIM, and SCDAP/RELAP5. The INRNE took responsibility as benchmark coordinator to compare the code results with the experimental data. The present work is based on histories of temperatures, hydrogen production, and other important variables. In addition, axial profiles (above all of temperatures) at quench initiation and at the final time of 7000 s are presented. For most variables a mainstream of
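The hydrogen source term described above comes from the steam-Zircaloy reaction (Zr + 2 H2O -> ZrO2 + 2 H2). A minimal sketch of the usual parabolic-kinetics treatment follows; the rate constants and the stoichiometric shortcut are illustrative placeholders, not the QUENCH correlations used by the SFD codes.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def oxide_growth(temp_K, dt, delta0, A=1.0e-6, Ea=1.5e5):
    """Advance the oxide-layer thickness delta (m) over dt seconds.

    Parabolic rate law: d(delta^2)/dt = K(T), with an Arrhenius
    rate constant K = A*exp(-Ea/(R*T)). A and Ea are placeholders.
    """
    K = A * math.exp(-Ea / (R * temp_K))
    return math.sqrt(delta0 ** 2 + K * dt)

def hydrogen_moles(delta, area, rho_zr=6500.0, M_zr=0.0912):
    """Moles of H2 released for an oxide layer of thickness delta (m)
    grown over a cladding area (m^2).

    Each mole of Zr oxidized releases 2 moles of H2; the Zr consumed
    is crudely approximated by the oxide thickness in this sketch.
    """
    mol_zr = rho_zr * delta * area / M_zr
    return 2.0 * mol_zr

# Boil-off then delayed reflood: as the bundle heats up, the growth
# rate escalates sharply with temperature.
delta = 1.0e-6
for temp in (1200.0, 1500.0, 1800.0):
    delta = oxide_growth(temp, 100.0, delta)
print(hydrogen_moles(delta, area=1.0))
```

The strong temperature dependence of K is what couples the thermal-hydraulics to the hydrogen production: a hotter bundle during reflood oxidizes, and releases hydrogen, much faster.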

  6. HELIOS-2: Benchmarking against hexagonal lattices

    International Nuclear Information System (INIS)

    The critical experiments performed at the Hungarian ZR-6 reactor and experiments performed at the General Purpose P critical facility at RRC 'Kurchatov Institute' are used to benchmark HELIOS-2, as part of its ongoing validation and verification activities. The comparisons presented in this paper are based on ZR6-WWER-EXP-001, Vol. 1 (2007), LEU-COMP-THERM-015, Vol. 4 (2005), and LEU-COMP-THERM-061, Vol. 4 (2002). For ZR-6, single-cell, macro-cell, and 2D calculations are made for selected experiments, both regular and perturbed. In the 2D calculations, the radial leakage is treated by including the reflector in the calculations, while the axial leakage is represented by a measured axial buckling. Comparisons of keff and of RMS reaction-rate differences are presented. For the General Purpose P critical facility, the entire experiment is modelled in 2D. Comparisons of keff and fission rates are presented, and the effect of the axial buckling on keff is investigated. (Author)
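The axial-buckling treatment of leakage mentioned above can be illustrated with a one-group sketch: the 2D model handles radial leakage explicitly via the reflector, and axial leakage enters only through the measured buckling Bz^2. The numbers below are illustrative, not HELIOS-2 values.

```python
def keff_with_axial_buckling(k_inf, migration_area_cm2, bz2_per_cm2):
    """One-group leakage sketch: k_eff = k_inf / (1 + M^2 * Bz^2).

    migration_area_cm2 is M^2 (cm^2); bz2_per_cm2 is the measured
    axial buckling Bz^2 (cm^-2). Increasing Bz^2 means more axial
    leakage and a lower k_eff, which is the sensitivity the paper
    investigates for the General Purpose P facility.
    """
    return k_inf / (1.0 + migration_area_cm2 * bz2_per_cm2)

# Illustrative sensitivity scan over the measured buckling.
for bz2 in (0.0, 4.0e-4, 8.0e-4):
    print(bz2, keff_with_axial_buckling(1.05, 60.0, bz2))
```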

  7. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run the same benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small- to medium-sized data collections, but show weaknesses when used for collections of 100 MB or larger. Level two search engines are recommended for data collections up to and beyond 100 MB.
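A benchmarking toolkit of this kind boils down to timing indexing once and timing each query over repeated runs. A minimal sketch follows; the index()/search() interface and the ToyEngine are hypothetical stand-ins for the engines actually tested.

```python
import time
import statistics

class ToyEngine:
    """Stand-in engine exposing the hypothetical index()/search()
    interface assumed by the harness; a real engine would be wrapped
    behind the same two methods."""
    def index(self, docs):
        # Build a simple inverted index: word -> set of doc ids.
        self.inv = {}
        for doc_id, text in docs.items():
            for word in text.lower().split():
                self.inv.setdefault(word, set()).add(doc_id)
    def search(self, query):
        # AND semantics: docs containing every query word.
        return set.intersection(*(self.inv.get(w, set())
                                  for w in query.lower().split()))

def benchmark(engine, corpus, queries, runs=3):
    """Time indexing once, then each query over several runs;
    report mean per-query wall time in seconds."""
    t0 = time.perf_counter()
    engine.index(corpus)
    index_time = time.perf_counter() - t0
    query_times = {}
    for q in queries:
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            engine.search(q)
            samples.append(time.perf_counter() - t0)
        query_times[q] = statistics.mean(samples)
    return index_time, query_times
```

Scaling the corpus size in such a harness is how the two performance levels reported above (below and above roughly 100 MB) would be distinguished.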

  8. Clinical Benchmark for Gastric Stapling Procedures.

    Science.gov (United States)

    Graves

    1994-08-01

    To help answer the call to cut costs of surgical care, hospitals and physicians have joined to compare methods of care for the more common Diagnosis Related Group (DRG) diagnoses and so form a benchmark. Since many bariatric surgeons are the only ones performing this surgery in their primary hospitals, they do not have two or more surgical routines for comparison. This presentation compares data on the preoperative work-up, operating-room methods, and postoperative care used by 29 members of the American Society for Bariatric Surgery (ASBS). Both academic and private surgeons and hospitals were represented. To target areas for possible savings, the hospital bills of 16 patients without complications were compared. The synthesis of this information revealed significant differences in the extent and cost of the preoperative work-up, antibiotic coverage, other postoperative care, and length of stay. These differences are examined under the assumption that patient outcome was the same. PMID:10742779

  9. Hybrid BN-600 core benchmark analyses

    International Nuclear Information System (INIS)

    The cross-section library KAFAX used for the BN-600 core benchmark calculations was based on the nuclear data files ENDF/B-VI and JEF-2.2. Effective cross sections were generated with a homogeneous cell model, collapsing from 80 to 9 groups. Core neutron flux calculations were performed with a coarse-mesh nodal diffusion approximation (DIF3D code), a nodal simplified P2 transport calculation (SOLTRAN code), and a discrete-SN approximation (TWODAT code). First-order perturbation theory was used to calculate reactivity parameters. Burnup calculations were performed with the three-dimensional code REBUS-3, using the 9-group cross-section library from the basic neutronics calculations. Results obtained include: the multiplication factor k-eff at the beginning and at the end of the cycle, reactivity burnup loss, the fuel Doppler coefficient, and the sodium density coefficient. Results of heterogeneity calculations include k-eff, control rod worth and sodium density coefficient
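The 80-to-9 group collapse mentioned above is a flux-weighted average over each coarse group, preserving reaction rates. A minimal sketch with illustrative data (not the KAFAX library):

```python
def collapse_xs(sigma_fine, flux_fine, coarse_ranges):
    """Flux-weighted collapse of fine-group cross sections.

    For each coarse group G spanning fine groups [lo, hi):
        sigma_G = sum_{g in G} sigma_g * phi_g / sum_{g in G} phi_g
    so that the coarse-group reaction rate sigma_G * phi_G matches
    the summed fine-group rates. The cell-averaged spectrum serves
    as the weighting flux, as in the homogeneous cell model above.
    """
    coarse = []
    for lo, hi in coarse_ranges:
        phi = sum(flux_fine[lo:hi])
        rate = sum(s * p for s, p in zip(sigma_fine[lo:hi],
                                         flux_fine[lo:hi]))
        coarse.append(rate / phi)
    return coarse

# Toy example: collapse 4 fine groups into 2 coarse groups.
print(collapse_xs([2.0, 4.0, 1.0, 3.0],
                  [1.0, 3.0, 2.0, 2.0],
                  [(0, 2), (2, 4)]))
```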

  10. Benchmark Specification for HTGR Fuel Element Depletion

    International Nuclear Information System (INIS)

    explicitly represent the dynamics of neutron slowing down in a heterogeneous environment with randomised grain distributions, but traditional tracking simulations can be extremely slow, and the large number of grains in a fuel element may often represent an extreme burden on computational resources. A number of simplifying assumptions have been developed to reduce the computational effort. Multi-group (MG) methods, on the other hand, require special treatment of DH fuels in order to properly capture resonance effects, and generally cannot explicitly represent a random distribution of grains due to the excessive computational burden resulting from the spatial grain distribution. The effect of such approximations may be important and has the potential to misrepresent the spectrum within a fuel grain. Lattice calculations typically rely on point depletion methods, which track the isotopic inventory of the depleted fuel assuming a single localised neutron flux. This flux is generally determined using either a CE or MG transport solver. Hence, in application to DH fuels, the primary factor influencing the accuracy of a depletion calculation will be the accuracy of the local flux and cross-sections calculated within the transport solution. The current lack of well-qualified experimental measurements for spent HTGR fuel elements limits the validation of advanced DH depletion methods. Because of this shortage of data, this benchmark has been developed as the first and simplest phase in a planned series of increasingly complex code-to-code benchmarks. The intent of this benchmark is to encourage submission of a wide range of computational results for depletion calculations in a set of basic fuel cell models. Comparison of results using independent methods and data should provide insight into potential limitations in various modelling approximations. 
The benchmark seeks to provide the simplest possible models, in
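The point-depletion approach described in the abstract, a nuclide inventory burned under a single localised flux, has a simple analytic form for one nuclide over a constant-flux step. The sketch below uses illustrative values; real depletion codes solve the full coupled Bateman system for many nuclides at once.

```python
import math

def point_deplete(n0, sigma_a_barns, phi, half_life_s, dt_s):
    """One-nuclide point depletion over a step of constant flux:

        dN/dt = -(lambda + sigma_a * phi) * N
        N(dt) = N0 * exp(-(lambda + sigma_a * phi) * dt)

    n0: initial number density, sigma_a_barns: absorption cross
    section (barns), phi: localised flux (n/cm^2/s), half_life_s:
    decay half-life (use float('inf') for a stable nuclide).
    """
    sigma_cm2 = sigma_a_barns * 1.0e-24   # barns -> cm^2
    lam = math.log(2.0) / half_life_s     # decay constant (1/s)
    return n0 * math.exp(-(lam + sigma_cm2 * phi) * dt_s)
```

In a DH fuel element, the accuracy of this step is dominated by how well phi and sigma_a are computed inside the grains by the transport solver, which is exactly the sensitivity the benchmark is designed to expose.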

  11. Shielding integral benchmark archive and database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L.; Grove, R.E. [Radiation Safety Information Computational Center RSICC, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6171 (United States); Kodeli, I. [Josef Stefan Inst., Jamova 39, 1000 Ljubljana (Slovenia); Gulliford, J.; Sartori, E. [OECD NEA Data Bank, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. Nuclear cross sections also play an important role, as they are necessary inputs to the computational analysis. (authors)

  12. Shielding integral benchmark archive and database (SINBAD)

    International Nuclear Information System (INIS)

    The shielding integral benchmark archive and database (SINBAD) collection of experiment descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. Nuclear cross sections also play an important role, as they are necessary inputs to the computational analysis. (authors)

  13. Benchmarking triple stores with biological data

    CERN Document Server

    Mironov, Vladimir; Blonde, Ward; Antezana, Erick; Lindi, Bjorn; Kuiper, Martin

    2010-01-01

    We have compared the performance of five non-commercial triple stores: Virtuoso Open-Source, Jena SDB, Jena TDB, SWIFT-OWLIM, and 4Store. We examined three performance aspects: query execution time, scalability, and run-to-run reproducibility. The queries we chose addressed different ontological or biological topics, and we obtained evidence that individual store performance was quite query-specific. We identified three groups of queries displaying similar behavior across the different stores: 1) relatively short response time, 2) moderate response time, and 3) relatively long response time. OWLIM proved to be the winner in the first group, 4Store in the second, and Virtuoso in the third. Our benchmarking showed Virtuoso to be a very balanced performer - its response time was better than average for all 24 queries; it showed very good scalability and reasonable run-to-run reproducibility.
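Benchmarking of this kind reduces to per-query statistics over repeated runs. A sketch follows that computes mean response time and run-to-run reproducibility (as a coefficient of variation), then applies a three-way grouping like the one found in the study; the thresholds are arbitrary illustrations, not the paper's values.

```python
import statistics

def summarize(run_times):
    """Summarize benchmark timings.

    run_times maps a query label to a list of per-run wall times
    (seconds). For each query, report the mean response time, the
    coefficient of variation (stdev/mean, a simple reproducibility
    measure: lower is more reproducible), and a coarse group.
    """
    summary = {}
    for query, times in run_times.items():
        mean = statistics.mean(times)
        cv = statistics.stdev(times) / mean if len(times) > 1 else 0.0
        if mean < 0.1:          # illustrative thresholds
            group = "short"
        elif mean < 1.0:
            group = "moderate"
        else:
            group = "long"
        summary[query] = {"mean_s": mean, "cv": cv, "group": group}
    return summary
```

A "balanced performer" in the paper's sense is one whose mean times sit below the cross-store average in every group, with small coefficients of variation throughout.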

  14. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low-energy nuclear science. To achieve this, its driver linac needs to deliver a very high beam current (up to 400 kW of beam power), a requirement that makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.
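Vacuum calculations of this kind rest on the steady-state gas balance P = Q / S_eff, with the effective pumping speed limited by the conductance of the duct connecting pump and chamber. A minimal sketch with illustrative quantities (not FRIB design values):

```python
def chamber_pressure(outgassing_mbar_l_s, pump_speed_l_s, conductance_l_s):
    """Steady-state pressure sketch for a single chamber.

    P = Q / S_eff, where the effective pumping speed seen by the
    chamber combines the pump speed S and duct conductance C in
    series: 1/S_eff = 1/S + 1/C. All quantities are illustrative.
    """
    s_eff = 1.0 / (1.0 / pump_speed_l_s + 1.0 / conductance_l_s)
    return outgassing_mbar_l_s / s_eff

# A restrictive duct halves the effective speed of a matched pump.
print(chamber_pressure(1.0e-6, 100.0, 100.0))
```

Chaining such balances over many connected sections (or solving them with a Monte Carlo molecular-flow code) is what a full linac vacuum model amounts to; the benchmarking step compares such a model against pressures measured at an existing facility.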

  15. Development of an MPI benchmark program library

    International Nuclear Information System (INIS)

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale, high-performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using the MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point and collective communications, and of major communication patterns that are often found in application programs. In this report, the description of MBL and the performance analysis of MPI/SX measured on the SX-4 are presented. (author)
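Point-to-point measurements like these are commonly summarised by fitting the linear latency/bandwidth model t(m) = alpha + m/beta to ping-pong timings over a range of message sizes. The sketch below does the least-squares fit in pure Python on already-measured data; whether MBL itself reports these fitted parameters is an assumption, the model is simply the standard way to condense such measurements.

```python
def fit_latency_bandwidth(message_sizes, times):
    """Fit t(m) = alpha + m / beta by ordinary least squares.

    message_sizes: message lengths in bytes; times: one-way transfer
    times in seconds (half the ping-pong round trip). Returns
    (alpha, beta) = (startup latency in s, bandwidth in B/s).
    """
    n = len(message_sizes)
    mx = sum(message_sizes) / n
    my = sum(times) / n
    sxx = sum((m - mx) ** 2 for m in message_sizes)
    sxy = sum((m - mx) * (t - my)
              for m, t in zip(message_sizes, times))
    slope = sxy / sxx             # seconds per byte = 1 / bandwidth
    alpha = my - slope * mx       # extrapolated zero-byte latency
    return alpha, 1.0 / slope
```

Small messages are latency-dominated (t ~ alpha) and large messages bandwidth-dominated (t ~ m/beta), which is why benchmark libraries sweep message sizes over several orders of magnitude.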

  16. Development of Benchmarks for Operating Costs and Resources Consumption to be Used in Healthcare Building Sustainability Assessment Methods

    Directory of Open Access Journals (Sweden)

    Maria de Fátima Castro

    2015-09-01

    Since the last decade of the twentieth century, the healthcare industry has been paying attention to the environmental impact of its buildings, and therefore new regulations, policy goals, and Healthcare Building Sustainability Assessment (HBSA) methods are being developed and implemented. At present, healthcare is one of the most regulated industries and it is also one of the largest consumers of energy per net floor area. To assess the sustainability of healthcare buildings it is necessary to establish a set of benchmarks related to their life-cycle performance. These are essential both to rate the sustainability of a project and to support designers and other stakeholders in the process of designing and operating a sustainable building, by allowing a comparison to be made between a project and the conventional and best market practices. This research is focused on the methodology used to set the benchmarks for resources consumption, waste production, operation costs and potential environmental impacts related to the operational phase of healthcare buildings. It aims at contributing to the reduction of the subjectivity found in the definition of the benchmarks used in Building Sustainability Assessment (BSA) methods, and it is applied in the Portuguese context. These benchmarks will be used in the development of a Portuguese HBSA method.
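One common way to reduce subjectivity in benchmark setting is to derive the reference levels from the statistical distribution of a building sample rather than from expert opinion. The sketch below uses quartiles of annual consumption as "conventional" and "best" practice levels; the quartile choice is an illustration of the idea, not the paper's actual method.

```python
import statistics

def derive_benchmarks(consumptions):
    """Derive practice benchmarks from a sample of buildings'
    annual consumption (lower is better).

    Illustrative convention: 'conventional practice' = the median,
    'best practice' = the first quartile (the level the best 25%
    of the sample already achieves).
    """
    q1, q2, q3 = statistics.quantiles(consumptions, n=4)
    return {"best_practice": q1, "conventional_practice": q2}

# Toy sample: annual energy use of four hospitals (kWh/m2).
print(derive_benchmarks([100.0, 200.0, 300.0, 400.0]))
```

Because the levels come from the observed distribution, they can be re-derived as the building stock improves, keeping the rating scale anchored to actual market practice.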

  17. On the feasibility of using emergy analysis as a source of benchmarking criteria through data envelopment analysis: A case study for wind energy

    International Nuclear Information System (INIS)

    The definition of criteria for the benchmarking of similar entities is often a critical issue in analytical studies because of the multiplicity of criteria susceptible to be taken into account. The issue can be aggravated by the need to handle multiple data for multiple facilities. This article presents a methodological framework, named the Em + DEA method, which combines emergy analysis with Data Envelopment Analysis (DEA) for the ecocentric benchmarking of multiple resembling entities (i.e., multiple decision-making units or DMUs). Provided that the life-cycle inventories of these DMUs are available, an emergy analysis is performed through the computation of seven different indicators, which refer to the use of fossil, metal, mineral, nuclear, renewable energy, water and land resources. These independent emergy values are then implemented as inputs for DEA computation, thus providing operational emergy-based efficiency scores and, for the inefficient DMUs, target emergy flows (i.e., feasible emergy benchmarks that would turn inefficient DMUs into efficient ones). The use of the Em + DEA method is exemplified through a case study of wind energy farms. The potential use of CED (cumulative energy demand) and CExD (cumulative exergy demand) indicators as alternative benchmarking criteria to emergy is discussed. The combined use of emergy analysis with DEA is proven to be a valid methodological approach to provide benchmarks oriented towards the optimisation of the life-cycle performance of a set of multiple similar facilities, not being limited to the operational traits of the assessed units. - Highlights: • Combined emergy and DEA method to benchmark multiple resembling entities. • Life-cycle inventory, emergy analysis and DEA as key steps of the Em + DEA method. • Valid ecocentric benchmarking approach proven through a case study of wind farms. • Comparison with life-cycle energy-based benchmarking criteria (CED/CExD + DEA). • Analysts and decision and policy
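Full DEA requires a linear-programming solver. As a self-contained illustration of the same input-oriented idea, the sketch below scores units over the free-disposal hull (FDH), a non-convex relative of DEA that needs no LP: an inefficient unit's score is the largest uniform shrink of its inputs still dominated by an observed peer. This is a simplification of the Em + DEA computation, not the method itself.

```python
def fdh_input_efficiency(dmus, target):
    """Input-oriented FDH efficiency of one DMU.

    dmus maps a name to (inputs, output); in the Em + DEA setting
    the inputs would be the seven emergy flows (fossil, metal,
    mineral, nuclear, renewable, water, land) and the output e.g.
    delivered electricity. Among peers producing at least as much
    output, find the one that dominates the target with the
    smallest uniform input scaling theta (theta = 1 is efficient).
    """
    x0, y0 = dmus[target]
    best = 1.0  # the target always dominates itself at theta = 1
    for _name, (x, y) in dmus.items():
        if y >= y0:
            theta = max(xi / x0i for xi, x0i in zip(x, x0))
            best = min(best, theta)
    return best
```

An inefficient unit's target emergy flows, in the spirit of the paper, are then theta times its observed inputs: a feasible benchmark because some real peer already achieves it.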

  18. Benchmarking homogenization algorithms for monthly data

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
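Two of the performance metrics named above, the centered root mean square error and the error in linear trend estimates, can be sketched directly in a few lines; the series below are illustrative, not benchmark data.

```python
import math

def centered_rmse(estimate, truth):
    """Centered RMSE: RMSE after removing each series' own mean,
    so a constant offset (an unknown absolute level) is not
    penalised, only the shape of the anomalies."""
    me = sum(estimate) / len(estimate)
    mt = sum(truth) / len(truth)
    sq = [((e - me) - (t - mt)) ** 2
          for e, t in zip(estimate, truth)]
    return math.sqrt(sum(sq) / len(sq))

def linear_trend(series):
    """Least-squares slope per time step; the trend-error metric is
    |linear_trend(homogenized) - linear_trend(true series)|."""
    n = len(series)
    tm = (n - 1) / 2.0
    ym = sum(series) / n
    num = sum((i - tm) * (y - ym) for i, y in enumerate(series))
    den = sum((i - tm) ** 2 for i in range(n))
    return num / den
```

Computing both on the individual station series and on the network-average regional series, as the study does, exposes algorithms that fix local breaks while distorting the large-scale signal.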

  19. Thermal fatigue benchmark final - research report

    International Nuclear Information System (INIS)

    DNV (Det Norske Veritas) has analysed a 3D mock-up loaded with a variable temperature field. The load is applied to the inside of a pipe and deviates from the axisymmetric case. The calculations were performed blind in an international benchmark project; DNV's contribution was funded by SKI. The calculations show the importance of taking the non-axisymmetry into account: an axisymmetric analysis would underestimate the stresses in the pipe. The temperature field in the mock-up was measured at several locations in the pre-test condition. It turned out to be difficult to capture the measured field by applying only convection and adjusting heat transfer coefficients; the adjustment of the heat transfer coefficient proved to be a major problem, and no standard estimate of these parameters was capable of satisfactorily capturing the temperature fields. This highlights the complexity of this kind of problem. It was reported by CEA that modelling of radiation was required to resolve the stresses accurately. The time to crack initiation was computed, as well as crack propagation rates. The computed crack initiation time is significantly longer than the crack propagation time. All DNV results in terms of maximum stress range, computed design life and crack propagation time are comparable to those obtained by other contributors to the benchmark project. The DNV-computed maximum stress range is Δσ = 715 MPa (von Mises); contributions by other members range from 507 to 805 MPa. The DNV-computed fatigue life (from two mean curves, ASME and CEA) ranges from 100,000 to 1,000,000 cycles, depending on different assumptions
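Fatigue-life estimates of the kind compared above map a computed stress range to allowable cycles through a design fatigue curve. A Basquin-type power-law sketch follows; the constants C and m are placeholders for illustration, not the ASME or CEA mean-curve parameters used in the benchmark.

```python
def cycles_to_initiation(stress_range_mpa, C=1.0e12, m=3.0):
    """Basquin-type fatigue-curve sketch:

        N = C * (delta_sigma)^(-m)

    N is the number of cycles to crack initiation for a stress
    range delta_sigma (MPa). C and m are illustrative placeholders;
    a real assessment reads N off a code mean or design curve.
    """
    return C * stress_range_mpa ** (-m)

# The spread in computed stress range (507 to 805 MPa among the
# contributors) translates into a wide spread in predicted life,
# since N scales with a high inverse power of the stress range.
for ds in (507.0, 715.0, 805.0):
    print(ds, cycles_to_initiation(ds))
```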

  20. Lessons learned from benchmark orthopaedic trials.

    Science.gov (United States)

    Swiontkowski, Marc F; Agel, Julie

    2012-07-18

    Benchmark trials in orthopaedics are designed to address a question of substantial interest to clinicians and patients. They are also designed to have prospective data collection, an adequate sample size, an appropriate duration of follow-up based on the injury or treatment under study, blinded adjudication of the outcome variables, appropriate statistical analyses, and widespread and effective dissemination of the information learned in the trial. There are multiple lessons to be gleaned from these trials: (1) Identifying an engaging and relevant clinical question will make it easier to identify centers that are willing to participate. (2) Individual site leadership, both of the overall project and at the individual site, is critical to the success of any trial. (3) Not every trial needs to have a randomized design; observational trials can provide data that will impact clinical care. (4) Patients should understand the long-term goals of the project when they are enrolled so that they have a sense of the importance of their role in the study. (5) Follow-up rates that are >90% are possible for orthopaedic trials, but effort and money are required to achieve this. (6) Patients who do not agree to be randomized should be enrolled as subjects in a parallel observational design if it is available. (7) Blinded adjudication of the outcome variables is recommended whenever feasible. (8) Partnership with the academic community is mandatory for the success of industry-funded, phase-3 United States Food and Drug Administration trials. (9) Intention-to-treat analysis and as-treated analysis should be reported. Benchmark orthopaedic trials can and will change clinical practice, but detailed planning must occur to ensure that the results are believable and relevant to the orthopaedic community. 
These trials are time-consuming and expensive, but with the use of careful initial planning and continued oversight during the trial, Level-I evidence will be obtained and will be useful